Detailed Description
To describe the technical content, objects, and effects of the present invention in detail, the following description is given with reference to the embodiments in conjunction with the accompanying drawings.
Referring to FIG. 1, a method for analyzing an ultra-wide angle fundus image comprises the steps of:
S1, cutting the fundus ultra-wide angle image according to different fundus tissues to obtain a region image corresponding to each fundus tissue;
S2, carrying out lesion analysis on the region image corresponding to each fundus tissue to obtain a regional analysis result for each fundus tissue; and
S3, integrating the regional analysis results of the fundus tissues to obtain and output a final analysis result.
From the above description, the beneficial effects of the invention are as follows: the fundus ultra-wide angle image is cut according to different fundus tissues, each cut image is analyzed separately to obtain the condition of a specific fundus tissue, namely a regional analysis result, and the different fundus tissue conditions are integrated into a final analysis result. Cutting the image before analysis reduces the size of each input image, and multiple cut images can be analyzed at the same time, so the analysis speed is improved. Meanwhile, the conditions of different fundus tissues can be acquired and jointly analyzed as needed, yielding both the condition of each fundus tissue and an analysis result combining those conditions, and providing a reference of higher clinical practicability.
Further, before S1, the method further includes:
training an AI model to automatically mark different fundus tissues;
S1 specifically comprises the following steps:
S11, inputting the fundus ultra-wide angle image into the AI model to obtain a fundus ultra-wide angle image with marks for the different fundus tissues;
S12, judging whether the position differences between the marks exceed a threshold; if so, returning to S11; otherwise, cutting the fundus ultra-wide angle image according to the marks.
According to the description, an AI model is trained to automatically mark the different fundus tissues, and the marking result is checked against a preset threshold; if the corresponding condition is not met, the marking is performed again. This ensures the accuracy of the AI model's marks while improving marking efficiency, and because the image is cut according to the marks, the integrity of each fundus tissue in the cut images is ensured.
Further, the fundus tissues include the optic disc, the macula, and the mid-peripheral and peripheral portions of the retina;
the step of obtaining the region image corresponding to each fundus tissue in the step S1 specifically includes:
obtaining a first region image including the optic disc, a second region image including the macula, and a third region image including the mid-peripheral and peripheral portions of the retina;
classifying the first region image, the second region image and the third region image according to lesions by using an image classification model to obtain a lesion classification result of each region image;
performing semantic segmentation on the first region image, the second region image and the third region image by using an image semantic segmentation model to obtain fundus tissue shapes in each region image;
and obtaining a region analysis result of each region image according to the lesion classification result and the fundus tissue shape.
From the above description, the integrity of the optic disc, the macula, and the mid-peripheral and peripheral portions of the retina in the cut images is ensured, which facilitates subsequent analysis. The images are classified by an image classification model and segmented by a semantic segmentation model, so the fundus tissue shape is obtained along with the lesion probability; this facilitates the subsequent generation of a highly readable report and provides a more detailed reference for physicians' diagnoses.
Further, the step S3 specifically includes:
and inputting the lesion classification result and the fundus tissue shape of each regional image into a preset logic model to obtain a final analysis result and outputting the final analysis result.
As can be seen from the above description, the lesion classification result and the fundus tissue shape of each region image are input into a preset logic model, so the analysis results of the separate images can be analyzed in combination: the local analyses remain accurate while being integrated into an overall analysis result. The detailed analysis process is visible in the logic model, and this transparent process provides additional reference data for physicians.
Further, the step S12 specifically comprises:
judging whether the position of the macula mark is on the horizontal left or right side of the position of the optic disc mark, and if not, returning to S11;
judging whether the distance between the position of the macula mark and the position of the optic disc mark exceeds ten times the diameter of the optic disc mark, and if so, returning to S11;
judging whether the mark of the peripheral portion of the retina is outside the equatorial mark, and if not, returning to S11; and
if the position of the macula mark is on the horizontal left or right side of the position of the optic disc mark, the distance between the macula mark and the optic disc mark is less than ten times the diameter of the optic disc mark, and the mark of the peripheral portion of the retina is outside the equatorial mark, cutting the fundus ultra-wide angle image according to the marks.
From the above description, it can be seen that specific criteria are set to verify the AI model's marking result for the fundus ultra-wide angle image; if the result does not meet the thresholds, the marking is performed again, so obviously erroneous marking results are eliminated and the accuracy of the marking results is improved.
Referring to FIG. 2, a terminal for analyzing an ultra-wide angle fundus image includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer program:
S1, cutting the fundus ultra-wide angle image according to different fundus tissues to obtain a region image corresponding to each fundus tissue;
S2, carrying out lesion analysis on the region image corresponding to each fundus tissue to obtain a regional analysis result for each fundus tissue; and
S3, integrating the regional analysis results of the fundus tissues to obtain and output a final analysis result.
The invention has the beneficial effects that: the fundus ultra-wide angle image is cut according to different fundus tissues, each cut image is analyzed separately to obtain the condition of a specific fundus tissue, namely a regional analysis result, and the different fundus tissue conditions are integrated into a final analysis result. Cutting the image before analysis reduces the size of each input image, and multiple cut images can be analyzed at the same time, so the analysis speed is improved. Meanwhile, the conditions of different fundus tissues can be acquired and jointly analyzed as needed, yielding both the condition of each fundus tissue and an analysis result combining those conditions, and providing a reference of higher clinical practicability.
Further, before S1, the method further includes:
training an AI model to automatically mark different fundus tissues;
S1 specifically comprises the following steps:
S11, inputting the fundus ultra-wide angle image into the AI model to obtain a fundus ultra-wide angle image with marks for the different fundus tissues;
S12, judging whether the position differences between the marks exceed a threshold; if so, returning to S11; otherwise, cutting the fundus ultra-wide angle image according to the marks.
According to the description, an AI model is trained to automatically mark the different fundus tissues, and the marking result is checked against a preset threshold; if the corresponding condition is not met, the marking is performed again. This ensures the accuracy of the AI model's marks while improving marking efficiency, and because the image is cut according to the marks, the integrity of each fundus tissue in the cut images is ensured.
Further, the fundus tissues include the optic disc, the macula, and the mid-peripheral and peripheral portions of the retina;
the step of obtaining the region image corresponding to each fundus tissue in the step S1 specifically includes:
obtaining a first region image including the optic disc, a second region image including the macula, and a third region image including the mid-peripheral and peripheral portions of the retina;
classifying the first region image, the second region image and the third region image according to lesions by using an image classification model to obtain a lesion classification result of each region image;
performing semantic segmentation on the first region image, the second region image and the third region image by using an image semantic segmentation model to obtain fundus tissue shapes in each region image;
and obtaining a region analysis result of each region image according to the lesion classification result and the fundus tissue shape.
From the above description, the integrity of the optic disc, the macula, and the mid-peripheral and peripheral portions of the retina in the cut images is ensured, which facilitates subsequent analysis. The images are classified by an image classification model and segmented by a semantic segmentation model, so the fundus tissue shape is obtained along with the lesion probability; this facilitates the subsequent generation of a highly readable report and provides a more detailed reference for physicians' diagnoses.
Further, the step S3 specifically includes:
and inputting the lesion classification result and the fundus tissue shape of each regional image into a preset logic model to obtain a final analysis result and outputting the final analysis result.
As can be seen from the above description, the lesion classification result and the fundus tissue shape of each region image are input into a preset logic model, so the analysis results of the separate images can be analyzed in combination: the local analyses remain accurate while being integrated into an overall analysis result. The detailed analysis process is visible in the logic model, and this transparent process provides additional reference data for physicians.
Further, the step S12 specifically comprises:
judging whether the position of the macula mark is on the horizontal left or right side of the position of the optic disc mark, and if not, returning to S11;
judging whether the distance between the position of the macula mark and the position of the optic disc mark exceeds ten times the diameter of the optic disc mark, and if so, returning to S11;
judging whether the mark of the peripheral portion of the retina is outside the equatorial mark, and if not, returning to S11; and
if the position of the macula mark is on the horizontal left or right side of the position of the optic disc mark, the distance between the macula mark and the optic disc mark is less than ten times the diameter of the optic disc mark, and the mark of the peripheral portion of the retina is outside the equatorial mark, cutting the fundus ultra-wide angle image according to the marks.
From the above description, it can be seen that specific criteria are set to verify the AI model's marking result for the fundus ultra-wide angle image; if the result does not meet the thresholds, the marking is performed again, so obviously erroneous marking results are eliminated and the accuracy of the marking results is improved.
Referring to FIG. 1, a first embodiment of the present invention is as follows:
A fundus ultra-wide angle image analysis method specifically comprises the following steps:
training an AI model to automatically mark different fundus tissues;
In an alternative implementation, marking software (such as LabelMe, or independently developed marking software) is used to trace the boundaries of regions such as eyelashes, eyelids, and the instrument frame in a large number of fundus ultra-wide angle images; the mask images generated by tracing are input into an AI model (which may be an image semantic segmentation model such as FCN (Fully Convolutional Network), U-Net, or U-Net++) for training. The trained model can then automatically mark regions such as the eyelashes, eyelids, and instrument frame, and the marked instrument frame can be cut away directly;
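As a minimal sketch of this cutting step, assuming the trained model outputs a binary artifact mask (True for eyelash, eyelid, and instrument-frame pixels; the function and variable names are illustrative, not from the original), the image can be cropped to the bounding box of the remaining fundus pixels:

```python
import numpy as np

def crop_to_fundus(image, artifact_mask):
    """Crop `image` to the bounding box of pixels NOT covered by the
    artifact mask, removing the marked instrument frame at the borders."""
    keep = ~artifact_mask                # True where the fundus is visible
    rows = np.any(keep, axis=1)          # rows containing fundus pixels
    cols = np.any(keep, axis=0)          # columns containing fundus pixels
    r0, r1 = np.where(rows)[0][[0, -1]]  # first/last fundus row
    c0, c1 = np.where(cols)[0][[0, -1]]  # first/last fundus column
    return image[r0:r1 + 1, c0:c1 + 1]
```

Note that cropping only removes border artifacts such as the instrument frame; interior artifacts (eyelashes overlapping the fundus) would need masking rather than cropping.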
In an alternative embodiment, the optic disc, the macula, and the vortex veins are marked by marking software, and the images generated by marking are input into an AI model (an image segmentation model such as FCN or U-Net) for training; marking ranges for other regions are set based on the position of the optic disc, the macula, or the vortex veins, and the AI model marks the different fundus tissues according to the training result and the set positions of the other regions relative to the optic disc, the macula, or the vortex veins;
for example, the distance between the macula and the optic disc is determined from their marked positions, and a circle centered on the macula with a radius of two to four times that distance is drawn; this circle is the posterior pole. The equatorial portion is determined according to the positions of the vortex veins, and the region outside the equatorial portion is the peripheral portion of the retina;
specifically, the marking range of the macula is determined through the fovea of the macula;
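The posterior-pole construction above can be sketched as follows; the coordinates are hypothetical pixel positions, and the factor k = 3 is one value within the two-to-four range given in the text:

```python
import math

def posterior_pole(macula, optic_disc, k=3.0):
    """Return (center, radius) of the posterior-pole circle: centered on
    the macula, with radius k times the macula-disc distance (2 <= k <= 4)."""
    radius = k * math.dist(macula, optic_disc)
    return macula, radius

# Hypothetical marked positions, 400 px apart horizontally.
center, radius = posterior_pole((1500.0, 1000.0), (1100.0, 1000.0))
print(center, radius)  # (1500.0, 1000.0) 1200.0
```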
S1, cutting the fundus ultra-wide angle image according to different fundus tissues to obtain a region image corresponding to each fundus tissue;
wherein the fundus tissues include the optic disc, the macula, and the mid-peripheral and peripheral portions of the retina;
S1 specifically comprises the following steps:
S11, inputting the fundus ultra-wide angle image into the AI model to obtain a fundus ultra-wide angle image with marks for the different fundus tissues;
S12, judging whether the position differences between the marks exceed a threshold; if so, returning to S11; otherwise, cutting the fundus ultra-wide angle image according to the marks;
the step S12 specifically comprises:
judging whether the position of the macula mark is on the horizontal left or right side of the position of the optic disc mark, and if not, returning to S11;
judging whether the distance between the position of the macula mark and the position of the optic disc mark exceeds ten times the diameter of the optic disc mark, and if so, returning to S11;
judging whether the mark of the peripheral portion of the retina is outside the equatorial mark, and if not, returning to S11; and
cutting the fundus ultra-wide angle image according to the marks if the position of the macula mark is on the horizontal left or right side of the position of the optic disc mark, the distance between the macula mark and the optic disc mark is less than ten times the diameter of the optic disc mark, and the mark of the peripheral portion of the retina is outside the equatorial mark;
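These S12 checks can be sketched as a single validation function, under stated assumptions: positions are (x, y) pixel coordinates, "on the horizontal left or right side" is read as the horizontal offset dominating the vertical offset, and whether the peripheral-retina mark lies outside the equatorial mark is passed in as a precomputed flag (all names are illustrative):

```python
def marks_valid(macula_pos, disc_pos, disc_diameter, peripheral_outside_equator):
    """Return True if the marks pass all three S12 checks; False means the
    image should be re-marked (return to S11)."""
    dx = macula_pos[0] - disc_pos[0]
    dy = macula_pos[1] - disc_pos[1]
    # Check 1: the macula must lie to the horizontal left or right of the disc.
    if abs(dx) <= abs(dy):
        return False
    # Check 2: the macula-disc distance must not exceed ten disc diameters.
    if (dx * dx + dy * dy) ** 0.5 > 10 * disc_diameter:
        return False
    # Check 3: the peripheral-retina mark must be outside the equatorial mark.
    return peripheral_outside_equator
```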
S2, carrying out lesion analysis on the region image corresponding to each fundus tissue to obtain a regional analysis result for each fundus tissue; and
S3, integrating the regional analysis results of the fundus tissues to obtain and output a final analysis result.
The second embodiment of the invention is as follows:
an analysis method of fundus ultra-wide angle image is different from the first embodiment in that:
the step of obtaining the region image corresponding to each fundus tissue in the step S1 specifically includes:
obtaining a first region image including the optic disc, a second region image including the macula, and a third region image including the mid-peripheral and peripheral portions of the retina;
classifying the first region image, the second region image, and the third region image according to lesions by using an image classification model (such as a CNN, Convolutional Neural Network) to obtain a lesion classification result for each region image;
in an alternative embodiment, the lesion condition and the lesion classification result of each regional image are obtained by using an image classification model;
specifically, the image classification model is obtained through training: the lesions of different fundus tissues are marked in a large number of fundus ultra-wide angle images, where the lesion conditions include normal, exudation, hemorrhage, atrophy, and degeneration, and the lesion categories corresponding to the lesion conditions of the different fundus tissues are also marked; the marked results are input into a CNN model for training, and the trained CNN model can perform lesion classification or lesion-condition marking on an input image;
the image classification model may also be EfficientNet, YOLO, or the like;
for example, an output may take the form: lesion condition, optic disc atrophy; lesion category, glaucoma;
performing semantic segmentation on the first region image, the second region image and the third region image by using an image semantic segmentation model to obtain fundus tissue shapes in each region image;
obtaining a region analysis result of each region image according to the lesion classification result and the fundus tissue shape;
specifically, for the first region image, the lesion condition of the optic disc, namely normal, exudation, hemorrhage, atrophy, or degeneration, is determined through the image classification model, and the ranges of the optic cup and the optic disc are acquired through the semantic segmentation model; for the second region image, the lesion condition of the macula is determined through the image classification model, and the range of the macula is acquired through the semantic segmentation model; for the third region image, whether degeneration exists in the peripheral portion of the retina is determined through the image classification model;
the step S3 is specifically as follows:
inputting the lesion classification result and the fundus tissue shape of each regional image into a preset logic model to obtain a final analysis result and outputting the final analysis result;
specifically, the lesion classification results of the multiple fundus tissues and the fundus tissue shapes are analyzed in combination, or lesion classification is performed on the whole image; the combined analysis result and the whole-image classification result are input into a logic model (such as a logistic model or a mixed linear model) for judgment, a decision-tree result is formed, and a conclusion is finally generated;
specifically, for the first region image, the cup-disc ratio is calculated from the ranges of the optic cup and the optic disc. If the cup-disc ratio exceeds a threshold and the classification result of the image classification model is optic disc atrophy, the first region image analysis result is a high possibility of glaucoma; if the cup-disc ratio does not exceed the threshold but the classification result is optic disc atrophy, the analysis result is a possibility of glaucoma, with manual review of the cup and disc advised; if the cup-disc ratio exceeds the threshold and the classification result is that the optic disc is normal, the analysis result is that the optic disc is normal;
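The three branches above can be sketched as a small rule function; the 0.6 ratio threshold and the label strings are assumed for illustration, not specified in the original:

```python
def analyze_first_region(cup_area, disc_area, classification, ratio_threshold=0.6):
    """Combine the cup-disc ratio (from the segmented cup/disc ranges)
    with the image-classification result, following the branches above."""
    cup_disc_ratio = cup_area / disc_area
    atrophy = classification == "optic disc atrophy"
    if cup_disc_ratio > ratio_threshold and atrophy:
        return "high possibility of glaucoma"
    if cup_disc_ratio <= ratio_threshold and atrophy:
        return "possible glaucoma; manual review of cup and disc"
    return "optic disc normal"

print(analyze_first_region(0.7, 1.0, "optic disc atrophy"))
# high possibility of glaucoma
```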
In an alternative embodiment, the fundus ultra-wide angle image is cut to obtain a fourth region image including the posterior pole, and the lesion condition and lesion category of the posterior pole are obtained through the image classification model: the lesion condition is hemorrhage and exudation, and the lesion category is diabetic retinopathy with a probability of 0.9; the lesion condition of the macula is hemorrhage and exudation, and the lesion category is maculopathy with a probability of 0.94. Because the two diseases rarely occur at the same time, the preset logic model assigns the macular lesion a weight of 0.3 and the posterior pole lesion a weight of 0.7 for diabetic retinopathy, giving 0.94×0.3+0.9×0.7=0.912, and the final judgment result is diabetic retinopathy. If the lesion category of the posterior pole were diabetic retinopathy with a probability of 0.1, the calculation would be 0.94×0.3+0.1×0.7=0.352, and the final judgment result would be maculopathy;
specifically, a corresponding threshold can be set to confirm the final judgment result; the threshold can be set within the range 0.5-0.8 according to the model's calculation results, and if the score exceeds the threshold, the corresponding category is taken as the final judgment. With the threshold set to 0.8 as above, 0.912 > 0.8, so the first case is judged diabetic retinopathy, while 0.352 < 0.8, so the final judgment result of the second case is maculopathy;
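The weighted combination and threshold decision above can be reproduced directly; the weights (0.3 / 0.7) and the 0.8 threshold follow the worked example, and the label strings are illustrative:

```python
def combine_lesions(maculopathy_prob, dr_prob,
                    macula_weight=0.3, posterior_weight=0.7, threshold=0.8):
    """Weighted score for diabetic retinopathy (DR) from the macular and
    posterior-pole probabilities; DR is confirmed only if the score
    exceeds the threshold, otherwise maculopathy is reported."""
    score = maculopathy_prob * macula_weight + dr_prob * posterior_weight
    verdict = "diabetic retinopathy" if score > threshold else "maculopathy"
    return verdict, round(score, 3)

print(combine_lesions(0.94, 0.9))  # ('diabetic retinopathy', 0.912)
print(combine_lesions(0.94, 0.1))  # ('maculopathy', 0.352)
```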
when the final judgment result is output, the whole judging process is also output;
specifically, when the final analysis result is output, the lesion condition, the lesion classification result, and the combined analysis result of each region are output. If, as above, glaucoma and diabetic retinopathy are determined and the peripheral portion of the retina is degenerated, the final analysis result is: optic disc atrophy, a relatively large cup-disc ratio, a visible degeneration area in the peripheral portion of the retina, hemorrhage and exudation in the macula and posterior pole, suspected glaucoma and diabetic retinopathy.
Referring to FIG. 2, a third embodiment of the present invention is as follows:
A fundus ultra-wide angle image analysis terminal 1 comprises a processor 2, a memory 3, and a computer program stored in the memory 3 and executable on the processor 2, wherein the processor 2 implements the steps of the first or second embodiment when executing the computer program.
In summary, the invention provides a method and a terminal for analyzing an ultra-wide angle fundus image. Multiple region images obtained by dividing the ultra-wide angle fundus image according to fundus tissues are analyzed, which reduces the size of each input and allows the region images to be analyzed simultaneously, greatly improving the speed of image analysis. An AI model is trained to automatically mark the fundus tissues so the image can be segmented according to the marks; a marking threshold is applied, and a marking result exceeding the threshold is judged erroneous and re-marked, ensuring the accuracy of the fundus tissue marks and laying a foundation for subsequent analysis. The segmented images are classified for lesions by the image classification model, and their shapes are acquired by the image semantic segmentation model, so both the lesion condition and the lesion range can be obtained, providing a more reliable reference for subsequent lesion determination. When the conclusion is finally output, the lesion judgment, the lesion condition, and the lesion range are all output, which is closer to a physician's diagnostic report, conforms to physicians' daily diagnostic habits, and provides a more detailed conclusion of high reference value.
The foregoing description is merely illustrative of the present invention and is not intended to limit its scope; all equivalent changes made according to the specification and drawings of the present invention, or direct or indirect applications in related technical fields, are likewise included within the protection scope of the present invention.