CN116563195A - Method for classifying nerve fiber layer defects based on fundus images and related products - Google Patents


Publication number
CN116563195A
Authority
CN
China
Prior art keywords
region
fiber layer
nerve fiber
interest
grading
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210101496.6A
Other languages
Chinese (zh)
Inventor
黄烨霖
王欣
赵昕
和超
张大磊
Current Assignee
Beijing Airdoc Technology Co Ltd
Original Assignee
Beijing Airdoc Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Airdoc Technology Co Ltd filed Critical Beijing Airdoc Technology Co Ltd
Priority to CN202210101496.6A
Publication of CN116563195A
Legal status: Pending


Classifications

    • G - PHYSICS
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N3/00 - Computing arrangements based on biological models
                    • G06N3/02 - Neural networks
                        • G06N3/08 - Learning methods
            • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T7/00 - Image analysis
                    • G06T7/0002 - Inspection of images, e.g. flaw detection
                        • G06T7/0012 - Biomedical image inspection
                    • G06T7/10 - Segmentation; Edge detection
                        • G06T7/11 - Region-based segmentation
                        • G06T7/194 - involving foreground-background segmentation
                    • G06T7/60 - Analysis of geometric attributes
                        • G06T7/62 - of area, perimeter, diameter or volume
                • G06T2207/00 - Indexing scheme for image analysis or image enhancement
                    • G06T2207/30 - Subject of image; Context of image processing
                        • G06T2207/30004 - Biomedical image processing
                            • G06T2207/30041 - Eye; Retina; Ophthalmic
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
                • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
                    • Y02P90/30 - Computing systems specially adapted for manufacturing

Abstract

The present disclosure relates to methods and related products for grading a nerve fiber layer defect based on fundus images. The method comprises: inputting a fundus image into a detection model to obtain a detection frame result of a target region in the fundus image; determining a region of interest based on the detection frame result of the target region; inputting the region of interest into a nerve fiber layer defect segmentation model to obtain a foreground region related to the nerve fiber layer defect; and grading the nerve fiber layer defect according to the region of interest and the foreground region. With the disclosed scheme, automatic and rapid grading can be performed based on fundus images, greatly improving diagnostic efficiency.

Description

Method for classifying nerve fiber layer defects based on fundus images and related products
Technical Field
The present disclosure relates generally to the field of image processing technology. More particularly, the present disclosure relates to a method, apparatus, and computer-readable storage medium for grading a nerve fiber layer defect based on fundus images.
Background
Glaucoma, the second leading cause of blindness worldwide, is a progressive optic nerve disorder that alters the structure of the optic nerve and ultimately causes irreversible visual impairment. For glaucoma patients, early diagnosis is critical, because timely treatment can slow or prevent the progressive loss of visual function. With advances in technology, analysis of fundus images captured by a fundus camera has greatly improved the efficiency of glaucoma screening. Evaluating optic nerve structures in fundus images (e.g., loss of the disc rim, nerve fiber layer defects) is the most rapid, simple and objective way to examine optic nerve damage.
At present, clinical diagnosis of nerve fiber layer defects in fundus images relies on visual inspection by physicians, who make qualitative, subjective judgments based on years of theoretical knowledge and clinical experience. This is not only inefficient, but also leads to different doctors frequently reaching different diagnoses for the same case. In addition, such diagnostic analysis lacks accurate quantification of the condition, which hinders accurate long-term monitoring of disease progression.
Disclosure of Invention
To at least partially solve the technical problems mentioned in the background art, the present disclosure provides a scheme for grading a nerve fiber layer defect based on a fundus image. With this scheme, a grading result for the nerve fiber layer defect can be obtained rapidly, improving diagnostic efficiency. To this end, the present disclosure provides solutions in the following aspects.
In one aspect, the present disclosure provides a method of grading a nerve fiber layer defect based on a fundus image, comprising: inputting a fundus image into a detection model to obtain a detection frame result of a target region in the fundus image; determining a region of interest based on a detection frame result of the target region; inputting the region of interest into a nerve fiber layer defect segmentation model to obtain a foreground region related to the nerve fiber layer defect region; and grading a nerve fiber layer defect according to the region of interest and the foreground region.
In an embodiment, the target region comprises at least an optic disc region, and determining the region of interest based on the detection frame result of the target region comprises: determining the center of the optic disc region and the diameter of the optic disc region based on the detection frame result of the optic disc region; and taking, as the region of interest, a region centered on the center of the optic disc region whose length and width are a first preset multiple of the diameter of the optic disc region.
In another embodiment, the target region further comprises a macular region, and grading the nerve fiber layer defect according to the region of interest and the foreground region comprises: determining a specific region related to the grading according to the region of interest and the macular region; and grading the nerve fiber layer defect based on the specific region and the foreground region.
In yet another embodiment, determining the specific region related to the grading according to the region of interest and the macular region comprises: dividing the region of interest by an auxiliary horizontal line and an auxiliary vertical line passing through the center of the region of interest; and determining the specific region related to the grading based on the divided region of interest and the macular region.
In yet another embodiment, the specific region comprises a temporal region, a superior nasal region and/or an inferior nasal region.
In yet another embodiment, determining the specific region related to the grading based on the divided region of interest and the macular region comprises: determining the half of the divided region of interest containing the macular region as the temporal side and the half not containing the macular region as the nasal side; and determining the upper and lower halves of the temporal side as the superior temporal region and the inferior temporal region, respectively, and the upper and lower halves of the nasal side as the superior nasal region and the inferior nasal region, respectively.
In yet another embodiment, grading the nerve fiber layer defect based on the specific region and the foreground region comprises: forming an auxiliary circle centered on the center of the region of interest with a radius equal to a second preset multiple of the diameter of the optic disc region; calculating the occupancy ratios of the foreground region among the pixels on the arcs of the auxiliary circle corresponding to the inferior temporal region, the superior temporal region, the superior nasal region and the inferior nasal region, respectively; calculating a weighted sum of the corresponding ratios and comparing the weighted sum with a grading threshold; and grading the nerve fiber layer defect based on the comparison of the weighted sum with the grading threshold.
In yet another embodiment, the grading threshold comprises a plurality of thresholds, and the plurality of thresholds constitute a plurality of threshold ranges, each threshold range corresponding to a grading level associated with the severity of the nerve fiber layer defect, wherein grading the nerve fiber layer defect based on the comparison of the weighted sum with the grading threshold comprises: determining the threshold range corresponding to the weighted sum based on the comparison result; and determining the corresponding grading level according to that threshold range so as to grade the nerve fiber layer defect.
In another aspect, the present disclosure also provides an apparatus for grading a nerve fiber layer defect based on a fundus image, comprising: a processor; and a memory coupled to the processor, the memory having stored therein computer program code that, when executed by the processor, causes the apparatus to perform the method according to the foregoing embodiments.
In yet another aspect, the present disclosure also provides a computer-readable storage medium having stored thereon computer-readable instructions for grading a nerve fiber layer defect based on a fundus image, which, when executed by one or more processors, implement the method according to the various embodiments described above.
It should be appreciated that nerve fiber layer defects are typically seen along the superior and inferior arcuate regions of the optic disc, appearing as fissure-like dark bands (e.g., as indicated by the arrows in the right-hand view of fig. 4). If the condition progresses further, the nerve fiber layer defect can spread throughout the fundus. To facilitate quantitative assessment of nerve fiber layer defects, the present disclosure normalizes a partial region (i.e., the region of interest) of the fundus image in order to grade the nerve fiber layer defect. Specifically, the disclosed scheme first inputs the fundus image into a detection model to obtain the detection frame result of the target region, and determines the region of interest. The region of interest is then input into a nerve fiber layer defect segmentation model to obtain the foreground region, and the nerve fiber layer defect is graded according to the region of interest and the foreground region, so that automatic, rapid grading is realized and diagnostic efficiency is improved. Further, embodiments of the present disclosure obtain quantitative statistics of the segmentation result by calculating the occupancy ratios of the foreground region in the specific regions (including the temporal, superior nasal and/or inferior nasal regions), so that the grading result is more accurate and more interpretable. Furthermore, extracting the region of interest first reduces the complexity of computation and discrimination.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar or corresponding parts and in which:
FIG. 1 is an exemplary flow diagram illustrating a method of grading a nerve fiber layer defect based on a fundus image in accordance with an embodiment of the present disclosure;
FIG. 2 is an exemplary diagram illustrating obtaining a detection frame result of a target region in a fundus image according to an embodiment of the present disclosure;
FIG. 3 is an exemplary schematic diagram illustrating a region of interest according to an embodiment of the present disclosure;
FIG. 4 is an exemplary schematic diagram illustrating obtaining a foreground region associated with a nerve fiber layer defect according to an embodiment of the present disclosure;
FIG. 5 is an exemplary diagram illustrating determination of a particular region according to an embodiment of the present disclosure;
FIG. 6 is an exemplary flow diagram illustrating grading of nerve fiber layer defects based on specific and foreground regions in accordance with an embodiment of the present disclosure;
FIG. 7 is an exemplary diagram illustrating calculating a ratio of foreground regions to a particular region according to an embodiment of the present disclosure; and
FIG. 8 is a block diagram illustrating an apparatus for grading a nerve fiber layer defect based on a fundus image in accordance with an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. It should be understood that the embodiments described in this specification are only some, not all, of the embodiments of the disclosure, provided to facilitate a clear understanding of the scheme and to meet legal requirements. All other embodiments obtained by those skilled in the art without inventive effort, based on the embodiments disclosed herein, fall within the scope of the present disclosure.
Fig. 1 is an exemplary flow diagram illustrating a method 100 of grading a nerve fiber layer defect based on a fundus image in accordance with an embodiment of the present disclosure. As shown in fig. 1, at step S102, a fundus image is input into a detection model to obtain a detection frame result of a target region in the fundus image. In one embodiment, the fundus image may be acquired by, for example, a fundus camera. In one embodiment, the detection model may be a pre-trained neural network model, such as the YOLOv3 object detection model, whose backbone network is Darknet-53. In an implementation scenario, the acquired fundus image is input into, for example, the YOLOv3 model, and the detection frame results of the optic disc region and the macular region in the fundus image can be obtained. That is, the target region in embodiments of the present disclosure may include an optic disc region and a macular region, so that the detection frame result of the target region corresponds to rectangular frames containing the optic disc region and the macular region (e.g., rectangular frame A and rectangular frame B shown in the right-hand diagram of fig. 2).
After the detection model outputs the detection frame result of the target region, at step S104 the region of interest is determined based on that result. In one embodiment, the center and the diameter of the optic disc region may first be determined from the detection frame result of the optic disc region; a region centered on the center of the optic disc region, with length and width equal to a first preset multiple of the optic disc diameter, is then taken as the region of interest. In other words, the rectangular frame containing the optic disc region is enlarged to the range covered by the first preset multiple of the optic disc diameter (e.g., as shown in fig. 3). In one implementation scenario, the first preset multiple may be, for example, 3: assuming the diameter of the optic disc region is D, a square of side 3D is taken as the region of interest. In embodiments of the present disclosure, this multiple of 3 may be derived from medical experience.
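As a concrete illustration of this step, the sketch below derives the region of interest from an optic-disc detection box. It is a minimal sketch under stated assumptions, not the patent's implementation: the function name, the (x1, y1, x2, y2) box format, and the approximation of the disc diameter D by the mean of the box's width and height are all illustrative choices.

```python
def roi_from_disc_box(x1, y1, x2, y2, multiple=3.0):
    """Derive the region-of-interest square from the optic-disc box.

    The disc diameter D is approximated by the mean of the detection
    box's width and height (an assumption); the ROI is a square of side
    `multiple` * D centred on the box centre, with multiple = 3 as the
    medically motivated default named in the text.
    """
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    d = ((x2 - x1) + (y2 - y1)) / 2.0      # approximate disc diameter D
    half = multiple * d / 2.0              # half the ROI side length
    return (cx - half, cy - half, cx + half, cy + half)
```

For a disc box spanning (10, 10) to (30, 30), D = 20 and the 3D ROI is the square from (-10, -10) to (50, 50); negative coordinates would then be clamped to the image bounds before cropping.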
Based on the acquired region of interest, at step S106 the region of interest is input into a nerve fiber layer defect segmentation model to obtain a foreground region related to the nerve fiber layer defect. In one embodiment, the segmentation model may also be a pre-trained neural network model, such as the U-shaped fully convolutional network U-Net. In an implementation scenario, inputting the region of interest into, for example, U-Net directly outputs the foreground region associated with the nerve fiber layer defect (e.g., the two fissure-like dark bands indicated by the arrows in the right-hand diagram of fig. 4).
Finally, at step S108, the nerve fiber layer defect is graded according to the region of interest and the foreground region. Specifically, a specific region related to the grading may first be determined from the region of interest and the macular region, and the nerve fiber layer defect is then graded based on the specific region and the foreground region. As described above, the macular region can be obtained by inputting the fundus image into the detection model, and is used to determine the temporal and nasal sides of the specific region. In one embodiment, the region of interest is first divided by an auxiliary horizontal line and an auxiliary vertical line passing through its center, and the specific region related to the grading is then determined based on the divided region of interest and the macular region. In some embodiments, the specific region may include a superior temporal region, an inferior temporal region, a superior nasal region and/or an inferior nasal region. For example, the half of the divided region of interest containing the macular region is determined as the temporal side, and the half not containing the macular region as the nasal side. The upper and lower halves of the temporal side are determined as the superior temporal and inferior temporal regions, respectively, and the upper and lower halves of the nasal side as the superior nasal and inferior nasal regions, respectively. The specific regions will be described in detail later with reference to fig. 5.
As previously noted, nerve fiber layer defects are commonly seen in the superior and inferior arcuate regions of the optic disc, i.e., the superior and inferior temporal regions. As the condition develops further, the defect may spread to the superior and/or inferior nasal regions. Embodiments of the present disclosure therefore quantitatively evaluate the foreground region in the aforementioned temporal and nasal regions to grade the nerve fiber layer defect, for example by calculating the occupancy ratios of the foreground region in the superior temporal, inferior temporal, superior nasal and inferior nasal regions, respectively, and weighting those ratios. The grading result of the nerve fiber layer defect is then determined from the comparison of the weighted sum with a grading threshold. In one implementation, the grading threshold may include a plurality of thresholds that constitute a plurality of threshold ranges, each range corresponding to a grading level associated with the severity of the nerve fiber layer defect. In this case, the threshold range corresponding to the weighted sum of the ratios can be determined from the comparison result, and the corresponding grading level determined from that threshold range to grade the nerve fiber layer defect. The grading of the nerve fiber layer defect will be described in detail later in connection with figs. 6-7.
As is apparent from the above description, embodiments of the present disclosure acquire the detection frame results of the optic disc and macular regions of the fundus image by deep learning, and obtain the region of interest and the specific regions (including the superior temporal, inferior temporal, superior nasal and inferior nasal regions) based on those detection frame results. A foreground region related to the nerve fiber layer defect is then acquired in the region of interest by deep learning, and the defect is graded according to the occupancy ratios of the foreground region in those specific regions. On this basis, a grading result for the nerve fiber layer defect can be obtained rapidly from the fundus image, providing subsequent treatment guidance for patients and medical staff.
Fig. 2 is an exemplary diagram illustrating obtaining the detection frame result of the target region in the fundus image according to an embodiment of the present disclosure. The left diagram of fig. 2 shows the acquired original fundus image; inputting it into the detection model 201 outputs a detection frame containing the optic disc region (e.g., shown by rectangular frame A) and a detection frame containing the macular region (e.g., shown by rectangular frame B), as shown in the right diagram. In one embodiment, the fundus image may be preprocessed before being input into the detection model 201. The preprocessing may include, but is not limited to, image transformation, such as resizing the fundus image to 416×416×3. In some embodiments, the detection model 201 may be, for example, the YOLOv3 object detection model.
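The resize mentioned above can be sketched as follows. This is a dependency-free nearest-neighbour stand-in for the interpolating resize (e.g., cv2.resize) a real pipeline would likely use; the function name and the list-of-rows image representation are illustrative assumptions, not the patent's implementation.

```python
def resize_nearest(img, out_h=416, out_w=416):
    """Nearest-neighbour resize of an image stored as a list of rows
    (each pixel may be a scalar or an (r, g, b) tuple) to out_h x out_w,
    matching the 416x416 spatial size expected by YOLOv3."""
    h, w = len(img), len(img[0])
    return [[img[y * h // out_h][x * w // out_w] for x in range(out_w)]
            for y in range(out_h)]
```

On a 100×200 RGB image this yields a 416×416 grid of the same pixel tuples; production code would prefer bilinear interpolation for better quality.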
As described above, the center of the disc area and the diameter of the disc area can be determined based on the obtained detection frame result of the disc area. In the presently disclosed embodiments, a range of length and width, e.g., 3D (where D is the diameter of the optic disc area), is determined as the region of interest by taking the center of the optic disc area as the center point, such as shown in fig. 3.
Fig. 3 is an exemplary schematic diagram illustrating a region of interest according to an embodiment of the present disclosure. The region enclosed by rectangular frame C in the left diagram of fig. 3 is the region of interest. It will be appreciated that this region is an enlargement of the region enclosed by rectangular frame A in fig. 2; evaluating the nerve fiber layer defect quantitatively over a larger area makes the grading result more accurate. For convenience of subsequent processing, the region of interest may be cropped out separately; the right diagram shows the cropped region of interest. That is, the region enclosed by rectangular frame C in the left diagram is cropped to form a patch image of the region of interest. Inputting this patch image into the nerve fiber layer defect segmentation model yields the foreground region associated with the nerve fiber layer defect, for example as shown in fig. 4.
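Because the 3D square centred on the optic disc can extend past the border of the photograph, the box is typically clamped to the image bounds before the patch is cut out. A minimal sketch, assuming an (x1, y1, x2, y2) box format and hypothetical function name:

```python
def clamp_box(box, img_w, img_h):
    """Clamp an ROI box (x1, y1, x2, y2) to the image bounds so a
    subsequent array crop img[y1:y2, x1:x2] is valid."""
    x1, y1, x2, y2 = (int(round(v)) for v in box)
    return (max(0, x1), max(0, y1), min(img_w, x2), min(img_h, y2))
```

After clamping, the patch image itself is obtained by ordinary slicing, e.g. `patch = img[y1:y2, x1:x2]` with an array library.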
Fig. 4 is an exemplary schematic diagram illustrating obtaining a foreground region associated with a nerve fiber layer defect according to an embodiment of the present disclosure. As shown in the left diagram of fig. 4, the patch image of the region of interest (i.e., the right diagram of fig. 3) is input into the nerve fiber layer defect segmentation model 401, which outputs the foreground region. For example, the two fissure-like dark bands in the right diagram (indicated by the arrows) are the foreground regions associated with the nerve fiber layer defect. Similarly to the fundus image described above, the patch image may be resized to, for example, 512×512×3 before being input into the segmentation model 401. In some embodiments, the segmentation model 401 may be, for example, the fully convolutional network U-Net.
Based on the above description, after the region of interest and the foreground region are obtained, the nerve fiber layer defect can be graded accordingly. For example, the specific region may first be determined from the region of interest and the macular region, and the nerve fiber layer defect then graded based on the specific region and the foreground region. The determination of the specific region is described in detail below in connection with fig. 5.
Fig. 5 is an exemplary schematic diagram illustrating determination of the specific region according to an embodiment of the present disclosure. The left diagram of fig. 5 shows a region of interest containing a foreground region. First, the center point O of the region of interest, i.e., the center of the optic disc region, is located, as shown for example in the middle diagram. Next, an auxiliary vertical line (e.g., the vertical dashed line in the figure) is drawn through O to divide the region of interest into left and right halves, where the half containing the macular region is determined as the temporal side and the half not containing it as the nasal side; in the middle diagram, for example, the left half is the nasal side and the right half the temporal side. An auxiliary horizontal line (e.g., the horizontal dashed line in the figure) is then drawn through O to divide the nasal and temporal sides into upper and lower halves: the upper and lower halves of the nasal side are the superior nasal and inferior nasal regions, respectively, and the upper and lower halves of the temporal side are the superior temporal and inferior temporal regions, respectively, as shown for example in the right diagram. The region of interest is thus divided into superior temporal, inferior temporal, superior nasal and inferior nasal regions, and the nerve fiber layer defect can then be graded according to the occupancy ratios of the foreground region in these regions.
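The quadrant assignment described above can be sketched as a small helper. It assumes image coordinates (y grows downward) and uses the macula's x coordinate relative to the disc centre to decide which half is temporal; the function and label names are illustrative, not the patent's.

```python
def quadrant_of(x, y, roi_cx, roi_cy, macula_cx):
    """Return (side, half) for a pixel of the region of interest.

    side: 'temporal' if the pixel lies in the same vertical half as the
    macula, else 'nasal'; half: 'superior' above the horizontal
    auxiliary line through the ROI centre, else 'inferior'.
    """
    temporal_is_right = macula_cx > roi_cx            # right eye vs left eye
    side = 'temporal' if (x > roi_cx) == temporal_is_right else 'nasal'
    half = 'superior' if y < roi_cy else 'inferior'   # image y grows downward
    return side, half
```

Note the same helper handles both left- and right-eye images, because only the macula's position relative to the disc centre decides which half is temporal.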
How to grade the nerve fiber layer defect based on the specific regions and the foreground region is described in detail below in connection with figs. 6-7.
Fig. 6 is an exemplary flow diagram illustrating grading of a nerve fiber layer defect based on the specific regions and the foreground region in accordance with an embodiment of the present disclosure. It should be understood that fig. 6 details step S108 of the method 100 of fig. 1 described above. As shown in fig. 6, at step S602 an auxiliary circle is formed, centered on the center of the region of interest, with a radius equal to a second preset multiple of the diameter of the optic disc region (e.g., as shown in fig. 7). In one embodiment, the second preset multiple may be, for example, 1.5. Each of the superior temporal, inferior temporal, superior nasal and inferior nasal regions thus contains a quarter arc of the auxiliary circle (e.g., arcs L1, L2, L3 and L4 shown in fig. 7). At step S604, the occupancy ratio of the foreground region among the pixels of the arc corresponding to each of the inferior temporal, superior temporal, superior nasal and inferior nasal regions is calculated. In other words, for each region the proportion of the pixels on its quarter arc that belong to the foreground region is calculated.
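Steps S602-S604 can be sketched as follows: scan the mask, keep pixels whose distance to the ROI centre is close to the auxiliary-circle radius, bucket them by quadrant, and divide foreground hits by arc pixels. This is a sketch under stated assumptions: the arc-membership tolerance `tol`, the right-eye orientation flag and all names are illustrative, not the patent's implementation.

```python
import math

def arc_occupancy(mask, cx, cy, radius, tol=1.0, temporal_right=True):
    """Compute, for each quadrant arc of the auxiliary circle, the
    fraction of arc pixels that fall in the foreground mask.

    `mask` is a 2-D list (rows of 0/1); a pixel lies on the arc when
    its distance to the circle centre (cx, cy) is within `tol` of
    `radius`. Quadrant labels assume a right-eye image when
    `temporal_right` is True; y grows downward (image convention).
    """
    counts = {q: [0, 0] for q in
              ('temporal_sup', 'temporal_inf', 'nasal_sup', 'nasal_inf')}
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if abs(math.hypot(x - cx, y - cy) - radius) > tol:
                continue  # not on the auxiliary circle
            side = 'temporal' if (x > cx) == temporal_right else 'nasal'
            half = 'sup' if y < cy else 'inf'
            q = side + '_' + half
            counts[q][1] += 1              # arc pixel in this quadrant
            counts[q][0] += 1 if v else 0  # foreground hit on the arc
    return {q: (hit / tot if tot else 0.0) for q, (hit, tot) in counts.items()}
```

With a radius of 1.5 disc diameters, each returned ratio corresponds to one of R1 through R4 used by the weighting formula in the text.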
Based on the corresponding occupancy ratios of the foreground region in the inferior temporal, superior temporal, superior nasal and inferior nasal regions, at step S606 a weighted sum of the ratios is calculated and compared with the grading threshold. In one embodiment, the weighted sum may be calculated by the following formula:
T = w1·R1 + w2·R2 + w3·R3 + w4·R4 + w5·R1·R2 + w6·R3·R4 (1)
where T denotes the weighted sum of the ratios, R1, R2, R3 and R4 denote the occupancy ratios of the foreground region in the inferior temporal, superior temporal, superior nasal and inferior nasal regions, respectively, and wi denotes a weight coefficient. In some embodiments, the foreground region may lie in both the superior temporal and inferior temporal regions, or in both the superior nasal and inferior nasal regions; formula (1) therefore includes the product terms R1·R2 and R3·R4 to represent these two situations. In one implementation, the weight coefficients wi may be obtained by regression on a training dataset and its labels.
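Formula (1) is a plain weighted sum plus two interaction terms, and transcribes directly into code. The weight values in the test below are made up for illustration; the patent obtains them by regression on labelled data.

```python
def weighted_score(ratios, weights):
    """Formula (1): T = w1*R1 + w2*R2 + w3*R3 + w4*R4
                        + w5*R1*R2 + w6*R3*R4,
    where ratios = (R1, R2, R3, R4) are the occupancy ratios of the
    inferior temporal, superior temporal, superior nasal and inferior
    nasal arcs, and weights = (w1, ..., w6). The R1*R2 and R3*R4
    interaction terms capture defects spanning both halves of the
    temporal or nasal side.
    """
    r1, r2, r3, r4 = ratios
    w1, w2, w3, w4, w5, w6 = weights
    return w1*r1 + w2*r2 + w3*r3 + w4*r4 + w5*r1*r2 + w6*r3*r4
```

For example, with all weights equal to 1, a defect covering both temporal arcs (R1 = R2 = 1) scores 3: two linear terms plus the temporal interaction term.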
Further, at step S608, the nerve fiber layer defect is graded based on the comparison of the weighted sum with the grading threshold. As previously mentioned, the grading threshold may comprise a plurality of thresholds, and the plurality of thresholds may be a preset set of hyperparameters ordered from small to large, such as {T1, T2, T3, T4}. Each threshold Ti in the grading threshold may likewise be obtained by regression on the training dataset and its labels. In an implementation scenario, the plurality of thresholds delimit a plurality of threshold ranges, and each threshold range corresponds to a grading level associated with the severity of nerve fiber layer loss. Taking the aforementioned grading threshold {T1, T2, T3, T4} as an example, the points T1, T2, T3 and T4 delimit five threshold ranges, corresponding to grading levels 1, 2, 3, 4 and 5. Specifically, when the weighted sum T < T1, the grading level is 1; when T1 ≤ T < T2, the grading level is 2. Similarly, when T2 ≤ T < T3, the grading level is 3; when T3 ≤ T < T4, the grading level is 4; and when T ≥ T4, the grading level is 5. It is understood that a smaller grading level indicates a milder nerve fiber layer defect, while a larger grading level indicates a more severe one. For example, a grading level of 1 indicates a mild defect in the nerve fiber layer, and a grading level of 5 indicates a severe defect.
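The threshold-range lookup of step S608 can be sketched with a standard bisection. The threshold values used below are illustrative placeholders, not the regression-fitted values the patent describes.

```python
import bisect

def grade_from_weighted_sum(t, thresholds=(0.1, 0.3, 0.5, 0.7)):
    """Map the weighted sum T onto grading levels 1..5 using the ordered
    grading thresholds {T1, T2, T3, T4} (placeholder values).

    bisect_right counts how many thresholds T meets or exceeds, so
    T < T1 gives level 1 (mild) and T >= T4 gives level 5 (severe),
    matching the half-open ranges Ti <= T < Ti+1 in the text.
    """
    return bisect.bisect_right(list(thresholds), t) + 1
```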
Fig. 7 is an exemplary diagram illustrating calculation of the occupancy ratios of the foreground region in specific regions according to an embodiment of the present disclosure. As shown in fig. 7, the region of interest is divided into a superior temporal region, an inferior temporal region, a superior nasal region and an inferior nasal region by an auxiliary horizontal line (shown, for example, by the horizontal dotted line in the figure) and an auxiliary vertical line (shown, for example, by the vertical dotted line in the figure) passing through the center point O of the region of interest. Further, the superior temporal and inferior temporal regions each show a fissure-like dark band (i.e., a foreground region associated with a nerve fiber layer defect). The figure further shows an auxiliary circle of radius 1.5D (where D is the diameter of the optic disc region) centered on the center point O of the region of interest. In this scenario, the superior temporal and inferior temporal regions correspond to arcs L1 and L2 of the auxiliary circle, and the superior nasal and inferior nasal regions correspond to arcs L3 and L4. The occupancy ratios R2 and R1 of the foreground region in the superior temporal and inferior temporal regions are, respectively, the proportion of the pixels on arc L1 that intersect the foreground region and the proportion of the pixels on arc L2 that intersect the foreground region. Further, the ratios R2 and R1 are substituted into formula (1) above to obtain the weighted sum T of R2 and R1. Finally, the weighted sum T is compared with the grading threshold to grade the nerve fiber layer defect.
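The quadrant division of fig. 7 (auxiliary horizontal and vertical lines through O, with the half containing the macula taken as the temporal side) can be sketched as below. The label strings and the function name are illustrative choices, not from the patent.

```python
import numpy as np

def quadrant_labels(roi_shape, macula_on_left):
    """Label each ROI pixel with its quadrant: 'TS' (superior temporal),
    'TI' (inferior temporal), 'NS' (superior nasal), 'NI' (inferior nasal).
    The half of the ROI containing the macula is the temporal side."""
    h, w = roi_shape
    labels = np.empty((h, w), dtype="<U2")
    upper = np.arange(h)[:, None] < h // 2   # above the auxiliary horizontal line
    left = np.arange(w)[None, :] < w // 2    # left of the auxiliary vertical line
    temporal = left if macula_on_left else ~left
    labels[upper & temporal] = "TS"
    labels[~upper & temporal] = "TI"
    labels[upper & ~temporal] = "NS"
    labels[~upper & ~temporal] = "NI"
    return labels
```

For a left-eye-style image with the macula on the left, the upper-left quadrant is labelled superior temporal and the lower-right quadrant inferior nasal.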
The grading of the nerve fiber layer defect may be accomplished with reference to step S608 of fig. 6 described above, and is not repeated here.
Fig. 8 is a block diagram illustrating an apparatus 800 for grading a nerve fiber layer defect based on a fundus image in accordance with an embodiment of the present disclosure. It is to be appreciated that the device implementing aspects of the present disclosure may be a single device (e.g., a computing device) or a multi-functional device including various peripheral devices.
As shown in fig. 8, the apparatus of the present disclosure may include a central processing unit ("CPU") 811, which may be a general purpose CPU, a special purpose CPU, or another execution unit for information processing and program execution. Further, the device 800 may also include a mass memory 812 and a read only memory ("ROM") 813, wherein the mass memory 812 may be configured to store various types of data, including fundus images to be processed, algorithm data, intermediate results, and the various programs required to operate the device 800. The ROM 813 may be configured to store the data and instructions necessary for the power-on self-test of the device 800, the initialization of functional modules in the system, the drivers for basic input/output of the system, and booting the operating system.
Optionally, the device 800 may also include other hardware platforms or components, such as a tensor processing unit ("TPU") 814, a graphics processing unit ("GPU") 815, a field programmable gate array ("FPGA") 816, and a machine learning unit ("MLU") 817, as shown. It will be appreciated that while various hardware platforms or components are shown in device 800, this is by way of example only and not limitation, and that one of skill in the art may add or remove corresponding hardware as desired. For example, device 800 may include only a CPU, associated memory device, and interface device to implement the methods of the present disclosure for grading a nerve fiber layer defect based on fundus images.
In some embodiments, to facilitate the transfer of and interaction with data on external networks, the device 800 of the present disclosure further comprises a communication interface 818, through which the device may connect to a local area network/wireless local area network ("LAN/WLAN") 805 and, via the LAN/WLAN, further to a local server 806 or the Internet 807. Alternatively or additionally, the device 800 of the present disclosure may also connect directly to the Internet or a cellular network via the communication interface 818 based on wireless communication technology, such as 3rd generation ("3G"), 4th generation ("4G"), or 5th generation ("5G") wireless communication technology. In some application scenarios, the device 800 of the present disclosure may also access a server 808 and a database 809 on an external network as needed to obtain various known image models, data, and modules, and may store various data remotely, such as data or instructions for presenting, for example, regions of interest, foreground regions, and grading results.
Peripheral devices of the device 800 may include a display device 802, an input device 803, and a data transmission interface 804. In one embodiment, the display device 802 may, for example, include one or more speakers and/or one or more visual displays configured for voice prompting and/or visual display of the process or final result of grading a nerve fiber layer defect based on fundus images according to the present disclosure. The input device 803 may include a keyboard, a mouse, a microphone, a gesture-capturing camera, or other input buttons or controls configured to receive fundus images and/or user instructions. The data transmission interface 804 may include, for example, a serial interface, a parallel interface, a universal serial bus ("USB") interface, a small computer system interface ("SCSI"), serial ATA, FireWire, PCI Express, or a high definition multimedia interface ("HDMI"), configured for data transfer and interaction with other devices or systems. According to aspects of the present disclosure, the data transmission interface 804 may receive fundus images acquired by a fundus camera and transmit the fundus images or various other types of data or results to the device 800.
The above-described CPU 811, mass memory 812, ROM 813, TPU 814, GPU 815, FPGA 816, MLU 817, and communication interface 818 of the device 800 of the present disclosure can be interconnected by a bus 819 and enable data interaction with peripheral devices through the bus. In one embodiment, CPU 811 may control other hardware components in device 800 and its peripherals through the bus 819.
An apparatus that may be used to perform the grading of nerve fiber layer defects based on fundus images according to the present disclosure is described above in connection with fig. 8. It is to be understood that the device structure or architecture herein is merely exemplary, and that the implementations and implementation entities of the present disclosure are not limited thereto, but may be modified without departing from the spirit of the present disclosure.
Those skilled in the art will also appreciate from the foregoing description, taken in conjunction with the accompanying drawings, that embodiments of the present disclosure may also be implemented as software programs. The present disclosure thus also provides a computer program product. The computer program product may be used to implement the method of grading a nerve fiber layer defect based on fundus images described in connection with figs. 1-7 of the present disclosure.
It should be noted that although the operations of the disclosed methods are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in that particular order or that all of the illustrated operations be performed in order to achieve desirable results. Rather, the steps depicted in the flowcharts may change the order of execution. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform.
It should be understood that when the terms "first," "second," "third," and "fourth," etc. are used in the claims, the specification and the drawings of the present disclosure, they are used merely to distinguish between different objects, and not to describe a particular order. The terms "comprises" and "comprising" when used in the specification and claims of this disclosure are taken to specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present disclosure is for the purpose of describing particular embodiments only, and is not intended to be limiting of the disclosure. As used in this disclosure and in the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the present disclosure and claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
While the embodiments of the present disclosure are described above, the descriptions are merely examples employed to facilitate understanding of the present disclosure, and are not intended to limit the scope and application of the present disclosure. Any person skilled in the art to which this disclosure pertains will appreciate that numerous modifications and variations in form and detail can be made without departing from the spirit and scope of the disclosure, but the scope of the disclosure is to be determined by the appended claims.

Claims (10)

1. A method of grading a nerve fiber layer defect based on fundus images, comprising:
inputting a fundus image into a detection model to obtain a detection frame result of a target region in the fundus image;
determining a region of interest based on a detection frame result of the target region;
inputting the region of interest into a nerve fiber layer defect segmentation model to obtain a foreground region related to the nerve fiber layer defect region; and
the nerve fiber layer defects are graded according to the region of interest and the foreground region.
2. The method of claim 1, wherein the target region comprises at least an optic disc region, wherein determining a region of interest based on a detection frame result of the target region comprises:
determining the center of the optic disc region and the diameter of the optic disc region based on the detection frame result of the optic disc region; and
taking the center of the optic disc region as a center point, and taking a region whose length and width are a first preset multiple of the diameter of the optic disc region as the region of interest.
3. The method of claim 2, wherein the target region further comprises a macular region, grading a nerve fiber layer defect according to the region of interest and the foreground region comprising:
determining a specific region related to the grading according to the region of interest and the macular region; and
the nerve fiber layer defect is graded based on the specific region and the foreground region.
4. A method according to claim 3, wherein determining a specific region related to the grading from the region of interest and the macular region comprises:
dividing the region of interest by using an auxiliary horizontal line and an auxiliary vertical line passing through the center of the region of interest; and
a specific region associated with the grading is determined based on the partitioned region of interest and the macular region.
5. The method of claim 4, wherein the specific region comprises a temporal-inferior region, a temporal-superior region, a nasal-superior region, and/or a nasal-inferior region.
6. The method of claim 5, wherein determining a particular region associated with the classification based on the partitioned region of interest and the macular region comprises:
determining a half area containing the macula area in the divided region of interest as a temporal side, and determining a half area not containing the macula area in the divided region of interest as a nasal side; and
the upper and lower half regions of the temporal side are determined as the temporal upper and temporal lower regions, respectively, and the upper and lower half regions of the nasal side are determined as the nasal upper and nasal lower regions, respectively.
7. The method of claim 6, wherein grading a nerve fiber layer defect based on the specific region and the foreground region comprises:
forming an auxiliary circle taking the center of the region of interest as a center and taking a second preset multiple of the diameter of the optic disc region as a radius;
calculating, respectively for the temporal-inferior region, the temporal-superior region, the nasal-superior region and the nasal-inferior region, the occupancy ratio of the foreground region over the pixel points of the corresponding arc of the auxiliary circle;
calculating a weighted sum of the corresponding occupancy ratios and comparing the weighted sum with a grading threshold; and
grading the nerve fiber layer defect based on a comparison result of the weighted sum with the grading threshold.
8. The method of claim 7, wherein the grading threshold comprises a plurality of thresholds and the plurality of thresholds constitute a plurality of threshold ranges, each threshold range corresponding to a grading level associated with a severity of the nerve fiber layer defect, wherein grading the nerve fiber layer defect based on the comparison of the weighted sum with the grading threshold comprises:
determining a threshold range corresponding to the weighted sum based on a comparison result of the weighted sum and a grading threshold; and
and determining a corresponding grading level according to a threshold range corresponding to the weighted sum so as to grade the nerve fiber layer defect.
9. An apparatus for grading a nerve fiber layer defect based on fundus images, comprising:
a processor; and
a memory coupled to the processor, the memory having stored therein computer program code which, when executed by the processor, causes the apparatus to perform the method of any of claims 1-8.
10. A computer-readable storage medium having stored thereon computer-readable instructions for grading a nerve fiber layer defect based on a fundus image, which when executed by one or more processors, implement the method of any of claims 1-8.
CN202210101496.6A 2022-01-27 2022-01-27 Method for classifying nerve fiber layer defects based on fundus images and related products Pending CN116563195A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210101496.6A CN116563195A (en) 2022-01-27 2022-01-27 Method for classifying nerve fiber layer defects based on fundus images and related products


Publications (1)

Publication Number Publication Date
CN116563195A true CN116563195A (en) 2023-08-08

Family

ID=87502316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210101496.6A Pending CN116563195A (en) 2022-01-27 2022-01-27 Method for classifying nerve fiber layer defects based on fundus images and related products

Country Status (1)

Country Link
CN (1) CN116563195A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination