CN116681707B - Cornea fluorescein staining image identification grading method - Google Patents


Info

Publication number
CN116681707B
CN116681707B (application CN202310979173.1A)
Authority
CN
China
Prior art keywords
cornea
region
brightness
gray
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310979173.1A
Other languages
Chinese (zh)
Other versions
CN116681707A (en)
Inventor
田磊
周光泉
接英
任子恺
冯珺
文晧男
吕静
Current Assignee
Beijing Tongren Medical Technology Co ltd
Beijing Tongren Hospital
Original Assignee
Beijing Tongren Medical Technology Co ltd
Beijing Tongren Hospital
Priority date
Filing date
Publication date
Application filed by Beijing Tongren Medical Technology Co ltd and Beijing Tongren Hospital
Priority to CN202310979173.1A
Publication of CN116681707A
Application granted
Publication of CN116681707B
Legal status: Active

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02P: Climate change mitigation technologies in the production or processing of goods
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Eye Examination Apparatus (AREA)

Abstract

The invention relates to a cornea fluorescein staining image identification and grading method comprising the following steps: obtaining a corneal fluorescein staining image; identifying the cornea region of interest in the image, the cornea region of interest being a sector with an included angle of 90 degrees whose center point is the center point of the pupil and whose radius is the distance from the corneal center point to the lower edge of the cornea; segmenting the stained regions within the cornea region of interest; extracting topological features of the segmented stained regions, and extracting morphological features, first-order histogram statistical features and second-order gray matrix features of the cornea region of interest; and identifying and grading the corneal fluorescein staining based on the topological, morphological, first-order histogram statistical and second-order gray matrix features. Because grading draws on topological as well as morphological, first-order histogram statistical and second-order gray matrix features, grading accuracy is improved.

Description

Cornea fluorescein staining image identification grading method
Technical Field
The invention relates to the technical field of image processing, in particular to a cornea fluorescein staining image identification grading method.
Background
Corneal fluorescein staining is a key biomarker for assessing dry eye. However, the subjectivity of corneal fluorescein staining grading leads to poor consistency, which makes accurate diagnosis more difficult for clinicians.
A common current approach is as follows: sodium fluorescein, used as the corneal stain, is taken up by diseased cells and enters through altered tight junctions and the spaces left by exfoliated epithelial cells. Stained cornea images are then acquired and corneal fluorescein staining is assessed with a subjective scoring scale.
A typical subjective scoring scale is the NEI (National Eye Institute) scale, which divides the cornea into five areas (central, superior, inferior, nasal and temporal) and scores each area from 0 to 3.
When cornea images are graded by existing methods, spatial position information is ignored, which limits grading accuracy.
Disclosure of Invention
First, the technical problem to be solved
In order to solve the problems, the invention provides a cornea fluorescein staining image identification grading method.
(II) technical scheme
In order to achieve the above purpose, the main technical scheme adopted by the invention comprises the following steps:
a corneal fluorescein staining image identification grading method, comprising the steps of:
Obtaining a corneal fluorescein staining image;
identifying the cornea region of interest in the image; the cornea region of interest is a sector with an included angle of 90 degrees, its center point is the center point of the pupil, and its radius is the distance from the corneal center point to the lower edge of the cornea;
segmenting the stained regions within the cornea region of interest;
extracting topological features of the segmented stained regions, and extracting morphological features, first-order histogram statistical features and second-order gray matrix features of the cornea region of interest;
and identifying and grading the corneal fluorescein staining based on the topological features, morphological features, first-order histogram statistical features and second-order gray matrix features.
Optionally, identifying a region of interest of the cornea in the image includes:
binarizing and segmenting the scleral region with an Otsu threshold; the scleral region lies outside the cornea and its brightness is greater than that of the cornea;
determining the distance from the corneal center point to the lower corneal edge based on the maximum diameter of the inner edge of the processed scleral region;
determining the central region of the cornea based on the inner edge of the processed scleral region;
forming multiple sets of center-radius combinations by taking each point in the central region of the cornea as a center and x times the distance from the corneal center point to the lower corneal edge as a radius, where x is a fraction less than 1;
detecting the gray value intensity of the circle formed by each set of center-radius combinations with an integro-differential operator;
determining the area with the highest gray intensity as the pupil, and determining the center point of the pupil as the center point of the cornea region of interest.
Optionally, performing staining-region segmentation of the cornea region of interest comprises:
extracting the green channel data of the cornea region of interest;
performing a morphological opening operation on the green channel data to suppress the bright stained regions;
using the cornea region of interest as the template, performing grayscale reconstruction on the result of the opening operation;
segmenting the stained regions based on the difference between the cornea region of interest and the reconstructed opening result.
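The claimed sequence (opening, grayscale reconstruction under the ROI, then differencing) is a top-hat by reconstruction. A minimal numpy/scipy sketch follows; the green-channel values are hypothetical and `reconstruct_by_dilation` is an illustrative helper, not a routine named by the patent:

```python
import numpy as np
from scipy import ndimage

def reconstruct_by_dilation(marker, mask):
    """Grayscale morphological reconstruction of `marker` under `mask`
    by iterated geodesic dilation (3x3 neighborhood) until stability."""
    out = np.minimum(marker, mask).astype(float)
    while True:
        dilated = np.minimum(ndimage.grey_dilation(out, size=(3, 3)), mask)
        if np.array_equal(dilated, out):
            return out
        out = dilated

# Hypothetical green-channel ROI: dark background with two bright stained dots.
green = np.zeros((9, 9))
green[2, 2] = green[6, 6] = 200.0

opened = ndimage.grey_opening(green, size=(3, 3))  # removes small bright dots
reconstructed = reconstruct_by_dilation(opened, green)
stain_mask = (green - reconstructed) > 50          # difference -> stained regions
```

The difference image is large exactly where small bright structures (punctate staining) were removed by the opening, which is what makes the final thresholding isolate the stained regions.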
Optionally, extracting topological features of the segmented cornea region of interest includes:
taking the centroid of each segmented stained region as a vertex;
dilating each segmented stained region at a plurality of scales;
if the dilated regions overlap, connecting the vertices of the overlapping regions to form a topological graph of spatial connectivity and distribution;
extracting topological features based on the topological graph;
the topological features include: subgraph count, average vertex degree, maximum vertex degree, average vertex eccentricity, diameter, average clustering coefficient, giant connected component ratio, and isolated point percentage.
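The graph construction above can be sketched in plain numpy. As a simplification (an assumption, since the patent dilates the full regions), two vertices are connected when disks of a given dilation radius around their centroids overlap; the centroid coordinates are hypothetical:

```python
import numpy as np

def topology_features(centroids, dilation_radius):
    """Build the connectivity graph over stained-region centroids and
    return a few of the listed topological features."""
    n = len(centroids)
    pts = np.asarray(centroids, dtype=float)
    # Disks of radius `dilation_radius` overlap when centers are closer
    # than twice the radius.
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    adj = (dist < 2 * dilation_radius) & ~np.eye(n, dtype=bool)
    degrees = adj.sum(axis=1)

    # Connected components (the "subgraphs") by depth-first search.
    seen, components = set(), []
    for s in range(n):
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(int(u) for u in np.nonzero(adj[v])[0] if u not in comp)
        seen |= comp
        components.append(comp)

    return {
        "subgraph_count": len(components),
        "average_degree": float(degrees.mean()),
        "max_degree": int(degrees.max()),
        "giant_component_ratio": max(len(c) for c in components) / n,
        "isolated_point_pct": 100.0 * (degrees == 0).sum() / n,
    }

feats = topology_features([(0, 0), (1, 0), (10, 10), (20, 20)], dilation_radius=1.0)
```

These graph statistics are what encode the spatial distribution of staining that per-region measures alone miss.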
Optionally, the morphological features include: average area, total area, average perimeter-to-area ratio, total perimeter-to-area ratio, average circularity, average perimeter, total perimeter, minimum bounding rectangle aspect ratio, and region count.
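For a single binary region, the core per-region quantities can be sketched on the pixel grid as follows; the edge-counting perimeter and the 2x2 test mask are illustrative assumptions, not the patent's exact definitions:

```python
import numpy as np

def shape_features(mask):
    """Pixel-grid shape sketch for one binary region:
    area = foreground pixel count,
    perimeter = number of exposed 4-neighbor pixel edges,
    circularity = 4 * pi * area / perimeter**2 (1.0 for a perfect disk)."""
    mask = np.asarray(mask, dtype=bool)
    area = int(mask.sum())
    padded = np.pad(mask, 1)  # false border so rolls never wrap foreground
    perimeter = sum(
        int((padded & ~np.roll(padded, shift, axis)).sum())
        for axis in (0, 1) for shift in (1, -1)
    )
    return {
        "area": area,
        "perimeter": perimeter,
        "perimeter_area_ratio": perimeter / area,
        "circularity": 4 * np.pi * area / perimeter ** 2,
    }

feats = shape_features(np.ones((2, 2), dtype=bool))  # a 2x2 square region
```

Averaging and summing these over all segmented regions yields the average/total variants listed above.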
Optionally, the first-order histogram statistical features include: 10th percentile, 90th percentile, energy, entropy, interquartile range, kurtosis, maximum, mean absolute deviation, mean, median, minimum, range, robust mean absolute deviation, root mean square, skewness, total energy, uniformity, variance.
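A numpy sketch of a few of these statistics over the ROI gray values; the 16-bin histogram for entropy/uniformity follows common radiomics conventions (an assumption, since the patent specifies no binning):

```python
import numpy as np

def first_order_features(values):
    """A subset of the listed first-order statistics over ROI gray values."""
    v = np.asarray(values, dtype=float)
    p = np.histogram(v, bins=16)[0] / v.size  # bin probabilities
    p = p[p > 0]
    return {
        "p10": float(np.percentile(v, 10)),
        "p90": float(np.percentile(v, 90)),
        "energy": float((v ** 2).sum()),
        "entropy": float(-(p * np.log2(p)).sum()),
        "uniformity": float((p ** 2).sum()),
        "iqr": float(np.percentile(v, 75) - np.percentile(v, 25)),
        "mean_abs_dev": float(np.abs(v - v.mean()).mean()),
        "rms": float(np.sqrt((v ** 2).mean())),
        "range": float(v.max() - v.min()),
    }

feats = first_order_features([1.0, 2.0, 3.0, 4.0])  # hypothetical gray values
```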
Optionally, the second-order gray matrix features include: gray level co-occurrence matrix (GLCM) features, gray level size zone matrix (GLSZM) features, gray level run length matrix (GLRLM) features, neighboring gray tone difference matrix (NGTDM) features, and gray level dependence matrix (GLDM) features;
the gray level co-occurrence matrix features include: autocorrelation, cluster prominence, cluster shade, cluster tendency, contrast, correlation, difference average, difference entropy, difference variance, inverse difference, normalized inverse difference, inverse difference moment, normalized inverse difference moment, informational measures of correlation, inverse variance, joint average, joint energy, joint entropy, maximal correlation coefficient, maximum probability, sum average, sum entropy and sum variance;
the gray level size zone matrix features include: gray level non-uniformity, normalized gray level non-uniformity, gray level variance, high gray level zone emphasis, large area high gray level emphasis, large area low gray level emphasis, low gray level zone emphasis, size zone non-uniformity, normalized size zone non-uniformity, small area emphasis, small area high gray level emphasis, small area low gray level emphasis, zone entropy, zone percentage, zone variance;
the gray level run length matrix features include: gray level non-uniformity, normalized gray level non-uniformity, gray level variance, high gray level run emphasis, long run high gray level emphasis, long run low gray level emphasis, short run high gray level emphasis, short run low gray level emphasis;
the neighboring gray tone difference matrix features include: busyness, coarseness, complexity, contrast, strength;
the gray level dependence matrix features include: dependence entropy, dependence non-uniformity, normalized dependence non-uniformity, dependence variance, gray level non-uniformity, gray level variance, high gray level emphasis, large dependence high gray level emphasis, large dependence low gray level emphasis, small dependence high gray level emphasis, small dependence low gray level emphasis.
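As an example of one of these matrices, a GLCM for a single offset can be built and summarized in a few lines of numpy; the tiny 2-level test image and the choice of a symmetric, normalized matrix are illustrative assumptions:

```python
import numpy as np

def glcm_features(img, levels, dx=1, dy=0):
    """Gray level co-occurrence matrix for one non-negative offset (dx, dy),
    symmetrized and normalized, with a few of the listed GLCM features."""
    img = np.asarray(img)
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    glcm += glcm.T                  # make the co-occurrence symmetric
    p = glcm / glcm.sum()           # normalize to joint probabilities
    i, j = np.indices((levels, levels))
    nz = p[p > 0]
    return {
        "contrast": float((p * (i - j) ** 2).sum()),
        "joint_energy": float((p ** 2).sum()),
        "joint_entropy": float(-(nz * np.log2(nz)).sum()),
        "max_probability": float(p.max()),
    }

feats = glcm_features(np.array([[0, 0, 1], [1, 1, 0]]), levels=2)
```

The other second-order matrices (GLSZM, GLRLM, NGTDM, GLDM) follow the same pattern: count a spatial relationship into a matrix, normalize, then reduce to scalar statistics.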
Optionally, identifying and grading the corneal fluorescein staining based on the topological, morphological, first-order histogram statistical and second-order gray matrix features comprises:
removing insignificant features from the topological, morphological, first-order histogram statistical and second-order gray matrix features with one-way analysis of variance and a Pearson-correlation-based redundancy filter, obtaining the preliminary grading features;
obtaining the feature importance of the preliminary grading features with a decision-tree-based feature selection method, and selecting the final grading features by importance, the final grading features consisting mainly of topological features; and identifying and grading the corneal fluorescein staining with a support vector machine using the final grading features.
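The selection-then-classification pipeline above can be sketched with scikit-learn on synthetic data; the feature matrix, grade labels, and the 0.05 / 0.9 cut-offs are hypothetical, not values the patent states:

```python
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical matrix: 80 eyes x 6 features, with a synthetic grade that
# depends only on the first two columns (stand-ins for topological features).
X = rng.normal(size=(80, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# 1) One-way ANOVA filter: keep features significantly related to the grade.
_, pvals = f_classif(X, y)
kept = np.flatnonzero(pvals < 0.05)

# 2) Pearson redundancy filter: drop a kept feature that is highly
#    correlated (|r| > 0.9) with an earlier kept feature.
corr = np.atleast_2d(np.corrcoef(X[:, kept], rowvar=False))
keep_mask = np.ones(len(kept), dtype=bool)
for a in range(len(kept)):
    for b in range(a + 1, len(kept)):
        if keep_mask[a] and abs(corr[a, b]) > 0.9:
            keep_mask[b] = False
kept = kept[keep_mask]

# 3) Rank the survivors by decision-tree importance, then
# 4) grade with a support vector machine on the top-ranked features.
tree = DecisionTreeClassifier(random_state=0).fit(X[:, kept], y)
order = kept[np.argsort(tree.feature_importances_)[::-1]]
final = order[:2]
svm = SVC().fit(X[:, final], y)
accuracy = svm.score(X[:, final], y)  # training accuracy, for illustration
```

On real data the SVM would of course be evaluated on held-out images rather than its own training set.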
Optionally, the method for determining the scleral region includes:
acquiring the RGB values of each pixel point in the cornea region;
determining the brightness bi = max{Ri, Gi, Bi}, where i indexes the pixels in the cornea region, bi is the brightness of pixel i, and Ri, Gi and Bi are the R, G and B values of pixel i;
determining the standard deviation σ0 of the brightness of the pixel points on the boundary of the region to which the cornea belongs;
determining the maximum brightness over all pixel points as the cornea brightness, and determining the scleral region from the cornea brightness and σ0.
Optionally, determining the scleral region from the cornea brightness and σ0 includes:
initializing the attribute of every pixel point outside the region to which the cornea belongs as undetermined;
determining, according to the attributes, a continuous high-brightness region for each pixel point on the boundary of the region to which the cornea belongs;
determining the region covered by the continuous high-brightness regions of all boundary pixel points as the scleral region;
wherein the continuous high-brightness region of any boundary pixel point j is determined as follows:
step 1, taking pixel point j as the current processing point;
step 2, determining whether the current processing point has adjacent pixel points; an adjacent pixel point is one or more of the pixels to the left, right, above or below the current processing point whose attribute is undetermined;
step 3, if no adjacent pixel point exists, determining the region composed of the pixel points whose attribute is "determined for pixel j" as the continuous high-brightness region of pixel j; if adjacent pixel points exist, executing steps 4 to 6;
step 4, determining the RGB values of the adjacent pixel points;
step 5, determining the brightness B = max{R, G, B} of each adjacent pixel point;
step 6, if the brightness of every adjacent pixel point is less than the cornea brightness, stopping the continuous high-brightness region search for pixel j and determining the region composed of the pixel points whose attribute is "determined for pixel j" as the continuous high-brightness region of pixel j;
if a suspected high-brightness pixel exists, updating its attribute to "to be determined for pixel j", taking it as the current processing point, and repeating step 2 and the subsequent steps; a suspected high-brightness pixel is an adjacent pixel whose brightness is greater than or equal to the cornea brightness but less than the brightness threshold, the brightness threshold being the cornea brightness × (1 + σ0);
if a highlighted pixel exists, updating its attribute to "determined for pixel j", updating the attribute of every pixel marked "to be determined for pixel j" to "determined for pixel j", taking the highlighted pixel as the current processing point, and repeating step 2 and the subsequent steps; a highlighted pixel is an adjacent pixel whose brightness is greater than or equal to the brightness threshold.
(III) beneficial effects
The beneficial effects of the application are as follows:
the application relates to a cornea fluorescein staining image identification and grading method comprising: obtaining a corneal fluorescein staining image; identifying the cornea region of interest in the image, the region of interest being a sector with an included angle of 90 degrees whose center point is the center point of the pupil and whose radius is the distance from the corneal center point to the lower edge of the cornea; segmenting the stained regions within the cornea region of interest; extracting topological features of the segmented stained regions, together with morphological features, first-order histogram statistical features and second-order gray matrix features of the region of interest; and identifying and grading the corneal fluorescein staining based on these features. Because the application grades corneal fluorescein staining using topological as well as morphological, first-order histogram statistical and second-order gray matrix features, grading accuracy is improved.
Drawings
FIG. 1 is a flow chart of a method for identifying and classifying corneal fluorescein stained images according to an embodiment of the present application;
FIG. 2 is a schematic diagram of brightest area elimination according to an embodiment of the present application;
FIG. 3 is a schematic representation of a corneal fluorescein staining image provided in accordance with one embodiment of the present application;
FIG. 4 is a schematic view of a cornea region of interest according to one embodiment of the present application;
fig. 5 (a) is a schematic diagram of scale=4 according to an embodiment of the present application;
fig. 5 (b) is a schematic diagram of scale=16 according to an embodiment of the present application;
fig. 5 (c) is a schematic diagram of scale=32 according to an embodiment of the present application;
fig. 5 (d) is a schematic diagram of scale=48 according to an embodiment of the present application;
fig. 5 (e) is a schematic diagram of scale=64 according to an embodiment of the present application.
Detailed Description
The application will be better explained by the following detailed description of the embodiments with reference to the drawings.
A common current approach is as follows: sodium fluorescein, used as the corneal stain, is taken up by diseased cells and enters through altered tight junctions and the spaces left by exfoliated epithelial cells. Stained cornea images are acquired and corneal fluorescein staining is assessed with a subjective scoring scale, such as the NEI scale, which divides the cornea into five areas (central, superior, inferior, nasal and temporal) and scores each area from 0 to 3. Existing methods ignore spatial position information when grading cornea images, which limits grading accuracy.
Based on this, the invention provides a cornea fluorescein staining image identification and grading method comprising: obtaining a corneal fluorescein staining image; identifying the cornea region of interest in the image, the region of interest being a sector with an included angle of 90 degrees whose center point is the center point of the pupil and whose radius is the distance from the corneal center point to the lower edge of the cornea; segmenting the stained regions within the cornea region of interest; extracting topological features of the segmented stained regions, together with morphological features, first-order histogram statistical features and second-order gray matrix features of the region of interest; and identifying and grading the corneal fluorescein staining based on these features. Because the invention grades corneal fluorescein staining using topological as well as morphological, first-order histogram statistical and second-order gray matrix features, grading accuracy is improved.
Referring to fig. 1, the implementation process of the corneal fluorescein staining image identification grading method provided in this embodiment is as follows:
s101, obtaining a corneal fluorescein staining image.
The corneal fluorescein staining image obtained in this step may be a sequence of consecutive corneal fluorescein staining images.
In addition, to sharpen the limbus and eliminate reflection areas, a series of image preprocessing operations are applied after the corneal fluorescein staining image is acquired in step S101. For example, when a low-contrast image is obtained, possibly due to changes in the lighting environment during capture, an image enhancement method is used to increase the visibility of the corneal limbus.
In addition, the Otsu thresholding method may be used to detect and eliminate the brightest regions, thereby addressing potential reflection regions, as shown in fig. 2.
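Otsu's method picks the gray level that maximizes between-class variance; a minimal numpy sketch follows, applied to a tiny hypothetical image in which the threshold isolates the brightest (reflection) pixels:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold of an 8-bit grayscale image: the level
    that maximizes the between-class variance of the two populations."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Hypothetical image: dark cornea pixels plus a few bright reflections.
img = np.array([[10, 12, 11, 240],
                [13, 10, 245, 250],
                [12, 11, 10, 242]], dtype=np.uint8)
t = otsu_threshold(img)
reflections = img >= t  # mask of the brightest (reflection) pixels
```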
S102, identifying a cornea region of interest in the image.
The cornea region of interest (Region of Interest, ROI) is a sector with an included angle of 90 degrees; its center point is the center point of the pupil, and its radius is the distance from the corneal center point to the lower edge of the cornea.
The implementation process of the steps is as follows:
s102-1, binarizing and segmenting the scleral region through an Otsu threshold.
Wherein the scleral region is located outside the cornea, and the brightness of the scleral region is greater than the brightness of the cornea.
The scleral region can be determined either with an existing identification method or with the method provided in this scheme, which comprises the following steps:
s201, RGB values of all pixel points in the area of the cornea are obtained.
For example, as shown in fig. 3, the corneal fluorescein staining image (here the preprocessed image) is the region to which the cornea belongs, and each cell of the grid is one pixel point. The pixel points of the region include those fully covered by the shaded portion (e.g., pixel a1) and those only partially covered (e.g., pixel a2).
In this step, RGB values of each pixel in the region of the cornea are obtained first, that is, the values of three color channels of red (R), green (G), and blue (B) of each pixel are obtained.
S202, determining the brightness bi = max{Ri, Gi, Bi} of each pixel point.
Here i indexes the pixels in the region to which the cornea belongs, bi is the brightness of pixel i, and Ri, Gi and Bi are the R, G and B values of pixel i, respectively.
S203, the standard deviation σ0 of the brightness of each pixel point of the boundary of the region to which the cornea belongs is determined.
For example, the boundary of the region to which the cornea belongs includes x pixels, then
1) The sum of the brightness of the x pixel points is calculated: Sum = B1 + B2 + … + Bx.
2) The average is calculated: Avg = Sum / x.
3) The standard deviation is calculated: σ0 = { [ (B1 - Avg)^2 + (B2 - Avg)^2 + … + (Bx - Avg)^2 ] / x }^(1/2).
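Steps 1) to 3) worked through in numpy, with hypothetical brightness values for x = 5 boundary pixels:

```python
import numpy as np

# Brightness values B1..Bx of x = 5 hypothetical boundary pixels.
B = np.array([200.0, 210.0, 190.0, 205.0, 195.0])

Sum = B.sum()                                        # B1 + B2 + ... + Bx
Avg = Sum / len(B)                                   # Sum / x
sigma0 = np.sqrt(((B - Avg) ** 2).sum() / len(B))    # population std dev
```

Note this is the population standard deviation (divide by x, not x - 1), matching the formula above; it equals `np.std(B)`.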
S204, determining the maximum value of the brightness of all the pixel points as the brightness of the cornea, and determining the sclera area according to the brightness of the cornea and sigma 0.
The implementation process of determining the sclera area according to the brightness of the cornea and sigma 0 comprises the following steps:
1) And initializing the attribute of all pixel points outside the area of the cornea as undetermined.
That is, the attribute of the pixel point of the full white background in fig. 3 is initialized to be undetermined.
2) And determining a continuous high-brightness area of each pixel point of the boundary of the area to which the cornea belongs according to the attribute.
A boundary pixel point is one that, in fig. 3, is neither entirely white nor entirely shaded, such as a2.
For example, the continuous high-luminance area for any pixel point j of the boundary of the area to which the cornea belongs is determined by:
and step 1, taking any pixel point j as a current processing point.
And 2, determining whether the current processing point has adjacent pixel points.
The adjacent pixel points are one or more of left, right, upper and lower adjacent to the current processing point, and the attribute of the adjacent pixel points is undetermined.
Taking a2 as the current processing point: its left neighbor is a6, its right neighbor a4, its upper neighbor a5 and its lower neighbor a3. Since only the pixel points of the completely white background carry an attribute, a3, a5 and a6 have no attribute; only a4 does, and because its attribute is undetermined, the adjacent pixel point is a4.
Step 3, if the adjacent pixel points do not exist, determining the area composed of the adjacent pixel points with the determined attribute of the pixel point j as a continuous high-brightness area of any pixel point j; if there are adjacent pixels, go to step 4 to step 6.
If no adjacent pixel point exists, the image edge has been reached or no undetermined pixel remains outside the cornea region; the continuous high-brightness region search for a2 then stops (i.e., the processing of steps 1 to 6 stops), and the region composed of the pixel points whose attribute is "determined for a2" is taken as the continuous high-brightness region of a2.
If an adjacent pixel point exists, the image edge has not been reached and undetermined pixels remain outside the cornea region; the continuous high-brightness region is then traced through steps 4 to 6.
And 4, determining RGB values of adjacent pixel points.
This step will obtain the RGB values of a4, i.e. the values of the three color channels of red (R), green (G), blue (B) of a4.
And 5, determining the brightness B=max { R, G, B } of the adjacent pixel points.
This step determines the brightness of a4 as the maximum of the R, G and B values of a4.
And 6, if the brightness of all the adjacent pixel points is smaller than the cornea brightness, stopping the continuous high brightness region determining step of any pixel point j, and determining the region composed of the adjacent pixel points with the determined attribute of the pixel point j as the continuous high brightness region of any pixel point j.
And if the suspected high-brightness pixel exists, updating the attribute of the suspected high-brightness pixel to be the pixel j to be determined, taking the suspected high-brightness pixel as the current processing point, and repeating the step 2 and the subsequent steps. The suspected high-brightness pixel points are adjacent pixel points, the brightness of the suspected high-brightness pixel points is larger than or equal to the brightness of the cornea, and the brightness of the suspected high-brightness pixel points is smaller than a brightness threshold value. The brightness threshold is the brightness of the cornea x (1+σ0).
If the highlight pixel exists, updating the attribute of the highlight pixel to be determined as the pixel j, updating the attribute of the pixel to be determined as the pixel j, taking the highlight pixel as the current processing point, and repeating the step 2 and the subsequent steps. Wherein the highlighted pixel is an adjacent pixel, and the brightness of the highlighted pixel is greater than or equal to the brightness threshold.
For example, if the brightness of a4 is less than the cornea brightness then, since a4 is the only adjacent pixel, the brightness of all adjacent pixels is less than the cornea brightness. This means the boundary of the continuous high-brightness region of a2 has been reached, so the search stops (i.e., the processing of steps 1 to 6 stops) and the region composed of the pixel points whose attribute is "determined for a2" is taken as the continuous high-brightness region of a2.
If the brightness of a4 is greater than or equal to the cornea brightness but less than the cornea brightness × (1 + σ0), a4 is a suspected high-brightness pixel: its brightness exceeds the cornea's, but only slightly, possibly because of image coloring, so it cannot yet be concluded that it belongs to a continuous high-brightness region. Whether it does depends on its neighbors: if a neighbor belongs to the continuous high-brightness region, a4 does too; if not, a4 does not. Its attribute is therefore updated to "to be determined for a2", a4 becomes the current processing point, and step 2 and the subsequent steps are repeated (determining the adjacent pixel points, their brightness, and so on).
If the brightness of a4 is greater than or equal to the cornea brightness × (1 + σ0), a4 is a highlighted pixel and definitely belongs to the continuous high-brightness region, so its attribute is updated to "determined for a2". At the same time, any pixel whose attribute is "to be determined for a2" is no longer in doubt, because it now connects to a pixel of the continuous high-brightness region, so its attribute is also updated to "determined for a2". Then a4 becomes the current processing point and step 2 and the subsequent steps are repeated to keep tracing the boundary of the continuous high-brightness region.
3) The region covered by the continuous high-brightness region of all the pixel points of the boundary of the region to which the cornea belongs is determined as the sclera region.
The continuous high-brightness regions of some pixel points may overlap; the overlapping portion also belongs to the scleral region.
In addition, since the scleral region is generally semicircular, this step may instead determine the inscribed semicircle of the region covered by the continuous high-brightness regions of all boundary pixel points as the scleral region.
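The region-growing procedure of steps 1 to 6 can be sketched as a plain breadth-first search. This is a simplification under stated assumptions: the suspected/highlighted attribute bookkeeping is collapsed into a single flag (the grown region is kept only if it contains at least one highlighted pixel), and the grid, brightness values and σ0 are hypothetical:

```python
from collections import deque

def continuous_high_brightness_region(brightness, start, cornea_brightness,
                                      sigma0, outside):
    """Grow a 4-connected region of pixels at least as bright as the
    cornea, starting from cornea boundary pixel `start`.
    `brightness[r][c]` is max(R, G, B); `outside[r][c]` is True for pixels
    outside the cornea region (the only ones initialized "undetermined").
    The region is kept only if it contains a highlighted pixel, i.e. one
    whose brightness reaches cornea_brightness * (1 + sigma0)."""
    rows, cols = len(brightness), len(brightness[0])
    threshold = cornea_brightness * (1 + sigma0)
    visited, region, has_highlight = {start}, set(), False
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in visited and outside[nr][nc]
                    and brightness[nr][nc] >= cornea_brightness):
                visited.add((nr, nc))
                region.add((nr, nc))
                if brightness[nr][nc] >= threshold:
                    has_highlight = True
                queue.append((nr, nc))
    return region if has_highlight else set()

# Hypothetical 4x4 grid: boundary pixel (1, 1) has the cornea brightness
# 100; sigma0 = 0.1, so the highlight threshold is 110.
brightness = [[50, 120, 115, 50],
              [50, 100, 105, 50],
              [50,  50,  50, 50],
              [50,  50,  50, 50]]
outside = [[True] * 4 for _ in range(4)]
outside[1][1] = False  # (1, 1) belongs to the cornea region itself
region = continuous_high_brightness_region(brightness, (1, 1), 100, 0.1, outside)
```

The scleral region is then the union of such regions over all boundary pixels.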
S102-2, determining the distance between the corneal center point and the lower edge of the cornea based on the maximum diameter of the inner edge of the processed scleral region.
This step determines the distance from the corneal center point to the lower corneal edge, i.e., to the circular corneal margin, from the maximum diameter of the inner edge of the binarized scleral region.
S102-3, determining a cornea central area based on the inner edge of the processed scleral area.
S102-4, forming multiple sets of center-radius combinations by taking each point in the central region of the cornea as a center and x times the distance between the corneal center point and the lower edge of the cornea as a radius.
Here x is a fraction less than 1, for example one third: multiple sets of center-radius combinations are formed by taking each point in the central region of the cornea as a center and a radius of approximately one third of the distance from the corneal center point to the lower corneal edge.
S102-5, detecting the gray-value intensity of the circle formed by each group of center-radius combinations by using an integro-differential operator.
S102-6, determining the area with the highest gray intensity as a pupil, and determining the central point of the pupil as the central point of the cornea region of interest.
Through steps S102-4 to S102-6, the combination of point and radius in the central corneal region whose circle exhibits the maximum outward radial gradient of gray-value intensity can be found; this circle delineates the pupil.
Taking any point (x0, y0) in the central region of the cornea with radius r, so that the point-radius combination is (x0, y0, r), as an example, the gray-value intensity of the circle formed by each group of center-radius combinations is detected by the integro-differential operator of the following formula:
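The formula itself did not survive extraction here. The operator described, a search over center-radius combinations for the circle with the sharpest radial change in average gray value, matches the classic Daugman integro-differential operator for pupil and iris localization; as an assumption, the missing formula is likely of the form:

```latex
\max_{(r,\,x_0,\,y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r,\,x_0,\,y_0} \frac{I(x,y)}{2\pi r}\,\mathrm{d}s \right|
```

where I(x, y) is the image, the contour integral averages the gray values over the circle of center (x0, y0) and radius r, and G_σ(r) is a Gaussian smoothing kernel applied along the radial direction.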
After the cornea and pupil are detected, the cornea region of interest is determined as a lower sector with an included angle of 90 degrees, as shown in fig. 4. That is, the apex of the cornea region of interest lies at the center of the pupil, the distance from that center to the lower corneal edge is taken as the radius, and the region of interest is the downward-facing sector with a 90-degree included angle.
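The 90-degree lower sector described above can be expressed as a boolean mask. The sketch below assumes image coordinates with y increasing downward; the function name and the (row, col) center convention are illustrative, not from the patent.

```python
import numpy as np

def lower_sector_mask(shape, center, radius, half_angle_deg=45):
    """Boolean mask of the lower sector with apex at `center`, opening
    2 * half_angle_deg about the straight-down direction, extent `radius`,
    i.e. the 90-degree cornea region of interest."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    dy = yy - center[0]           # image y grows downward, so dy > 0 is "down"
    dx = xx - center[1]
    dist = np.hypot(dy, dx)
    # angle measured from the downward vertical axis
    ang = np.degrees(np.abs(np.arctan2(dx, dy)))
    return (dist <= radius) & (dy >= 0) & (ang <= half_angle_deg)
```

With the pupil center and the center-to-lower-edge distance from S102, this mask selects exactly the pixels of the region of interest.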
The cornea region of interest can be accurately determined through S102; accurate determination of this region is critical to the grading performance of the corneal fluorescein staining image identification and grading method provided by this embodiment.
S103, segmenting the stained regions of the cornea region of interest.
This step is implemented as follows:
1. green channel data is extracted in the region of interest of the cornea.
Extracting green channel data may highlight the green stained area.
2. An opening operation is performed on the green channel data to remove small bright stained regions.
3. Grayscale reconstruction is then performed on the result of the opening operation, with the cornea region of interest serving as the reference image.
The opening operation followed by grayscale reconstruction effectively removes the bright stained regions while preserving the background.
4. The stained regions are segmented based on the difference between the cornea region of interest and the reconstructed result of the opening operation.
That is, the stained regions are segmented using the difference between the region of interest before and after reconstruction. Compared with common threshold segmentation methods such as Otsu, the segmentation scheme adopted in this embodiment detects stained regions more reliably under varying contrast.
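As a hedged illustration of steps 2 to 4 (opening, grayscale reconstruction, difference-based segmentation), the sketch below uses only SciPy; the structuring-element size and the difference threshold are assumptions, not values from the patent.

```python
import numpy as np
from scipy import ndimage as ndi

def segment_stains(green, roi_mask, size=5, thresh=10):
    """Opening-by-reconstruction followed by a top-hat-style difference:
    bright spots removed by the opening and not restored by reconstruction
    are taken as candidate stained regions."""
    opened = ndi.grey_opening(green, size=(size, size))
    # grayscale reconstruction by dilation: grow the opened image back
    # under the original until stable (the background is recovered,
    # isolated bright stained spots are not)
    rec = opened.copy()
    while True:
        dil = np.minimum(ndi.grey_dilation(rec, size=(3, 3)), green)
        if np.array_equal(dil, rec):
            break
        rec = dil
    diff = green.astype(int) - rec.astype(int)
    return (diff > thresh) & roi_mask
```

The `diff > thresh` residue is exactly the "difference before and after reconstruction" mentioned above, restricted to the region-of-interest mask.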
The stained regions can be accurately segmented through S103; the accuracy of this segmentation is critical to the grading performance of the corneal fluorescein staining image identification and grading method provided by this embodiment.
S104, extracting topological features of the segmented cornea region of interest, and extracting morphological features, first-order histogram statistical features and second-order gray matrix features of the cornea region of interest.
Wherein:
1. the process for extracting the topological feature of the segmented cornea region of interest comprises the following steps:
1) The centroid of each segmented cornea region of interest is taken as the vertex.
2) Each segmented cornea region of interest is dilated over multiple scales.
3) If the expanded areas overlap, connecting the vertexes of the overlapped areas to form a topological graph of spatial connectivity and distribution.
4) And extracting topological features based on the topological graph.
Wherein the topological feature comprises: subgraph number, average vertex degree, maximum vertex degree, average vertex eccentricity, diameter, average cluster coefficient, giant connected component ratio, isolated point percentage.
For the topological features, this embodiment uses a set of multi-scale graph-theoretic topological features to capture properties that texture and morphological features cannot accurately describe, such as the regional cluster distribution produced by confluent staining. The centroid of each stained region is taken as a graph vertex, and connectivity between stained regions is estimated by dilation at multiple scales: if two stained regions overlap after dilation, an edge connects their vertices, producing a topology graph that reflects connectivity. As the dilation scale grows from small to large, the number of edges and the connectivity of the graph increase. The construction process is shown in figs. 5(a) to 5(e): fig. 5(a) is the topology graph at scale=4; fig. 5(b) at scale=16; fig. 5(c) at scale=32; fig. 5(d) at scale=48; fig. 5(e) at scale=64.
In a specific implementation, 8 topological features such as vertex degree and eccentricity can be calculated at each scale. Disk dilation operators of different sizes are used, with radii increasing from 4 to 64, so that 128 multi-scale topological features over 16 scales can finally be extracted.
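The multi-scale graph construction described above can be sketched as follows. The use of SciPy, the exact disk construction, and computing only four of the eight listed statistics are assumptions; the patent does not specify an implementation.

```python
import numpy as np
from scipy import ndimage as ndi

def topo_features(label_img, scale):
    """Dilate each labeled stained region by a disk of radius `scale`;
    connect regions whose dilations overlap; report simple graph statistics."""
    n = int(label_img.max())
    r = scale
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    disk = (yy**2 + xx**2) <= r**2           # disk structuring element
    dilated = [ndi.binary_dilation(label_img == i, structure=disk)
               for i in range(1, n + 1)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if (dilated[i] & dilated[j]).any()]
    degree = np.zeros(n, dtype=int)
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
    # union-find to count connected subgraphs
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i, j in edges:
        parent[find(i)] = find(j)
    n_subgraphs = len({find(i) for i in range(n)})
    return {"edges": len(edges), "subgraphs": n_subgraphs,
            "avg_degree": float(degree.mean()), "max_degree": int(degree.max())}
```

Running this for disk radii 4 to 64 and collecting all eight statistics per scale reproduces the 128-feature scheme described above.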
2. Morphological features include: average area, total area, average perimeter to area ratio, total perimeter to area ratio, average circularity, average perimeter, total perimeter, minimum external rectangular aspect ratio, number.
3. The first-order histogram statistical features include: tenth percentile, ninetieth percentile, energy, entropy, interquartile range, kurtosis, maximum, mean absolute deviation, mean, median, minimum, range, robust mean absolute deviation, root mean square, skewness, total energy, uniformity, variance.
4. The second-order gray matrix features include: Gray-Level Co-occurrence Matrix (GLCM) features, Gray-Level Size Zone Matrix (GLSZM) features, Gray-Level Run-Length Matrix (GLRLM) features, Neighbouring Gray-Tone Difference Matrix (NGTDM) features, and Gray-Level Dependence Matrix (GLDM) features.
The gray-level co-occurrence matrix features include: autocorrelation, cluster prominence, cluster shade, cluster tendency, contrast, correlation, difference average, difference entropy, difference variance, inverse difference, normalized inverse difference, inverse difference moment, normalized inverse difference moment, informational measures of correlation, inverse variance, joint average, joint energy, joint entropy, maximal correlation coefficient, maximum probability, sum average, sum entropy, sum variance.
The gray area size matrix features include: gray non-uniformity, normalized gray non-uniformity, gray variance, high gray area emphasis, large area high gray level emphasis, large area low gray level emphasis, low gray area emphasis, area size non-uniformity, normalized area size non-uniformity, small area emphasis, small area high gray level emphasis, small area low gray level emphasis, area entropy, area percentage, area variance.
The gray-level run-length matrix features include: gray-level non-uniformity, normalized gray-level non-uniformity, gray-level variance, high gray-level run emphasis, long run high gray-level emphasis, long run low gray-level emphasis, short run high gray-level emphasis, short run low gray-level emphasis.
The neighbouring gray-tone difference matrix features include: busyness, coarseness, complexity, contrast, strength.
The gray scale dependency matrix features include: dependent entropy, dependent non-uniformity, normalized dependent non-uniformity, dependent variance, gray non-uniformity, gray variance, high gray emphasis, large dependent high gray emphasis, large dependent low gray emphasis, small dependent high gray emphasis, small dependent low gray emphasis.
In a specific implementation, radiomic features including texture and morphological features can be calculated; for example, 837 texture features are extracted to analyze the background gray texture and pattern information of the region-of-interest image. 18 first-order histogram statistical features and 75 second-order gray matrix features (comprising the gray-level co-occurrence matrix, gray-level size zone matrix, gray-level run-length matrix, neighbouring gray-tone difference matrix and gray-level dependence matrix) are calculated, and the image can additionally be transformed in three ways on the basis of the original image (or the preprocessed image, if preprocessing was applied) to enrich the texture information of the region of interest.
The transformation schemes employed include, but are not limited to: Laplacian of Gaussian (LoG) filtering (σ = 1, 2, 3), the four components of a wavelet decomposition, and Local Binary Patterns (LBP). Meanwhile, to avoid neglecting information about the stained regions, 9 morphological features are extracted by computing shape information such as the circularity and area of each segmented stained region; the features finally extracted are shown in Table 1.
TABLE 1
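Table 1 itself did not survive extraction. As an illustration of a few of the first-order histogram statistics listed in step 3, the sketch below computes a subset with NumPy; the 32-bin histogram used for entropy and uniformity is an assumption, since the patent does not state a bin count.

```python
import numpy as np

def first_order_features(gray):
    """A handful of the first-order histogram statistics listed above,
    computed on the gray values inside the region of interest."""
    x = np.asarray(gray, dtype=float).ravel()
    p10, p90 = np.percentile(x, [10, 90])
    hist, _ = np.histogram(x, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]                              # drop empty bins for the log
    return {
        "p10": float(p10), "p90": float(p90),
        "energy": float(np.sum(x**2)),
        "entropy": float(-np.sum(p * np.log2(p))),
        "iqr": float(np.percentile(x, 75) - np.percentile(x, 25)),
        "mean_abs_dev": float(np.mean(np.abs(x - x.mean()))),
        "rms": float(np.sqrt(np.mean(x**2))),
        "uniformity": float(np.sum(p**2)),
        "variance": float(x.var()),
    }
```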
S105, identifying and grading the corneal fluorescein staining based on the topological features, morphological features, first-order histogram statistical features and second-order gray matrix features.
In this step:
1. Insignificant features among the topological features, morphological features, first-order histogram statistical features and second-order gray matrix features are removed using one-way analysis of variance (ANOVA) and a Pearson-redundancy-based filter (PRBF), yielding the final identification grading features.
To reduce the number of features and avoid over-fitting the model without degrading performance, one-way analysis of variance and a Pearson-redundancy-based filter are used to remove insignificant features among the topological features, morphological features, first-order histogram statistical features and second-order gray matrix features, yielding preliminary identification grading features.
For example, one-way analysis of variance and a Pearson-redundancy-based filter remove redundant items from the feature vector, and a backward feature selection method based on a linear regression model removes insignificant features, yielding the preliminary identification grading features.
Selection of the important features is then completed with a decision-tree-based method: the importance of each preliminary grading feature is obtained by measuring the impurity reduction at each node. The higher the importance, the better the feature separates the model's categories. Several candidate signatures are constructed from different combinations of importance ranking and feature category, and the final features, which consist mainly of topological features, are selected according to importance.
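The ANOVA-plus-Pearson-redundancy filtering described above can be sketched as follows. The p-value and correlation cutoffs are assumptions for illustration; the patent does not state its thresholds.

```python
import numpy as np
from scipy.stats import f_oneway

def select_features(X, y, p_max=0.05, r_max=0.9):
    """One-way ANOVA keeps features whose class means differ (p < p_max);
    a Pearson filter then drops each feature that correlates above r_max
    with a feature already kept."""
    classes = np.unique(y)
    keep = [j for j in range(X.shape[1])
            if f_oneway(*(X[y == c, j] for c in classes)).pvalue < p_max]
    selected = []
    for j in keep:
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < r_max
               for k in selected):
            selected.append(j)
    return selected
```

A decision-tree importance ranking (e.g. from a tree ensemble's impurity decrease) would then be applied to the surviving columns, as described above.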
2. Identification grading of corneal fluorescein staining is performed based on a support vector machine method and the identification grading features.
For example, the cornea region of interest is scored on the Ocular Staining Score (OSS) scale of the Sjögren's International Collaborative Clinical Alliance (SICCA) based on the support vector machine method and the identification grading features, thereby performing identification grading of corneal fluorescein staining.
Wherein the OSS score is calculated as follows:
0 points: no staining.
1 point: 1 to 5 staining dots.
2 points: 6 to 30 staining dots.
3 points: more than 30 staining dots; in addition, one extra point is added for each of the following that is present: (1) one or more confluent staining patches; (2) one or more filaments; (3) staining of the central cornea.
The highest score for each cornea region of interest is 5 points. The final OSS score is taken as the final grading result.
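The scoring rule above can be sketched as a small function. Whether the three extra points apply only at grade 3 is ambiguous in the passage; as an assumption, this sketch adds them whenever present and caps the total at 5 as stated.

```python
def oss_score(n_dots, confluent_patches=False, filaments=False,
              central_staining=False):
    """SICCA Ocular Staining Score for one cornea, per the passage above:
    0/1/2/3 points by dot count, plus one extra point for each of confluent
    staining patches, filaments, and central staining, capped at 5."""
    if n_dots == 0:
        score = 0
    elif n_dots <= 5:
        score = 1
    elif n_dots <= 30:
        score = 2
    else:
        score = 3
    score += sum((confluent_patches, filaments, central_staining))
    return min(score, 5)
```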
The corneal fluorescein staining image identification and grading method provided by this embodiment obtains a corneal fluorescein staining image; identifies the cornea region of interest in the image (a sector region with a 90-degree included angle whose apex is the pupil center and whose radius is the distance from the corneal center point to the lower corneal edge); segments the stained regions within the cornea region of interest; extracts topological features of the segmented region of interest together with its morphological features, first-order histogram statistical features and second-order gray matrix features; and identifies and grades the corneal fluorescein staining based on these features.
Because the method grades corneal fluorescein staining based on the topological, morphological, first-order histogram statistical and second-order gray matrix features together, identification accuracy is improved.
It should be understood that the invention is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and shown, and those skilled in the art can make various changes, modifications and additions, or change the order between steps, after appreciating the spirit of the present invention.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
Finally, it should be noted that: the embodiments described above are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced with equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (5)

1. A method for identifying and grading a corneal fluorescein staining image, the method comprising:
obtaining a corneal fluorescein staining image;
identifying a region of interest of the cornea in the image; the cornea region of interest is a sector region, the included angle is 90 degrees, the central point of the cornea region of interest is the central point of the pupil, and the radius of the cornea region of interest is the distance between the central point of the cornea and the lower edge of the cornea;
dividing the cornea region of interest into staining regions;
extracting topological features of the segmented cornea region of interest, and extracting morphological features, first-order histogram statistical features and second-order gray matrix features of the cornea region of interest;
identifying and grading corneal fluorescein staining based on topological features, morphological features, first-order histogram statistical features and second-order gray matrix features;
the extracting topological features of the segmented cornea region of interest comprises:
taking the centroid of each segmented cornea region of interest as a vertex;
expanding each segmented cornea region of interest over a plurality of dimensions;
if the expanded areas overlap, connecting the vertexes of the overlapped areas to form a topological graph of space connectivity and distribution;
Extracting topological features based on the topological graph;
the topological feature comprises: sub-graph number, average vertex degree, maximum vertex degree, average vertex eccentricity, diameter, average clustering coefficient, giant connected component ratio, isolated point percentage;
the morphological features include: average area, total area, average perimeter to area ratio, total perimeter to area ratio, average circularity, average perimeter, total perimeter, minimum external rectangular aspect ratio, number;
the first-order histogram statistical feature includes: tenth percentile, ninetieth percentile, energy, entropy, interquartile range, kurtosis, maximum, mean absolute deviation, mean, median, minimum, range, robust mean absolute deviation, root mean square, skewness, total energy, uniformity, variance;
the second-order gray matrix feature includes: gray-level co-occurrence matrix features, gray-level size zone matrix features, gray-level run-length matrix features, neighbouring gray-tone difference matrix features and gray-level dependence matrix features;
the gray-level co-occurrence matrix features include: autocorrelation, cluster prominence, cluster shade, cluster tendency, contrast, correlation, difference average, difference entropy, difference variance, inverse difference, normalized inverse difference, inverse difference moment, normalized inverse difference moment, informational measures of correlation, inverse variance, joint average, joint energy, joint entropy, maximal correlation coefficient, maximum probability, sum average, sum entropy, sum variance;
The gray area size matrix features include: gray non-uniformity, normalized gray non-uniformity, gray variance, high gray area emphasis, large area high gray level emphasis, large area low gray level emphasis, low gray area emphasis, area size non-uniformity, normalized area size non-uniformity, small area emphasis, small area high gray level emphasis, small area low gray level emphasis, area entropy, area percentage, area variance;
the gray-level run-length matrix features include: gray-level non-uniformity, normalized gray-level non-uniformity, gray-level variance, high gray-level run emphasis, long run high gray-level emphasis, long run low gray-level emphasis, short run high gray-level emphasis, short run low gray-level emphasis;
the neighbouring gray-tone difference matrix features include: busyness, coarseness, complexity, contrast, strength;
the gray scale dependency matrix feature includes: dependent entropy, dependent non-uniformity, normalized dependent non-uniformity, dependent variance, gray non-uniformity, gray variance, high gray emphasis, large dependent high gray emphasis, large dependent low gray emphasis, small dependent high gray emphasis, small dependent low gray emphasis;
The identification grading of corneal fluorescein staining based on topological features, morphological features, first-order histogram statistical features and second-order gray matrix features comprises the following steps:
removing insignificant features in the topological features, morphological features, first-order histogram statistical features and second-order gray matrix features by adopting single-factor analysis of variance and a filter based on Pearson redundancy to obtain identification grading features;
and carrying out the identification grading of the corneal fluorescein staining based on a method of a support vector machine and the identification grading characteristic.
2. The method of claim 1, wherein the identifying the region of interest of the cornea in the image comprises:
binarizing and segmenting the scleral region through an Otsu threshold; the scleral region is positioned outside the cornea, and the brightness of the scleral region is greater than the brightness of the cornea;
determining the distance of the corneal center point from the lower corneal edge based on the maximum diameter of the inner edge of the processed scleral region;
determining a central region of the cornea based on the inner edge of the treated scleral region;
forming a plurality of groups of center-radius combinations by taking each point in the central region of the cornea as a center and taking x times the distance between the corneal center point and the lower corneal edge as a radius, wherein x is a fraction less than one;
detecting the gray-value intensity of the circle formed by each group of center-radius combinations by using an integro-differential operator;
and determining the area with the highest gray intensity as a pupil, and determining the central point of the pupil as the central point of the cornea region of interest.
3. The method of claim 1, wherein the segmenting the region of interest of the cornea comprises:
extracting green channel data in the cornea region of interest;
performing an on operation on the green channel data to eliminate small bright details;
taking the result of the opening operation as a template, and carrying out gray reconstruction on the cornea region of interest;
and performing internal staining region segmentation based on the difference value between the cornea region of interest and the reconstructed cornea region.
4. The method of claim 2, wherein the determination of the scleral region is:
acquiring RGB values of each pixel point in the cornea region;
determining the brightness Bi = max{Ri, Gi, bi}, wherein i is a pixel index in the cornea region, Bi is the brightness of pixel i, Ri is the R value of pixel i, Gi is the G value of pixel i, and bi is the B value of pixel i;
determining standard deviation sigma 0 of brightness of each pixel point of the boundary of the region to which the cornea belongs;
And determining the maximum value of the brightness of all the pixel points as the brightness of the cornea, and determining a sclera area according to the brightness of the cornea and sigma 0.
5. The method of claim 4, wherein said determining the scleral region from the brightness and σ0 of the cornea comprises:
initializing the attribute of all pixel points outside the area of the cornea as undetermined;
according to the attribute, determining a continuous high-brightness area of each pixel point of the boundary of the area to which the cornea belongs;
determining a region covered by a continuous high-brightness region of all pixel points of the boundary of the region to which the cornea belongs as a sclera region;
wherein, the continuous high brightness area of any pixel point j of the boundary of the area of the cornea is determined by the following steps:
step 1, taking any pixel point j as a current processing point;
step 2, determining whether adjacent pixel points exist in the current processing point; the adjacent pixel points are one or more of left, right, upper and lower adjacent to the current processing point, and the attribute of the adjacent pixel points is undetermined;
step 3, if no such adjacent pixel points exist, determining the region composed of the pixel points whose attribute has been determined as belonging to pixel point j as the continuous high-brightness region of pixel point j; if such adjacent pixel points exist, executing steps 4 to 6;
Step 4, determining RGB values of adjacent pixel points;
step 5, determining the brightness b=max { R, G, B } of the adjacent pixel points;
step 6, if the brightness of all the adjacent pixel points is smaller than the cornea brightness, stopping the continuous high-brightness region determination for pixel point j, and determining the region composed of the pixel points whose attribute has been determined as belonging to pixel point j as the continuous high-brightness region of pixel point j;
if a suspected high-brightness pixel point exists, updating the attribute of the suspected high-brightness pixel point to suspected for pixel point j, taking the suspected high-brightness pixel point as the current processing point, and repeatedly executing step 2 and the subsequent steps; the suspected high-brightness pixel point is an adjacent pixel point whose brightness is greater than or equal to the brightness of the cornea but less than a brightness threshold; the brightness threshold is the brightness of the cornea x (1+σ0);
if a highlight pixel point exists, updating the attribute of the highlight pixel point to determined as belonging to pixel point j, updating the attribute of any pixel point marked suspected for pixel point j to determined as belonging to pixel point j, taking the highlight pixel point as the current processing point, and repeatedly executing step 2 and the subsequent steps; the highlight pixel point is an adjacent pixel point whose brightness is greater than or equal to the brightness threshold.
CN202310979173.1A 2023-08-04 2023-08-04 Cornea fluorescein staining image identification grading method Active CN116681707B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310979173.1A CN116681707B (en) 2023-08-04 2023-08-04 Cornea fluorescein staining image identification grading method


Publications (2)

Publication Number Publication Date
CN116681707A CN116681707A (en) 2023-09-01
CN116681707B (en) 2023-10-20

Family

ID=87784123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310979173.1A Active CN116681707B (en) 2023-08-04 2023-08-04 Cornea fluorescein staining image identification grading method

Country Status (1)

Country Link
CN (1) CN116681707B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101359365A (en) * 2008-08-07 2009-02-04 电子科技大学中山学院 Iris positioning method based on Maximum between-Cluster Variance and gray scale information
CN106535740A (en) * 2014-05-02 2017-03-22 马萨诸塞眼科耳科诊所 Grading corneal fluorescein staining
CN107122597A (en) * 2017-04-12 2017-09-01 广东顺德中山大学卡内基梅隆大学国际联合研究院 A kind of corneal damage intelligent diagnosis system
CN109410236A (en) * 2018-06-12 2019-03-01 佛山市顺德区中山大学研究院 The method and system that fluorescent staining image reflective spot is identified and redefined
CN109886931A (en) * 2019-01-25 2019-06-14 中国计量大学 Gear ring of wheel speed sensor detection method of surface flaw based on BP neural network
CN111178449A (en) * 2019-12-31 2020-05-19 浙江大学 Liver cancer image classification method and device combining computer vision characteristics and imaging omics characteristics
CN111951252A (en) * 2020-08-17 2020-11-17 中国科学院苏州生物医学工程技术研究所 Multi-sequence image processing method, electronic device and storage medium
CN113570556A (en) * 2021-07-08 2021-10-29 北京大学第三医院(北京大学第三临床医学院) Method and device for grading eye dyeing image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2478329B (en) * 2010-03-03 2015-03-04 Samsung Electronics Co Ltd Medical image processing


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Methods and applications of medical image texture analysis; Zhu Biyun; Chen Hui; China Medical Equipment (08); full text *
Semantics-fused artery-vein classification method for fundus images; Gao Yingqi; Guo Song; Li Ning; Wang Kai; Kang Hong; Li Tao; Journal of Image and Graphics (10); full text *


Similar Documents

Publication Publication Date Title
CN115082683B (en) Injection molding defect detection method based on image processing
US10565479B1 (en) Identifying and excluding blurred areas of images of stained tissue to improve cancer scoring
CN107833220B (en) Fabric defect detection method based on deep convolutional neural network and visual saliency
Li et al. Integrating holistic and local deep features for glaucoma classification
WO2019062092A1 (en) Superpixel- and multivariate color space-based body outline extraction method
Jaafar et al. Detection of exudates in retinal images using a pure splitting technique
CN108073918B (en) Method for extracting blood vessel arteriovenous cross compression characteristics of fundus retina
CN111507932B (en) High-specificity diabetic retinopathy characteristic detection method and storage device
Wuest et al. Region based segmentation of QuickBird multispectral imagery through band ratios and fuzzy comparison
CN112185523B (en) Diabetic retinopathy classification method based on multi-scale convolutional neural network
CN103295013A (en) Pared area based single-image shadow detection method
Salazar-Gonzalez et al. Optic disc segmentation by incorporating blood vessel compensation
Srivastava et al. Automatic nuclear cataract grading using image gradients
Pal et al. Mathematical morphology aided optic disk segmentation from retinal images
Hassan et al. Skin lesion segmentation using gray level co-occurance matrix
CN107194940A (en) A kind of coloured image contour extraction method based on color space and line segment
Vajravelu et al. Machine learning techniques to detect bleeding frame and area in wireless capsule endoscopy video
CN116681707B (en) Cornea fluorescein staining image identification grading method
Nija et al. An automated method of optic disc detection from retinal fundus images
Elbalaoui et al. Exudates detection in fundus images using mean-shift segmentation and adaptive thresholding
Rathore et al. CBISC: a novel approach for colon biopsy image segmentation and classification
CN109241865B (en) Vehicle detection segmentation algorithm under weak contrast traffic scene
Soares et al. Exudates dynamic detection in retinal fundus images based on the noise map distribution
Barhoumi et al. Pigment network detection in dermatoscopic images for melanoma diagnosis
Abdelsamea An enhancement neighborhood connected segmentation for 2D-Cellular Image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant