CN117058393A - Super-pixel three-evidence DPC method for fundus hard exudation image segmentation - Google Patents


Info

Publication number
CN117058393A
Authority
CN
China
Prior art keywords
sample
image
pixel
super
evidence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311108211.2A
Other languages
Chinese (zh)
Inventor
鞠恒荣
陆杨
杨光
丁卫平
黄嘉爽
楚永贺
曹金鑫
程纯
姜舒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong University
Original Assignee
Nantong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong University filed Critical Nantong University
Priority to CN202311108211.2A priority Critical patent/CN117058393A/en
Publication of CN117058393A publication Critical patent/CN117058393A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V10/267 - Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V10/70 - Arrangements using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; proximity measures in feature spaces
    • G06V10/761 - Proximity, similarity or dissimilarity measures
    • G06V10/762 - Arrangements using clustering, e.g. of similar faces in social networks
    • G06V10/763 - Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30041 - Eye; retina; ophthalmic
    • G06T2207/30101 - Blood vessel; artery; vein; vascular
    • Y02A90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention provides a super-pixel three-branch evidence DPC method for fundus hard exudation image segmentation, belonging to the technical field of image processing and analysis. The method addresses two technical problems of clustering-based medical image segmentation: parameters that are difficult to determine and unclear division of edge regions. The technical scheme comprises the following steps: S10, manually acquiring the lesion area of a fundus hard exudation image; S20, preprocessing the fundus hard exudation image to obtain the CIELab space of the image; S30, performing SLIC super-pixel processing on the acquired CIELab space; S40, dividing the image segmentation into two stages based on three-way clustering theory; S50, on the basis of the lesion image information returned by the first stage, segmenting the edge region of the lesion image a second time. The beneficial effects of the invention are as follows: by introducing a super-pixel algorithm the invention improves operating efficiency, and it provides an important medical imaging basis for the clinical diagnosis of diabetic retinal hard exudation lesions and for the discovery and treatment of patients.

Description

Super-pixel three-evidence DPC method for fundus hard exudation image segmentation
Technical Field
The invention relates to the technical field of image processing analysis, in particular to a super-pixel three-evidence DPC method for fundus hard exudation image segmentation.
Background
Diabetic retinopathy (DR) is one of the most common microvascular complications of diabetes: chronic, progressive diabetes causes retinal microvascular leakage and blockage, producing a range of fundus pathologies such as microaneurysms, hard exudation, cotton-wool spots, neovascularization, vitreous proliferation, macular edema and even retinal detachment. DR, one of the most common fundus complications in diabetics, has a prevalence of 24.7%-37.5% among adult diabetics in China; by estimate, China has 32 to 48 million DR patients. At present, DR screening in China faces serious challenges: first, about 87% of diabetic patients are treated in primary medical institutions, where ophthalmic resources are extremely limited; second, more than 50% of diabetic patients are never informed of the need for regular fundus examination; finally, about 70% of diabetic patients have never received a standardized fundus examination.
Currently, for DR clustering-based fundus image segmentation, Gao Junshan et al proposed a fundus image segmentation model based on the FCM (fuzzy C-means) algorithm in "A diabetic retinal image optic disc segmentation method based on FCM", using FCM to address the problems of heavy computation, long running time and low precision. However, because the clustering segmentation effect of FCM depends on the choice of initial values, segmentation is often poor on complex pathological images; FCM is also limited in that it segments non-spherical clusters poorly. In subsequent studies, scholars proposed replacing FCM with density-based clustering algorithms. Ling Chaodong et al used DBSCAN (a density-based algorithm) to segment fundus images in "A hard exudate detection method for fundus images combining SLIC super-pixels and DBSCAN clustering", improving segmentation accuracy and handling data sets of arbitrary shape. However, DBSCAN requires two parameters to be chosen simultaneously, which makes it difficult to tune. Moreover, current clustering segmentation methods are single-pass, segmenting the whole image at once, and they often handle the edge regions of pathological images poorly.
Therefore, how to select parameters adaptively and segment image edges precisely is the subject of the present invention.
Disclosure of Invention
The invention aims to provide a super-pixel three-branch evidence DPC method for fundus hard exudation image segmentation. It improves operating efficiency by introducing a super-pixel algorithm and adopts a two-stage segmentation strategy: the first stage uses a double-layer nearest-neighbor strategy to segment the lesion core region, and the second stage uses D-S evidence theory to segment the lesion image edge region a second time. This improves the accuracy of image segmentation and provides an important medical imaging basis for the clinical diagnosis of diabetic retinal hard exudation lesions and for the discovery and treatment of patients.
To achieve the above aim, the invention adopts the following technical scheme:
A super-pixel three-evidence DPC method for fundus hard exudation image segmentation, comprising the steps of:
S10, manually acquiring the lesion area of a diabetic fundus image; a Laplacian filter from the OpenCV computer vision library is used to convolve the image and enhance edges and detail information. The enhanced fundus lesion image is processed into the RGB space N_RGB = {x_1, x_2, ..., x_n}, the set of lesion-image pixel information, in which the i-th sample is x_i = [R_i, G_i, B_i, X_i, Y_i]; R_i, G_i and B_i are the red, green and blue brightness values of sample i, and X_i, Y_i are the pixel coordinates of sample i;
S20, preprocessing the diabetic fundus lesion image; the RGB space of the fundus lesion image is converted to the CIELab color space using the cvtColor function of the OpenCV library, giving the CIELab space of the lesion image, N_Lab = {x_1, x_2, ..., x_n}, in which the i-th sample is x_i = [L_i, a_i, b_i, X_i, Y_i]; L_i, a_i and b_i are, respectively, the brightness of sample i, its green-to-red component and its blue-to-yellow component;
S30, performing SLIC super-pixel processing on the acquired CIELab space, and taking the super-pixel points as the samples of the three-branch evidence DPC;
S40, dividing the image segmentation into two stages based on three-way clustering theory. In the first stage, the super-pixel samples are processed with a double-layer nearest-neighbor strategy to obtain the segmentation result of the fundus lesion core region;
S50, on the basis of the lesion image information returned by the first stage, introducing D-S evidence theory to segment the edge region of the lesion image a second time.
As a super-pixel three-branch evidence DPC method for fundus hard exudation image segmentation provided by the present invention, the step S30 includes the steps of:
S31, super-pixel processing is performed with the SLIC algorithm, which reduces the complexity of image processing. According to a preset number of super pixels H, cluster centers C_k = [L_k, a_k, b_k, X_k, Y_k] are distributed uniformly over the lesion image, each super pixel having a size of |N_Lab|/H, where |.| is the cardinality of a set. The distance S between adjacent cluster centers is calculated according to formula (1):

S = sqrt(|N_Lab| / H)   (1)
S32, the cluster center is re-selected within an n x n neighborhood of the original cluster center by moving it to the position with the smallest gradient.
S33, a label is assigned to each pixel point in the neighborhood around each lesion-image cluster center.
S34, calculating the distance measurement from the point to the clustering center of the lesion image, wherein the calculation formula is as follows:
wherein d c Representing the color distance, L k -L i Represents the relative distance between pixel points k and i on the L channel, a k -a i Representing the relative distance between pixel points k and i on the a-channel, b k -b i Represents the relative distance between pixel points k and i on the b channel, d s Representing the spatial distance X k -X i And Y k -Y i Representing the relative distance of pixel points k and i in the x-y coordinate system, S is the maximum spatial distance within the class, and m is the maximum color distance.
S35, iteration continues until the cluster center of every pixel point no longer changes. The super-pixel-processed fundus lesion image is output, and the super-pixel points are taken as the samples of the super-pixel three-branch evidence DPC method.
As a method for super-pixel three-branch evidence DPC for fundus hard exudation image segmentation provided by the present invention, the step S40 includes the steps of:
S41, the local density ρ of the sample points is calculated as in formula (5):

ρ_i = Σ_{j∈Nei(i)} exp(-d_ij^t)   (5)

where d_ij is the Euclidean distance between sample points i and j in CIELab space, Nei(i) is the set of k nearest neighbors of sample point i, and t is a positive integer greater than zero that adjusts the degree of influence of the Euclidean distance d_ij on the local density ρ_i.
S42, in order to determine the number k of neighbors, from the particle calculation perspective, converting the local density of the sample into a reasonable particle size, and searching the optimal k value by adopting a reasonable particle size principle. And (3) carrying out iteration on all sample points to obtain the optimal neighbor k value by constructing two standards of coverage rate and specificity. The coverage cov is calculated as shown in formula (6):
where d is the average distance between the sample point i and its neighbors, N Ω Is the evolution of the total sample number. d, d ij The absolute value between d reflects the degree of fluctuation in the similarity between sample i and its neighbors.
The specific sp calculation mode is shown in formula (7):
where Nei (i) is k neighbors of sample point i, N Ω Is the evolution of the total sample number.
The optimization function of reasonable granularity is constructed as in formulas (8) and (9):

Q_i = cov_i × sp_i   (8)
Q_total = Q_1 + Q_2 + ... + Q_n   (9)

where Q_total is the sum of Q_i over all sample points; each sample i has a Q_i, and the process of obtaining the optimal k value is the process of maximizing Q_total.
S43, calculating delta of the sample points, wherein delta represents the minimum distance (center offset distance) from the points with larger density as shown in a formula (10):
for sample i with the greatest local density, its delta i =max j d ij .
S44, selecting a clustering sample center, and calculating local densities ρ and δ through formulas (6) and (10) in order to obtain a correct lesion image segmentation area. The cluster center is a point where ρ and δ are both large. And calculating gamma of the product of rho and delta, and drawing a cluster center decision diagram through gamma to select a cluster center. The calculation of γ is shown in formula (11):
γ i =ρ i ×δ i (11)
S45, the result obtained in S44 is taken as the cluster centers, and the first-stage segmentation of the pathological image is performed: the sample points are assigned to the core region of the lesion with a double-layer nearest-neighbor strategy, giving the first clustering result C_one. The division of the core-region sample points follows formulas (12) and (13):

TLN(i, j) = 1 if |Nei(i) ∩ Nei(j)| >= k/2, and 0 otherwise   (12)
POS(C_i^j) = True iff j ∈ Nei(i) and TLN(i, j) = 1   (13)

where k is the optimal number of neighbors already obtained, and sample j is a neighbor of sample i, j ∈ Nei(i). TLN comprehensively considers the local structure information between samples i and j by computing the intersection of Nei(i) and Nei(j): when the intersection contains at least k/2 points, TLN is 1, otherwise 0. When POS(C_i^j) = True, sample point j is assigned to the lesion core region to which sample point i belongs.
As a super-pixel three-branch evidence DPC method for fundus hard exudation image segmentation provided by the present invention, the step S50 includes the steps of:
S51, the clustering result C_one obtained in step S40 is used for the second-stage segmentation of the border region of the pathological image. For any j ∈ Nei(i), D-S evidence theory treats j ∈ C_h as one piece of evidence for the confidence that sample i belongs to cluster C_h. A new D-S evidence function is constructed to fuse the known clustering result C_one, and the confidence that each sample point belongs to the different clusters is computed from the neighborhood information of the sample point, as in formula (14), where k is the optimal number of neighbors already obtained; m_{i,j}(C_h) indicates the degree to which neighbor j supports sample i belonging to cluster C_h; m_{i,j}(Θ) represents the uncertain part, i.e. the uncertainty that sample i belongs to any particular cluster; the factor e^{-d_ij} expresses the difference in evidence information provided by neighbors at different distances (closer neighbors provide more evidence information, farther neighbors less); and |Nei(j) ∩ C_h| is the number of intersections between the neighbors of sample j and cluster C_h, which quantifies neighbor j's confidence that sample i belongs to cluster C_h.
S52, carrying out quality fusion on the sample points according to the D-S quality function constructed in the step S51. Consider the case of only two neighbors whose quality fusion calculations are shown in equations (15) and (16):
S53, considering the mass functions of all neighbors in Nei(i), the combined mass function is calculated as in formulas (17), (18) and (19):
S54, the mass functions of all the clusters are combined; the global mass function is calculated as in formulas (20), (21) and (22), where K is a normalization constant describing the degree of conflict between different pieces of evidence: the larger K is, the greater the conflict between them.
S55, calculating the probability that the sample points belong to different class clusters according to the global quality fusion rule of the sample points obtained in the step S54, and distributing the probability to the class cluster with the highest probability. The allocation formula is shown as formula (23):
wherein,representing the assignment of sample i to class cluster C h The border area of the pathological image is located. max {.cndot }, represents m where the evidence information is selected to be maximum i (C h )。
S56, according to S55 the probability that each sample point belongs to the different clusters is calculated and the point is assigned to the cluster with the highest probability, giving the second-stage clustering result C_two. Fusing the first-stage clustering result C_one with the second-stage clustering result C_two yields the final lesion-image segmentation map.
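The fusion in steps S52-S54 is the standard Dempster combination rule. The sketch below is illustrative only: the cluster names and mass values are assumptions, and the patent's distance-based mass function of formula (14) is not reproduced here. It combines two pieces of evidence defined over singleton clusters plus the uncertain set Θ, with K the conflict term described in S54.

```python
def dempster_combine(m1, m2, frame):
    """Dempster's rule for two mass functions whose focal elements are
    singleton clusters plus the whole frame 'Theta' (uncertainty)."""
    clusters = [c for c in frame if c != 'Theta']
    # K: mass the two pieces of evidence assign to conflicting clusters
    K = sum(m1[a] * m2[b] for a in clusters for b in clusters if a != b)
    combined = {}
    for c in clusters:
        # agreement on c, plus one source saying c while the other is unsure
        combined[c] = (m1[c] * m2[c]
                       + m1[c] * m2['Theta']
                       + m1['Theta'] * m2[c]) / (1.0 - K)
    combined['Theta'] = m1['Theta'] * m2['Theta'] / (1.0 - K)
    return combined

# two neighbours (hypothetical masses) both lean towards cluster C1
m1 = {'C1': 0.6, 'C2': 0.2, 'Theta': 0.2}
m2 = {'C1': 0.5, 'C2': 0.3, 'Theta': 0.2}
fused = dempster_combine(m1, m2, ['C1', 'C2', 'Theta'])
```

Agreeing evidence reinforces itself: here the fused mass of C1 (about 0.72) exceeds either input mass, while the residual uncertainty Θ shrinks.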
Compared with the prior art, the invention has the beneficial effects that:
(1) The invention provides a super-pixel three-branch evidence DPC method for fundus hard exudation image segmentation that converts the problem of the hard-to-determine cutoff distance in the DPC algorithm into the problem of selecting the number of neighbors; by introducing the principle of reasonable granularity it constructs the two criteria of coverage and specificity to find the optimal neighbors adaptively, improving the speed of the clustering optimization process.
(2) The SLIC super-pixel segmentation algorithm is adopted to preprocess the images in the CIELab color space, taking both color distance and spatial distance into account; this accelerates clustering while retaining the important features of the images, further improving the segmentation efficiency of fundus hard exudation images.
(3) Based on three-way clustering, the whole fundus-image segmentation process is divided into two stages. In the first stage, a double-layer nearest-neighbor strategy divides the core region of the lesion focus; in the second stage, evidence theory is introduced and, by considering the information of the surrounding neighbors together with the first-stage segmentation result, multiple pieces of uncertain information are fused to form the edge region of the lesion focus. This two-stage image segmentation strategy improves the precision of fundus hard exudation image segmentation and is of great significance for the diagnosis of diabetic fundus lesions.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
Fig. 1 is an overall flowchart of the super-pixel three-branch evidence DPC method for fundus hard exudation image segmentation of the present invention.
Fig. 2 is a block diagram of the overall data-processing framework of the super-pixel three-branch evidence DPC method for fundus hard exudation image segmentation of the present invention.
Fig. 3 is a flowchart of the super-pixel three-branch evidence DPC algorithm of the present invention for fundus hard exudation image segmentation.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. Of course, the specific embodiments described herein are for purposes of illustration only and are not intended to limit the invention.
Example 1
Referring to figs. 1-3, the invention provides a super-pixel three-branch evidence DPC method for segmenting diabetic fundus images, comprising the following steps:
S10, manually acquiring the lesion area of a diabetic fundus image; a Laplacian filter from the OpenCV computer vision library is used to convolve the image and enhance edges and detail information. The enhanced fundus lesion image is processed into the RGB space N_RGB = {x_1, x_2, ..., x_n}, the set of lesion-image pixel information, in which the i-th sample is x_i = [R_i, G_i, B_i, X_i, Y_i]; R_i, G_i and B_i are the red, green and blue brightness values of sample i, and X_i, Y_i are the pixel coordinates of sample i;
S20, preprocessing the diabetic fundus lesion image; the RGB space of the fundus lesion image is converted to the CIELab color space using the cvtColor function of the OpenCV library, giving the CIELab space of the lesion image, N_Lab = {x_1, x_2, ..., x_n}, in which the i-th sample is x_i = [L_i, a_i, b_i, X_i, Y_i]; L_i, a_i and b_i are, respectively, the brightness of sample i, its green-to-red component and its blue-to-yellow component;
S30, performing SLIC super-pixel processing on the acquired CIELab space, and taking the super-pixel points as the samples of the three-branch evidence DPC;
S40, dividing the image segmentation into two stages based on three-way clustering theory. In the first stage, the super-pixel samples are processed with a double-layer nearest-neighbor strategy to obtain the segmentation result of the fundus lesion core region;
S50, on the basis of the lesion image information returned by the first stage, introducing D-S evidence theory to segment the edge region of the lesion image a second time.
Step S10 includes the steps of:
S11, an image of the lesion area of the fundus image is obtained manually; taking the fundus images in the Messidor data set as an example, the data set is read to obtain the fundus images;
S12, a Laplacian filter from the OpenCV computer vision library is used to convolve the image and enhance edges and detail information. The enhanced fundus lesion image is processed into the RGB space N_RGB = {x_1, x_2, ..., x_n}, the set of lesion-image pixel information, in which the i-th sample is x_i = [R_i, G_i, B_i, X_i, Y_i]; R_i, G_i and B_i are the red, green and blue brightness values of sample i, and X_i, Y_i are the pixel coordinates of sample i.
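The enhancement in S12 can be sketched without OpenCV. The numpy code below is an illustrative sketch, not the patent's exact pipeline: it applies the 4-neighbour Laplacian kernel that cv2.Laplacian uses by default and subtracts the response from the image, which sharpens edges and fine detail.

```python
import numpy as np

def laplacian_enhance(gray):
    """Sharpen a grayscale image by subtracting its Laplacian response."""
    kernel = np.array([[0,  1, 0],
                       [1, -4, 1],
                       [0,  1, 0]], dtype=np.float64)
    padded = np.pad(gray.astype(np.float64), 1, mode="edge")
    h, w = gray.shape
    lap = np.zeros((h, w), dtype=np.float64)
    for dy in range(3):                      # explicit 3x3 convolution
        for dx in range(3):
            lap += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return np.clip(gray - lap, 0, 255)       # flat regions are unchanged
```

On a flat region the Laplacian is zero, so only edges and detail are amplified.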
Step S20 includes the steps of:
S21, the RGB space of the fundus lesion image is converted to the CIELab color space using the cvtColor function of the OpenCV library, giving the CIELab space N_Lab = {x_1, x_2, ..., x_n}, in which the i-th sample is x_i = [L_i, a_i, b_i, X_i, Y_i]; L_i, a_i and b_i are, respectively, the brightness of sample i, its green-to-red component and its blue-to-yellow component, as shown in Table 1 below:

N_Lab     L    a    b    X    Y
x_1       255  128  128  0    0
x_2       255  128  128  1    0
...
x_{n-1}   255  128  128  127  86
x_n       255  128  128  128  86
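For a self-contained view of what S21's cvtColor call computes, here is the standard sRGB to XYZ to CIELab conversion (D65 white point) in numpy; this is a sketch of the color-science definition rather than OpenCV's exact code. For 8-bit images OpenCV additionally rescales the result as L*255/100 and a+128, b+128, which is why the white pixels in Table 1 appear as L = 255, a = b = 128.

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert one sRGB pixel (components in 0..255) to CIELab, D65 white."""
    c = np.asarray(rgb, dtype=np.float64) / 255.0
    # undo the sRGB gamma to get linear RGB
    c = np.where(c > 0.04045, ((c + 0.055) / 1.055) ** 2.4, c / 12.92)
    # linear RGB -> CIE XYZ (sRGB primaries, D65 white point)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = M @ c
    xyz /= np.array([0.95047, 1.0, 1.08883])        # divide by white point
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[1] - 16.0
    a = 500.0 * (f[0] - f[1])
    b = 200.0 * (f[1] - f[2])
    return L, a, b
```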
step S30 includes the steps of:
S31, super-pixel processing is performed with the SLIC algorithm, which reduces the complexity of image processing. According to a preset number of super pixels H, cluster centers C_k = [L_k, a_k, b_k, X_k, Y_k] are distributed uniformly over the lesion image, each super pixel having a size of |N_Lab|/H, where |.| is the cardinality of a set. The distance S between adjacent cluster centers is calculated according to formula (1):

S = sqrt(|N_Lab| / H)   (1)
S32, the cluster center is re-selected within an n x n neighborhood of the original cluster center, where n takes the value 3, by moving it to the position with the smallest gradient.
S33, a label is assigned to each pixel point in the neighborhood around each lesion-image cluster center.
S34, calculating the distance measurement from the point to the clustering center of the lesion image, wherein the calculation formula is as follows:
wherein d c Representing the color distance, L k -L i Represents the relative distance between pixel points k and i on the L channel, a k -a i Representing the relative distance between pixel points k and i on the a-channel, b k -b i Represents the relative distance between pixel points k and i on the b channel, d s Representing the spatial distance X k -X i And Y k -Y i Representing the relative distance between the pixel points k and i in an x-y coordinate system, S is the maximum space distance in the class, m is the maximum color distance, and the value of m is 10.
S35, iteration continues until the cluster center of every pixel point no longer changes. The super-pixel-processed fundus lesion image is output, and the super-pixel points are taken as the samples of the super-pixel three-branch evidence DPC method.
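The distance measure of S34 can be sketched directly; the code below assumes the standard SLIC form for formulas (2)-(4), with the Lab color distance d_c and the pixel-coordinate distance d_s weighted by the maximum color distance m = 10 of this embodiment and the grid interval S (the value S = 20 here is only an example).

```python
import numpy as np

def slic_distance(p, c, m=10.0, S=20.0):
    """Combined SLIC distance between a pixel p and a cluster centre c,
    both given as [L, a, b, X, Y] vectors."""
    p, c = np.asarray(p, dtype=float), np.asarray(c, dtype=float)
    d_c = np.linalg.norm(p[:3] - c[:3])   # colour distance in (L, a, b)
    d_s = np.linalg.norm(p[3:] - c[3:])   # spatial distance in (X, Y)
    return np.sqrt((d_c / m) ** 2 + (d_s / S) ** 2)
```

Each pixel is assigned to the cluster centre that minimises this combined distance, which is what drives the iteration of S33-S35.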
Step S40 includes the steps of:
S41, the local density ρ of the sample points is calculated as in formula (5):

ρ_i = Σ_{j∈Nei(i)} exp(-d_ij^t)   (5)

where d_ij is the Euclidean distance between sample points i and j in CIELab space, Nei(i) is the set of k nearest neighbors of sample point i, and t is a positive integer greater than zero that adjusts the degree of influence of the Euclidean distance d_ij on the local density ρ_i. t takes the value 2.
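Step S41 can be sketched as follows; since formula (5) is not fully reproduced in this text, the code assumes a kNN exponential kernel consistent with the description (density grows as the k nearest Euclidean distances shrink, with t = 2 as in this embodiment).

```python
import numpy as np

def local_density(points, k=3, t=2):
    """kNN local density: rho_i = sum of exp(-d_ij**t) over the k nearest
    neighbours j of sample i (an assumed form of formula (5))."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    rho = np.empty(len(pts))
    for i in range(len(pts)):
        nearest = np.sort(d[i])[1:k + 1]        # k nearest, self excluded
        rho[i] = np.exp(-nearest ** t).sum()    # close neighbours -> high rho
    return rho
```

A tightly packed group of samples gets a high density, while an isolated sample gets a density near zero.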
S42, in order to determine the number k of neighbors, from the particle calculation perspective, converting the local density of the sample into a reasonable particle size, and searching the optimal k value by adopting a reasonable particle size principle. And (3) carrying out iteration on all sample points to obtain the optimal neighbor k value by constructing two standards of coverage rate and specificity. The coverage rate calculation mode is shown in a formula (6):
where d is the average distance between the sample point i and its neighbors, N Ω Is the evolution of the total sample number. d, d ij The absolute value between d reflects the degree of fluctuation in the similarity between sample i and its neighbors.
The specific calculation mode is shown in the formula (7):
where Nei (i) is k neighbors of sample point i, N Ω Is the evolution of the total sample number.
The optimization function of reasonable granularity is constructed as in formulas (8) and (9):

Q_i = cov_i × sp_i   (8)
Q_total = Q_1 + Q_2 + ... + Q_n   (9)

where Q_total is the sum of Q_i over all sample points; each sample i has a Q_i, and the process of obtaining the optimal k value is the process of maximizing Q_total. Iterative calculation gives the optimal neighbor number k = 12.
S43, calculating delta of the sample points, wherein delta represents the minimum distance (center offset distance) from the points with larger density as shown in a formula (10):
for sample i with the greatest local density, its delta i =max j d ij .
S44, selecting a clustering sample center, and calculating local densities ρ and δ through formulas (6) and (10) in order to obtain a correct lesion image segmentation area. The cluster center is a point where ρ and δ are both large. And calculating gamma of the product of rho and delta, and drawing a cluster center decision diagram through gamma to select a cluster center. The calculation of γ is shown in formula (11):
γ i =ρ i ×δ i (11)
The local density, center offset distance and γ of the super-pixel samples are calculated as shown in Table 2 below:

N_Lab     ρ_i     δ_i     γ_i
x_1       5.9824  0.0267  0.1595
x_2       6.1029  0.0267  0.1627
...
x_{n-1}   6.1918  0.0533  0.3302
x_n       4.3026  0.0395  0.1699
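Steps S43-S44 can be sketched together: δ from formula (10) and γ = ρ·δ from formula (11), given the densities and a pairwise distance matrix (a minimal sketch; the decision-graph plotting is omitted).

```python
import numpy as np

def delta_gamma(rho, dist):
    """delta_i: distance to the nearest sample of higher density
    (formula (10)); for the densest sample, its maximum distance.
    gamma_i = rho_i * delta_i (formula (11))."""
    n = len(rho)
    delta = np.empty(n)
    for i in range(n):
        higher = np.where(rho > rho[i])[0]
        delta[i] = dist[i, higher].min() if len(higher) else dist[i].max()
    return delta, rho * delta
```

Samples with large γ, such as x_{n-1} in Table 2, are the candidate cluster centres.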
S45, the result obtained in S44 is taken as the cluster centers, and the first-stage segmentation of the pathological image is performed: the sample points are assigned to the core region of the lesion with a double-layer nearest-neighbor strategy, giving the first clustering result C_one. The division of the core-region sample points follows formulas (12) and (13):

TLN(i, j) = 1 if |Nei(i) ∩ Nei(j)| >= k/2, and 0 otherwise   (12)
POS(C_i^j) = True iff j ∈ Nei(i) and TLN(i, j) = 1   (13)

where k is the optimal number of neighbors already obtained; according to step S42 the optimal neighbor value is 12, so TLN is 1 when the intersection of Nei(i) and Nei(j) contains at least 6 points, and 0 otherwise. Sample j is a neighbor of sample i, j ∈ Nei(i), and TLN comprehensively considers the local structure information between samples i and j by computing the intersection of Nei(i) and Nei(j). When POS(C_i^j) = True, sample point j is assigned to the lesion core region to which sample point i belongs.
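The first-stage assignment of S45 can be sketched as label propagation under the double-layer nearest-neighbor condition: an unlabelled sample j joins a labelled neighbour i's core region when their k-neighbour sets share at least k/2 members (TLN = 1). Here nei is a list of neighbour sets and labels holds the cluster index of each centre (None for unassigned); this is an illustrative sketch of formulas (12)-(13), not the patent's exact code.

```python
def tln_assign(nei, labels, k):
    """Propagate cluster labels to core-region samples using the
    two-layer nearest-neighbour (TLN) criterion."""
    out = list(labels)
    changed = True
    while changed:                          # spread labels layer by layer
        changed = False
        for j in range(len(out)):
            if out[j] is not None:
                continue
            for i in nei[j]:
                if out[i] is not None and len(nei[i] & nei[j]) >= k / 2:
                    out[j] = out[i]         # j joins i's lesion core region
                    changed = True
                    break
    return out
```

Samples that never satisfy the TLN condition stay unassigned and are left for the second-stage edge segmentation.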
Step S50 includes the steps of:
s51, clustering the result C obtained in the step S40 one For segmentation of the border region of the pathology image of the second stage. For any j ε Nei (i), D-S evidence theory considers j ε C h Can be regarded as a evidence that the sample i belongs to the class cluster C h Is a confidence level of (2). Fusing known clustering result C by constructing a new D-S evidence function one And calculating the confidence that the sample points belong to different clusters according to the sample point field information. The calculation mode is shown in the formula (14):
where k represents the optimal number of neighbors already obtained. m_{i,j}(C_h) indicates the degree to which neighbor j supports sample i belonging to cluster C_h. m_{i,j}(Θ) represents the uncertain part, i.e., the uncertainty about which cluster sample i belongs to. e^{-d_ij} represents the difference in the evidence information provided by neighbors at different distances: closer neighbors provide more evidence information, while farther neighbors provide less. |Nei(j) ∩ C_h| represents the number of elements in the intersection of the neighbors of sample j and cluster C_h, measuring the confidence with which neighbor j supports sample i belonging to cluster C_h.
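A sketch of the evidence construction of S51 follows. Formula (14) is not reproduced in this text, so the exact combination below (each neighbor j contributes mass proportional to e^{-d_ij} · |Nei(j) ∩ C_h| / k, with the remainder placed on Θ, then normalized) is one plausible reading, not the patent's definitive function:

```python
import math

def mass_functions(i, nei, clusters, dist):
    """Build one D-S body of evidence per neighbor j of sample i:
    mass for each cluster C_h plus a 'theta' mass for uncertainty."""
    k = len(nei[i])
    evidence = []
    for j in nei[i]:
        m = {}
        w = math.exp(-dist[i][j])  # closer neighbors give more evidence
        for h, c_h in enumerate(clusters):
            m[h] = w * len(nei[j] & c_h) / k  # support for "i belongs to C_h"
        # residual mass on Theta: uncertainty about i's cluster
        m["theta"] = 1.0 - w * sum(len(nei[j] & c_h) for c_h in clusters) / k
        total = sum(m.values())
        evidence.append({key: v / total for key, v in m.items()})
    return evidence
```

Each returned dictionary sums to 1, as a D-S mass function must, and the cluster that a neighbor's own neighborhood overlaps most receives the most support.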
S52, perform mass fusion on the sample points according to the D-S mass function constructed in step S51. Considering the case of only two neighbors, the mass fusion calculation is shown in formulas (15) and (16):
S53, considering the mass functions of all neighbors in Nei(i), the combined mass function is calculated as shown in formulas (17), (18) and (19):
S54, combining the mass functions of all clusters, the global mass function is calculated as shown in formulas (20), (21) and (22):
where K is a normalization constant describing the degree of conflict between different pieces of evidence. A larger K indicates a greater conflict between different pieces of evidence.
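The two-neighbor fusion of S52 can be sketched with Dempster's rule. Formulas (15)-(16) are not reproduced in this text, so this is the standard rule for singleton clusters plus Θ, which matches the surrounding description of K as the conflict between different pieces of evidence:

```python
def combine(m1, m2, hypotheses):
    """Dempster's rule for two mass functions over singleton clusters plus
    'theta': K collects mass assigned to incompatible clusters, and the
    combined masses are renormalized by 1 - K."""
    K = sum(m1[a] * m2[b]
            for a in hypotheses for b in hypotheses if a != b)
    assert K < 1.0, "total conflict: evidence cannot be combined"
    m = {}
    for h in hypotheses:
        # C_h is compatible with itself and with the ignorance set Theta
        m[h] = (m1[h] * m2[h] + m1[h] * m2["theta"]
                + m1["theta"] * m2[h]) / (1 - K)
    m["theta"] = m1["theta"] * m2["theta"] / (1 - K)
    return m
```

Folding this pairwise combination over all neighbors in Nei(i) gives the all-neighbor combination of S53, since Dempster's rule is associative.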
S55, calculating the probability that a sample point belongs to each cluster according to the global mass fusion rule obtained in step S54, and assigning the sample point to the cluster with the highest probability. The assignment formula is shown in formula (23):
Where the expression denotes assigning sample i, located in the edge region of the pathology image, to cluster C_h. max{·} denotes selecting the m_i(C_h) with the largest evidence information.
S56, calculating the probability that the sample points belong to different clusters according to S55 and assigning them to the cluster with the highest probability, the second-stage clustering result C_two is obtained. Fusing the first-stage clustering result C_one with the second-stage clustering result C_two yields the final lesion-image segmentation map.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.

Claims (6)

1. A super-pixel three-evidence DPC method for fundus hard exudation image segmentation, comprising the steps of:
s10, manually acquiring a lesion area of a diabetic fundus image;
s20, preprocessing a diabetic fundus lesion image;
s30, performing SLIC super-pixel processing on the acquired CIELab space, and taking super-pixel points as samples of three evidence DPC;
s40, dividing the image into two stages based on a three-branch clustering theory; the first stage, processing the super-pixel sample by using a double-layer nearest neighbor strategy to obtain a segmentation result of a fundus lesion image core region;
s50, introducing a D-S evidence theory to secondarily segment the edge area of the lesion image on the basis of acquiring the lesion image information returned in the first stage.
2. The method as claimed in claim 1, wherein in step S10, the images are convolved with a Laplacian filter from the OpenCV computer vision processing software library to enhance image edges and detail information, and the enhanced fundus lesion image is processed into RGB space, N_RGB = {x_1, x_2, ..., x_n} being the set of lesion-image pixel information, wherein the i-th sample is x_i = [R_i, G_i, B_i, X_i, Y_i], where R_i, G_i, B_i respectively represent the red, green and blue luminance values of sample i, and X_i, Y_i represent the pixel coordinates of sample i.
3. The method as claimed in claim 1, wherein in step S20, the RGB space of the fundus lesion image is converted into the CIELab color space using the cvtColor function in the OpenCV computer vision processing software library to obtain the CIELab space of the lesion image N_Lab = {x_1, x_2, ..., x_n}, wherein the i-th sample is x_i = [L_i, a_i, b_i, X_i, Y_i], where L_i, a_i, b_i respectively represent the luminance of sample i, the green-to-red component and the blue-to-yellow component, and X_i, Y_i represent the pixel coordinates of sample i.
4. The super-pixel three-branch evidence DPC method for diabetic fundus image segmentation of claim 1, wherein said step S30 includes the steps of:
S31, performing super-pixel processing by the SLIC algorithm to reduce the complexity of image processing, and uniformly distributing cluster centers C_k = [L_k, a_k, b_k, X_k, Y_k] in the lesion image according to the preset super-pixel number H; each super-pixel has a size of |N|/H, where |·| denotes the cardinality of a set, and the distance between adjacent cluster centers is calculated according to formula (1):
S32, re-selecting the cluster center within the n×n neighborhood of each cluster center, and moving the cluster center to the position with the minimum gradient;
S33, assigning a label to each pixel point in the neighborhood around each cluster center of the lesion image;
s34, calculating the distance measurement from the point to the clustering center of the lesion image, wherein the calculation formula is as follows:
wherein d_c represents the color distance; L_k - L_i, a_k - a_i and b_k - b_i represent the relative distances between pixel points k and i on the L, a and b channels respectively; d_s represents the spatial distance, with X_k - X_i and Y_k - Y_i the relative distances between pixel points k and i in the x-y coordinate system; S is the maximum intra-class spatial distance and m is the maximum color distance;
and S35, iterating continuously until the cluster center of each pixel point no longer changes, outputting the fundus lesion image after super-pixel processing, and taking the super-pixel points as samples of the super-pixel three-evidence DPC method.
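As an illustrative sketch (not part of the claimed method), the combined distance measure of S34 might take the standard SLIC form; since formulas (2)-(4) are not reproduced in this text, the exact weighting of d_c and d_s below is an assumption based on the definitions of S, the intra-class spatial distance, and m, the color distance:

```python
import math

def slic_distance(p, c, S, m):
    """Combined SLIC distance between a pixel p and a cluster center c,
    each given as a 5-vector [L, a, b, X, Y] in CIELab-plus-position space.
    S is the grid interval between centers and m the compactness weight."""
    L1, a1, b1, x1, y1 = p
    L2, a2, b2, x2, y2 = c
    d_c = math.sqrt((L1 - L2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2)
    d_s = math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
    # spatial term scaled by the grid interval, weighted by compactness m
    return math.sqrt(d_c ** 2 + (d_s / S) ** 2 * m ** 2)
```

A larger m pulls the measure toward spatial proximity (more compact super-pixels); a smaller m lets color similarity dominate.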
5. The method of super-pixel three-branch evidence DPC for diabetic fundus image segmentation according to claim 1, wherein said step S40 includes the steps of:
s41, calculating the local density rho of the sample points as shown in a formula (5):
wherein d_ij is the Euclidean distance between sample points i and j in CIELab space, Nei(i) is the set of k neighbors of sample point i, and t is a positive integer greater than zero used to adjust the influence of the Euclidean distance d_ij on the local density ρ_i;
S42, in order to determine the number of neighbors k, from the viewpoint of granular computing the local density of a sample is converted into a reasonable granularity; the optimal k value is searched according to the principle of reasonable granularity by constructing two criteria, coverage and specificity, and iterating over all sample points to obtain the optimal neighbor value k; the coverage cov is calculated as shown in formula (6):
where d is the average distance between sample point i and its neighbors, N_Ω is the square root of the total number of samples, and the absolute difference between d_ij and d reflects the degree of fluctuation of the similarity between sample i and its neighbors;
the specificity sp is calculated as shown in formula (7):
where Nei(i) is the set of k neighbors of sample point i, and N_Ω is the square root of the total number of samples;
the optimization function for constructing reasonable granularity is shown in formulas (8) and (9):
Q_i = cov_i × sp_i (8)
wherein the process of obtaining the optimal k value is the process of optimizing Q_total; for each sample i there is a Q_i, and Q_total = Q_1 + Q_2 + … + Q_n represents the sum of Q_i over all sample points;
S43, calculating δ for the sample points, where δ represents the minimum distance to points of higher density, i.e., the center offset distance, as shown in formula (10):
for the sample i with the greatest local density, δ_i = max_j d_ij;
S44, selecting cluster sample centers: to obtain a correct lesion-image segmentation region, the local density ρ and the center offset distance δ are calculated through formulas (5) and (10), the points with both large ρ and large δ are taken as cluster centers, γ is calculated as the product of ρ and δ, and a cluster-center decision diagram is drawn through γ to select the cluster centers, where the calculation of γ is shown in formula (11):
γ_i = ρ_i × δ_i (11)
S45, taking the result obtained in S44 as the cluster centers, performing the first-stage segmentation of the pathology image, and assigning sample points to the core region of the lesion with the two-layer nearest-neighbor strategy to obtain the first clustering result C_one; the assignment of core-region sample points is shown in formulas (12) and (13):
wherein k represents the optimal number of neighbors already obtained, and sample j is a neighbor of sample i, j ∈ Nei(i); TLN comprehensively considers the local structure information between sample i and sample j by calculating the intersection of Nei(i) and Nei(j); when |Nei(i) ∩ Nei(j)| ≥ k/2, the TLN value is 1, otherwise it is 0;
wherein,it holds that the sample point j will be assigned to the lesion core area to which the sample point i belongs.
6. The method of super-pixel three-branch evidence DPC for diabetic fundus image segmentation according to claim 1, wherein the step S50 includes the steps of:
S51, the clustering result C_one obtained in step S40 is used for the second-stage segmentation of the edge region of the pathology image; for any j ∈ Nei(i), D-S evidence theory regards j ∈ C_h as a piece of evidence that sample i belongs to cluster C_h; by constructing a new D-S evidence function, the known clustering result C_one and the neighborhood information of the sample points are fused, and the confidence that the sample points belong to different clusters is calculated as shown in formula (14):
where k represents the optimal number of neighbors already obtained; m_{i,j}(C_h) indicates the degree to which neighbor j supports sample i belonging to cluster C_h; m_{i,j}(Θ) represents the uncertain part, i.e., the uncertainty about which cluster sample i belongs to; e^{-d_ij} represents the difference in the evidence information provided by neighbors at different distances: closer neighbors provide more evidence information, while farther neighbors provide less; |Nei(j) ∩ C_h| represents the number of elements in the intersection of the neighbors of sample j and cluster C_h, measuring the confidence with which neighbor j supports sample i belonging to cluster C_h;
S52, performing mass fusion on the sample points according to the D-S mass function constructed in step S51; considering the case of only two neighbors, the mass fusion calculation is shown in formulas (15) and (16):
S53, considering the mass functions of all neighbors in Nei(i), the combined mass function is calculated as shown in formulas (17), (18) and (19):
S54, combining the mass functions of all clusters, the global mass function is calculated as shown in formulas (20), (21) and (22):
wherein K is a normalization constant describing the degree of conflict between different pieces of evidence, and a larger K indicates a greater conflict between different pieces of evidence;
S55, calculating the probability that a sample point belongs to each cluster according to the global mass fusion rule obtained in step S54, and assigning the sample point to the cluster with the largest probability, where the assignment formula is shown in formula (23):
wherein the expression denotes assigning sample i, located in the edge region of the pathology image, to cluster C_h, and max{·} denotes selecting the m_i(C_h) with the largest evidence information;
S56, calculating the probability that the sample points belong to different clusters according to step S55 and assigning them to the cluster with the highest probability to obtain the second-stage clustering result C_two; fusing the first-stage clustering result C_one and the second-stage clustering result C_two yields the segmentation map of the final fundus hard exudation image.
Application CN202311108211.2A, filed 2023-08-30; published as CN117058393A on 2023-11-14: Super-pixel three-evidence DPC method for fundus hard exudation image segmentation.
