CN107437252B - Method and device for constructing classification model for macular lesion region segmentation - Google Patents
- Publication number
- CN107437252B CN107437252B CN201710661951.7A CN201710661951A CN107437252B CN 107437252 B CN107437252 B CN 107437252B CN 201710661951 A CN201710661951 A CN 201710661951A CN 107437252 B CN107437252 B CN 107437252B
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Abstract
The invention discloses a classification model construction method for segmentation of the macular lesion area of a fundus image, comprising the following steps: selecting a plurality of fundus images, graying them to obtain gray-level images, and sampling the foreground and the background of the gray-level images respectively to obtain samples; obtaining transformation matrices by a generalized low-rank approximation method and reducing the dimension of the samples based on them to obtain low-rank approximation matrices of the samples; adding label information to the low-rank approximation matrices as supervision and constructing a manifold regularization term; constructing an objective function combining the generalized low-rank approximation method and the manifold regularization term, and solving it by an iterative optimization method to obtain the optimal transformation matrices and optimal low-rank approximation matrices of the samples; and constructing a classification model based on the optimal low-rank approximation matrices and the label information. The classification model of the invention extracts low-dimensional, highly distinguishable feature descriptors and improves segmentation precision.
Description
Technical Field
The invention relates to the field of medical image processing, in particular to a classification model construction method and device for segmentation of macular degeneration areas of fundus images and an image segmentation method.
Background
The eye is the most important organ through which human beings obtain information. The macula, located at the back of the eyeball, is the key tissue with which people perceive external light and object images. Disease in this area decreases vision and can even cause vision loss, and macular degeneration is one of the leading causes of blindness in the elderly. When doctors diagnose the macular degeneration area from fundus images manually, the results suffer from low accuracy, poor repeatability, and strong subjectivity. Therefore, research on and application of macular lesion region segmentation technology are urgently needed to meet clinical auxiliary medical needs such as screening, diagnosis, and treatment of macular lesions.
Many existing macular lesion segmentation methods are feature-based. The features they use generally fall into two types: new features obtained by combining several low-level features, and well-established hand-crafted descriptors. Such features only extract the low-level content of the image; their selection and design are time-consuming, labor-intensive, and overly dependent on expert knowledge, relying to a large extent on experience and luck, and tuning them requires a great deal of time. Moreover, these methods are limited in robustness and applicability.
In terms of feature distinguishability, discriminative feature learning algorithms fall mainly into two categories. One designs new algorithms on top of traditional hand-crafted descriptors such as SIFT, LBP, and HOG to obtain new features. The other reconstructs and parameterizes existing hand-crafted descriptors with prior knowledge to obtain new features. These algorithms have proved effective in research fields such as image classification and face recognition. Supervised learning obtains an optimal model by training on existing labeled samples, then maps inputs to outputs with that model and makes decisions on the outputs, thereby gaining the ability to classify unknown data. Compared with traditional rule-based methods, supervised learning models have remarkable advantages in representation capability and effect.
However, applications of supervised feature learning to fundus image segmentation remain relatively rare. Existing manually designed features for fundus image segmentation lack strong distinctiveness and descriptive power and cannot yield sufficiently accurate segmentation results. How to learn features with stronger expressive ability remains a key and difficult point of current research. Therefore, how to acquire more distinctive features and achieve accurate and rapid segmentation of the macular region is a technical problem to be urgently solved by those skilled in the art.
Disclosure of Invention
In order to solve the above problems, the invention provides a classification model construction method for segmentation of the macular degeneration area of a fundus image. The method learns new features by combining supervised learning with the low-level features of the image. The selected images are grayed and sampled. Based on the gray-level features of the image samples, the samples are first reduced in dimension with the generalized low-rank matrix approximation method, the label information of the samples is then added as a supervision term, and the low-rank approximate representation of the samples is finally obtained. After vectorization, these representations are fed as features to a classifier, which is obtained through training; the classifier classifies the pixels of a test image, thereby completing classification-based segmentation. The features obtained by this supervised feature learning method are highly distinguishable, so the characteristics of the lesion area can be described better and an accurate segmentation result obtained.
In order to achieve the purpose, the invention adopts the following technical scheme:
a classification model construction method for segmentation of macular lesion areas of fundus images comprises the following steps:
step 1: selecting a plurality of fundus images, carrying out graying processing on the fundus images to obtain a plurality of gray level images, and respectively sampling the foreground and the background of the gray level images to obtain samples;
step 2: obtaining a conversion matrix by adopting a generalized low-rank approximation method, and carrying out dimensionality reduction processing on a sample based on the conversion matrix to obtain a low-rank approximation matrix of the sample;
step 3: adding label information into the low-rank approximation matrix of the sample as supervision, and constructing a manifold regularization term based on the low-rank approximation matrix and the label information;
step 4: constructing an objective function combining the generalized low-rank approximation method and the manifold regularization term, and solving the objective function by an iterative optimization method to obtain the optimal conversion matrix and the optimal low-rank approximation matrix of the sample;
step 5: constructing a classification model based on the optimal low-rank approximation matrix and the label information.
Further, the step 1 specifically includes:
step 101: selecting fundus images containing different types and sizes of macular regions from the STARE data set, and carrying out gray processing on the fundus images;
step 102: manually marking the positions of the foreground point and the background point to be used as image marks;
step 103: and respectively sampling the foreground and the background according to the image marks to obtain samples.
Further, the step 2 specifically includes:
step 201: constructing an optimization problem to express the original generalized low-rank approximation problem; the optimization problem minimizes the total reconstruction error over the set of input matrices, and yields two transformation matrices $U\in\mathbb{R}^{r\times l_1}$ and $V\in\mathbb{R}^{c\times l_2}$ and low-rank representation matrices $A_i\in\mathbb{R}^{l_1\times l_2}$, the formula being as follows:
$$\min_{U,V,\{A_i\}}\ \sum_{i=1}^{n}\left\|S_i-UA_iV^{\tau}\right\|_F^2\quad\text{s.t. }U^{\tau}U=I_{l_1},\ V^{\tau}V=I_{l_2}$$
where $\|\cdot\|_F$ denotes the F norm (Frobenius norm), $n$ the number of training samples, $S_i\in\mathbb{R}^{r\times c}$ the $i$-th training sample, $A_i$ the low-rank approximation matrix corresponding to $S_i$, $U$ and $V$ the two transformation matrices, and $I_{l_1}$ and $I_{l_2}$ identity matrices;
step 202: solving for the transformation matrices U and V, and approximately representing each sample $S_i$ by $A_i=U^{\tau}S_iV$.
Further, the step 3 specifically includes:
step 301: constructing a similarity matrix $M$, the element $M_{ij}$ of which represents the similarity between training samples $i$ and $j$;
step 302: adding the sample labels $L\in\{1,0\}$ to the obtained low-rank sample representations $\{A_i\}$ as supervision, mining the geometry of the data distribution, and constructing the manifold regularization term
$$R=\sum_{i,j=1}^{n}M_{ij}\left\|A_i-A_j\right\|_F^2$$
where $A_i$ and $A_j$ denote the low-rank approximation matrices of the $i$-th and $j$-th samples, respectively; this term reflects the manifold space structure of the training samples.
The method for constructing the similarity matrix $M$ in step 301 comprises: constructing a graph of $n$ points, each corresponding to a sample, and connecting points $i$ and $j$ if $i$ is among the $k$ nearest neighbors of $j$ or $j$ is among the $k$ nearest neighbors of $i$; $M_{ij}$ is then expressed as:
$$M_{ij}=\begin{cases}e^{-\left\|S_i-S_j\right\|_F^2/\alpha}&\text{if }i\text{ and }j\text{ are connected and }L_i=L_j\\0&\text{otherwise}\end{cases}$$
where $\alpha$ is a parameter, and $L_i$ and $L_j$ denote the labels of training samples $i$ and $j$: $L=1$ if the sample belongs to the foreground, and $L=0$ if it belongs to the background.
Further, the step 4 specifically includes:
step 401: combining the generalized low-rank approximation method and the regularization term to construct the objective function:
$$\min_{U,V,\{A_i\}}\ \sum_{i=1}^{n}\left\|S_i-UA_iV^{\tau}\right\|_F^2+\gamma\sum_{i,j=1}^{n}M_{ij}\left\|A_i-A_j\right\|_F^2\quad\text{s.t. }U^{\tau}U=I_{l_1},\ V^{\tau}V=I_{l_2}$$
where $\gamma\in(0,\infty)$ is a parameter;
step 402: solving for the optimal U, V and $\{A_i\}$ by an iterative optimization method, which specifically comprises:
substituting $A_i=U^{\tau}S_iV$ and deleting constant terms, the objective function is rewritten as the alternating maximization of $\mathrm{Tr}(U^{\tau}X_UU)$ and $\mathrm{Tr}(V^{\tau}X_VV)$. Given an initial $V_0=(I_0,0)^{\tau}$, with $I_0$ an identity matrix, the optimal U is obtained from:
$$\max_U\ \mathrm{Tr}(U^{\tau}X_UU),\qquad X_U=\sum_{i=1}^{n}S_iVV^{\tau}S_i^{\tau}-\gamma\sum_{i,j=1}^{n}M_{ij}(S_i-S_j)VV^{\tau}(S_i-S_j)^{\tau}$$
the formula reaching its maximum, and the optimal solution being obtained, only when U consists of the eigenvectors corresponding to the $l_1$ largest eigenvalues of $X_U$; the optimal U thus computed is then used to solve for the optimal V:
$$\max_V\ \mathrm{Tr}(V^{\tau}X_VV),\qquad X_V=\sum_{i=1}^{n}S_i^{\tau}UU^{\tau}S_i-\gamma\sum_{i,j=1}^{n}M_{ij}(S_i-S_j)^{\tau}UU^{\tau}(S_i-S_j)$$
the formula reaching its maximum only when V consists of the eigenvectors corresponding to the $l_2$ largest eigenvalues of $X_V$;
based on the computed V, U is updated by recomputing the eigenvectors of $X_U$; this process is repeated until convergence, finally yielding the optimal U, V and $\{A_i\}$.
further, the step 5 specifically includes:
step 501: vectorizing the optimal low-rank approximate matrix of the sample to obtain a characteristic vector;
step 502: and training the SVM classifier by using the feature vectors and the corresponding labels to obtain the trained classifier.
Further, the method comprises a step 6 of applying the classification model to a test image, which specifically includes:
step 601: carrying out graying processing and sampling on the test image;
step 602: reducing the dimension of the sample of the test image by adopting the optimal conversion matrix to obtain an optimal low-rank approximate matrix of the test image;
step 603: and taking the optimal low-rank approximate matrix of the test image as the input of the SVM classifier to obtain a classification result and further obtain a segmentation result.
Based on the second aspect of the present invention, the present invention also provides a fundus image macular degeneration area segmentation method based on the classification model, comprising: step 1: classifying the test image based on the classification model to obtain the foreground points and background points of the test image; step 2: taking the region where the foreground points are located as the segmentation result.
Based on the third aspect of the present invention, the present invention also provides a computer device for constructing a classification model for segmentation of macular degeneration areas of fundus images, comprising: memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps of:
receiving selection of a user on an eye fundus training image, and carrying out graying processing on the training image to obtain a gray image; respectively sampling the foreground and the background of the gray level image to obtain samples;
obtaining a conversion matrix by adopting a generalized low-rank approximation method, and carrying out dimensionality reduction processing on a sample based on the conversion matrix to obtain a low-rank approximation matrix of the sample;
adding label information into the low-rank approximation matrix of the sample as supervision, and constructing a manifold regularization term based on the low-rank approximation matrix and the label information;
constructing an objective function combining the generalized low-rank approximation method and the manifold regularization term, and solving the objective function by an iterative optimization method to obtain the optimal conversion matrix and the optimal low-rank approximation matrix of the sample;
and constructing a classification model based on the optimal low-rank approximation matrix and the label information.
Based on the fourth aspect of the present invention, the present invention also provides a computer-readable storage medium having stored thereon a computer program for classification model construction for segmentation of macular degeneration areas of fundus images, which when executed by a processor implements the steps of:
receiving selection of a user on an eye fundus training image, and carrying out graying processing on the training image to obtain a gray image; respectively sampling the foreground and the background of the gray level image to obtain samples;
obtaining a conversion matrix by adopting a generalized low-rank approximation method, and carrying out dimensionality reduction processing on a sample based on the conversion matrix to obtain a low-rank approximation matrix of the sample;
adding label information into the low-rank approximation matrix of the sample as supervision, and constructing a manifold regularization term based on the low-rank approximation matrix and the label information;
constructing an objective function combining the generalized low-rank approximation method and the manifold regularization term, and solving the objective function by an iterative optimization method to obtain the optimal conversion matrix and the optimal low-rank approximation matrix of the sample;
and constructing a classification model based on the optimal low-rank approximation matrix and the label information.
The invention has the beneficial effects that:
1. The present invention combines supervised learning with the low-level features of the image to learn new feature descriptors. A generalized low-rank matrix approximation performs dimensionality reduction and is combined with manifold regularization as a supervision constraint, and feature descriptors of low dimension and high distinguishability are obtained through iterative optimization. Compared with traditional hand-crafted descriptors, these descriptors are obtained through supervised learning, need no manual selection and design, and have stronger descriptive power.
2. In practice, applying this descriptor to segmentation of the macular region of fundus images yields more accurate segmentation results. The macular degeneration area can be quantified with the segmentation result, helping doctors make more accurate diagnoses.
Drawings
FIG. 1 is a flowchart of a fundus image macular region segmentation method according to the present invention;
FIG. 2 is a schematic diagram of the method for sampling, including a whole picture, a foreground sample, and a background sample;
fig. 3 is a graph of the impact of different sample sizes on classification accuracy.
FIG. 4 is a graph of the segmentation results in different types of 3 fundus images using the present invention;
FIG. 5 is a ROC graph of the segmentation results of the above 3 fundus images according to the present invention and two other methods.
Detailed Description
The invention is further described with reference to the following figures and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Example one
A classification model construction method for segmentation of macular degeneration areas of fundus images, as shown in fig. 1, includes the following steps:
step 1: selecting a plurality of fundus images, carrying out graying processing on the fundus images to obtain a plurality of gray level images, and respectively sampling the foreground and the background of the gray level images to obtain samples;
step 2: obtaining a conversion matrix by adopting a generalized low-rank approximation method, and carrying out dimensionality reduction processing on a sample based on the conversion matrix to obtain a low-rank approximation matrix of the sample;
step 3: adding label information into the low-rank approximation matrix of the sample as supervision, and constructing a regularization term based on the low-rank approximation matrix and the label information;
step 4: constructing an objective function combining the generalized low-rank approximation method and the regularization term, and obtaining the optimal conversion matrix and the optimal low-rank approximation matrix of the sample by an iterative optimization method;
step 5: constructing a classification model based on the optimal low-rank approximation matrix and the label information.
The step 1 specifically comprises:
step 101: selecting fundus images containing different types and sizes of macular regions from the STARE data set, and carrying out gray processing on the fundus images;
step 102: manually marking the positions of the foreground point and the background point to be used as image marks;
step 103: respectively sampling the foreground and the background according to the image marks to obtain samples.
The sampling takes a k × k square neighborhood centered at a pixel point as a sample. In this embodiment, 5 representative fundus images were selected, and n/2 foreground samples and n/2 background samples were taken to form n = 10000 training samples in total, as shown in fig. 2. Experiments show that the classification result is best when the sample size k is 15, as shown in fig. 3.
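The patch sampling described above can be sketched as follows; this is a minimal illustration, and the helper name `sample_patches`, the synthetic image, and the border handling are assumptions not taken from the patent.

```python
import numpy as np

def sample_patches(gray, points, k=15):
    """Extract k x k square neighborhoods centered at the given (row, col)
    points of a grayscale image; points whose window would cross the image
    border are skipped (an assumption, the patent does not say)."""
    r = k // 2
    patches = []
    for (y, x) in points:
        if r <= y < gray.shape[0] - r and r <= x < gray.shape[1] - r:
            patches.append(gray[y - r:y + r + 1, x - r:x + r + 1])
    return patches

# toy example on a synthetic 100 x 100 "grayscale image"
rng = np.random.default_rng(0)
img = rng.random((100, 100))
fg = [(50, 50), (60, 40)]   # hand-marked foreground centers (hypothetical)
bg = [(10, 10), (90, 90)]   # hand-marked background centers (hypothetical)
samples = sample_patches(img, fg + bg, k=15)
print(len(samples), samples[0].shape)  # prints: 4 (15, 15)
```

In the embodiment, n/2 such patches would be drawn from the marked foreground points and n/2 from the background points of each grayed training image.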
The step 2 specifically comprises:
step 201: constructing an optimization problem to express the original generalized low-rank approximation problem; the optimization problem minimizes the total reconstruction error over the set of input matrices, and yields two transformation matrices $U\in\mathbb{R}^{r\times l_1}$ and $V\in\mathbb{R}^{c\times l_2}$ and low-rank representation matrices $A_i\in\mathbb{R}^{l_1\times l_2}$, the formula being as follows:
$$\min_{U,V,\{A_i\}}\ \sum_{i=1}^{n}\left\|S_i-UA_iV^{\tau}\right\|_F^2\quad\text{s.t. }U^{\tau}U=I_{l_1},\ V^{\tau}V=I_{l_2}$$
where $\|\cdot\|_F$ denotes the F norm (Frobenius norm), $n$ the number of training samples, $S_i\in\mathbb{R}^{r\times c}$ the $i$-th training sample, $A_i$ the low-rank approximation matrix corresponding to $S_i$, $U$ and $V$ the two transformation matrices, and $I_{l_1}$ and $I_{l_2}$ identity matrices;
step 202: once the transformation matrices U and V are obtained, each training sample $S_i$ is approximated with $A_i=U^{\tau}S_iV$.
The step 3 specifically includes:
step 301: constructing a similarity matrix $M$, the element $M_{ij}$ of which represents the similarity between training samples $i$ and $j$;
step 302: adding the sample labels $L\in\{1,0\}$ to the obtained low-rank sample representations $\{A_i\}$ as supervision, mining the geometry of the data distribution, and constructing the manifold regularization term
$$R=\sum_{i,j=1}^{n}M_{ij}\left\|A_i-A_j\right\|_F^2$$
where $A_i$ and $A_j$ denote the low-rank approximation matrices of the $i$-th and $j$-th samples, respectively; this term reflects the manifold space structure of the training samples.
The method for constructing the similarity matrix $M$ in step 301 comprises: constructing a graph structure of $n$ points, each corresponding to a training sample, and connecting points $i$ and $j$ if $i$ is among the $k$ nearest neighbors of $j$ or $j$ is among the $k$ nearest neighbors of $i$; $M_{ij}$ is then expressed as:
$$M_{ij}=\begin{cases}e^{-\left\|S_i-S_j\right\|_F^2/\alpha}&\text{if }i\text{ and }j\text{ are connected and }L_i=L_j\\0&\text{otherwise}\end{cases}$$
where $\alpha$ is a parameter, and $L_i$ and $L_j$ denote the labels of training samples $i$ and $j$: $L=1$ if the sample belongs to the foreground, and $L=0$ if it belongs to the background.
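The kNN-graph construction of step 301 can be sketched as follows. The heat-kernel weight with label agreement and the helper name `similarity_matrix` are assumptions, since the patent supplies the exact M_ij formula only as an image.

```python
import numpy as np

def similarity_matrix(samples, labels, k=5, alpha=1.0):
    """Connect i and j when either is among the other's k nearest neighbors
    (step 301); weight connected same-label pairs with a heat kernel
    (assumed form), leaving all other entries zero."""
    n = len(samples)
    X = np.stack([s.ravel() for s in samples])            # flatten patches
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]               # k nearest, excluding self
    M = np.zeros((n, n))
    for i in range(n):
        for j in nn[i]:
            if labels[i] == labels[j]:                    # label supervision
                w = np.exp(-d2[i, j] / alpha)
                M[i, j] = M[j, i] = w                     # symmetric graph
    return M
```

Symmetrizing M reflects the "either direction" connection rule; the parameter alpha plays the role of the kernel width.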
The step 4 specifically includes:
step 401: combining the formulas of steps 2 and 3 yields:
$$\min_{U,V,\{A_i\}}\ \sum_{i=1}^{n}\left\|S_i-UA_iV^{\tau}\right\|_F^2+\gamma\sum_{i,j=1}^{n}M_{ij}\left\|A_i-A_j\right\|_F^2\quad\text{s.t. }U^{\tau}U=I_{l_1},\ V^{\tau}V=I_{l_2}$$
where $\gamma\in(0,\infty)$ is a parameter; in this embodiment the parameter $\gamma$ takes the value 1.
Expanding the reconstruction term rewrites the formula as:
$$\sum_{i=1}^{n}\left(\left\|S_i\right\|_F^2-2\,\mathrm{Tr}\!\left(A_i^{\tau}U^{\tau}S_iV\right)+\left\|A_i\right\|_F^2\right)+\gamma\sum_{i,j=1}^{n}M_{ij}\left\|A_i-A_j\right\|_F^2$$
Since the first term $\sum_i\|S_i\|_F^2$ is constant, deleting it has no effect, resulting in the new equation:
$$\min_{U,V,\{A_i\}}\ \sum_{i=1}^{n}\left(\left\|A_i\right\|_F^2-2\,\mathrm{Tr}\!\left(A_i^{\tau}U^{\tau}S_iV\right)\right)+\gamma\sum_{i,j=1}^{n}M_{ij}\left\|A_i-A_j\right\|_F^2\qquad(4)$$
The formula reaches its minimum with respect to $A_i$ only when $A_i=U^{\tau}S_iV$; substituting this into equation (4) and deleting constant terms yields the final optimization:
$$\max_{U,V}\ \sum_{i=1}^{n}\mathrm{Tr}\!\left(U^{\tau}S_iVV^{\tau}S_i^{\tau}U\right)-\gamma\sum_{i,j=1}^{n}M_{ij}\,\mathrm{Tr}\!\left(U^{\tau}(S_i-S_j)VV^{\tau}(S_i-S_j)^{\tau}U\right)$$
The above formula is solved by iterative optimization. Given an initial $V_0=(I_0,0)^{\tau}$, where $I_0$ is an identity matrix, U is found by solving the maximum of $\mathrm{Tr}(U^{\tau}X_UU)$, where
$$X_U=\sum_{i=1}^{n}S_iVV^{\tau}S_i^{\tau}-\gamma\sum_{i,j=1}^{n}M_{ij}(S_i-S_j)VV^{\tau}(S_i-S_j)^{\tau}$$
The formula reaches its maximum, and the optimal solution is obtained, only when U consists of the eigenvectors corresponding to the $l_1$ largest eigenvalues of $X_U$. With the optimal U so computed, V is found by solving the maximum of $\mathrm{Tr}(V^{\tau}X_VV)$, where
$$X_V=\sum_{i=1}^{n}S_i^{\tau}UU^{\tau}S_i-\gamma\sum_{i,j=1}^{n}M_{ij}(S_i-S_j)^{\tau}UU^{\tau}(S_i-S_j)$$
The formula reaches its maximum only when V consists of the eigenvectors corresponding to the $l_2$ largest eigenvalues of $X_V$. Based on the computed V, U is updated by recomputing the eigenvectors of $X_U$; this process is repeated until convergence, finally yielding the optimal U, V and $\{A_i\}$.
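The alternating eigen-decomposition described above can be sketched as follows. The function name and the exact composition of X_U and X_V with the manifold term reflect our reading of the derivation (the patent's formula images are not reproduced), so this is a sketch under those assumptions rather than the patent's reference implementation.

```python
import numpy as np

def glram_manifold(S, M, l1, l2, gamma=1.0, iters=10):
    """Alternately maximize Tr(U^T X_U U) and Tr(V^T X_V V): U (resp. V) is
    set to the eigenvectors of the l1 (resp. l2) largest eigenvalues, with
    X_U / X_V including the manifold term weighted by gamma."""
    n, r, c = S.shape
    V = np.eye(c, l2)                      # V0 = (I, 0)^T initialization
    for _ in range(iters):
        VV = V @ V.T
        XU = sum(S[i] @ VV @ S[i].T for i in range(n))
        XU -= gamma * sum(M[i, j] * (S[i] - S[j]) @ VV @ (S[i] - S[j]).T
                          for i in range(n) for j in range(n) if M[i, j])
        _, Q = np.linalg.eigh(XU)          # ascending eigenvalues
        U = Q[:, -l1:]                     # eigenvectors of l1 largest eigenvalues
        UU = U @ U.T
        XV = sum(S[i].T @ UU @ S[i] for i in range(n))
        XV -= gamma * sum(M[i, j] * (S[i] - S[j]).T @ UU @ (S[i] - S[j])
                          for i in range(n) for j in range(n) if M[i, j])
        _, Q = np.linalg.eigh(XV)
        V = Q[:, -l2:]
    A = np.stack([U.T @ S[i] @ V for i in range(n)])   # A_i = U^T S_i V
    return U, V, A
```

A fixed iteration count stands in for the convergence test of the patent; in practice one would stop when the objective change falls below a tolerance.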
the step 5 specifically includes:
step 501: vectorizing the optimal low-rank approximation matrices $A_i$ of the samples obtained in step 4 to obtain feature vectors;
step 502: and training the SVM classifier by using the feature vectors and the corresponding labels to obtain the trained classifier.
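Steps 501 and 502 can be sketched with scikit-learn's `SVC`; the placeholder data and the RBF kernel are assumptions, as the patent does not specify a kernel.

```python
import numpy as np
from sklearn.svm import SVC

# A: stacked low-rank matrices A_i from step 4; labels: 1 = foreground, 0 = background
rng = np.random.default_rng(0)
A = rng.random((20, 5, 5))                      # placeholder A_i matrices
labels = np.array([1] * 10 + [0] * 10)

features = A.reshape(len(A), -1)                # step 501: vectorize each A_i
clf = SVC(kernel="rbf").fit(features, labels)   # step 502: train the SVM
print(clf.predict(features[:2]))
```

The trained `clf` is the classification model of step 5; at test time the same vectorization is applied to the test image's low-rank matrices.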
Example two
Based on the classification model in the first embodiment, the present embodiment provides a method for segmenting a macular degeneration area of a fundus image, which adopts the classification model in the first embodiment, and includes:
step 1: classifying the test image based on the classification model to obtain foreground points and background points of the test image;
step 2: taking the region where the foreground points are located as the segmentation result.
Wherein, step 1 specifically includes:
graying the test image, and scanning the whole image by a k multiplied by k sliding window for sampling;
reducing the dimension of the sample of the test image by adopting the optimal conversion matrix to obtain an optimal low-rank approximate matrix of the test image;
and taking the optimal low-rank approximate matrix of the test image as the input of the SVM classifier to obtain a classification result.
A pixel is labeled 1 if the test sample centered at it is classified as a foreground point, and 0 otherwise; this yields the segmentation result, as shown in fig. 4.
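The sliding-window classification of this embodiment can be sketched as follows; the helper name `segment` and the choice to leave windowless border pixels as background are assumptions.

```python
import numpy as np

def segment(gray, U, V, clf, k=15):
    """Slide a k x k window over the grayscale test image, project each
    patch to A = U^T S V with the learned transforms, and let the trained
    classifier assign 1 (foreground) or 0 (background) to the center pixel."""
    h, w = gray.shape
    r = k // 2
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(r, h - r):
        feats, cols = [], []
        for x in range(r, w - r):
            S = gray[y - r:y + r + 1, x - r:x + r + 1]
            feats.append((U.T @ S @ V).ravel())   # vectorized A_i as feature
            cols.append(x)
        out[y, cols] = clf.predict(np.array(feats))   # one row per batch
    return out
```

The binary map `out` is the classification-based segmentation: the region of pixels labeled 1 is taken as the macular lesion area.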
EXAMPLE III
Based on the image segmentation method, the embodiment provides a computer device for constructing a classification model for segmentation of macular degeneration areas of fundus images, comprising: memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps of:
receiving selection of a user on an eye fundus training image, and carrying out graying processing on the training image to obtain a gray image; respectively sampling the foreground and the background of the gray level image to obtain samples;
obtaining a conversion matrix by adopting a generalized low-rank approximation method, and carrying out dimensionality reduction processing on a sample based on the conversion matrix to obtain a low-rank approximation matrix of the sample;
adding label information into the low-rank approximate matrix of the sample as supervision, and constructing a manifold regularization item based on the low-rank approximate matrix and the label information;
constructing an objective function by combining a generalized low-rank approximation method and the manifold regularization item, and solving the objective function by adopting an iterative optimization method to obtain an optimal conversion matrix and an optimal low-rank approximation matrix of the sample;
and constructing a classification model based on the optimal low-rank approximation matrix and the label information.
The sampling is based on image marks of foreground and background points manually annotated by the user; according to these marks, the foreground and background of the gray-scale image are sampled separately to obtain the samples.
Example four
Based on the image segmentation method described above, the present embodiment provides a computer-readable storage medium having stored thereon a computer program for classification model construction for segmentation of macular degeneration areas of fundus images, wherein the program when executed by a processor implements the steps of:
receiving selection of a user on an eye fundus training image, and carrying out graying processing on the training image to obtain a gray image; respectively sampling the foreground and the background of the gray level image to obtain samples;
obtaining a conversion matrix by adopting a generalized low-rank approximation method, and carrying out dimensionality reduction processing on a sample based on the conversion matrix to obtain a low-rank approximation matrix of the sample;
adding label information into the low-rank approximate matrix of the sample as supervision, and constructing a manifold regularization item based on the low-rank approximate matrix and the label information;
constructing an objective function by combining a generalized low-rank approximation method and the manifold regularization item, and solving the objective function by adopting an iterative optimization method to obtain an optimal conversion matrix and an optimal low-rank approximation matrix of the sample;
and constructing a classification model based on the optimal low-rank approximation matrix and the label information.
The sampling is based on image marks of foreground and background points manually annotated by the user; according to these marks, the foreground and background of the gray-scale image are sampled separately to obtain the samples.
In the apparatuses of the third and fourth embodiments, the steps correspond to the first embodiment of the method, and the detailed description thereof can be found in the relevant description of the first embodiment. The term "computer-readable storage medium" should be taken to include a single medium or multiple media containing one or more sets of instructions; it should also be understood to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by a processor and that cause the processor to perform any of the methods of the present invention.
The experimental effect is as follows:
The method is used to segment images of different types of macula lutea; the segmentation results are shown in figure 4. FIG. 5 shows the ROC curves of the segmentation results obtained on the same image by the method of the present invention, the HALT method, and the method proposed by Liu et al. Table 6 shows a statistical comparison of the present invention with the two other methods on 21 arbitrarily chosen images from the STARE data set.
TABLE 6
 | % sensitivity | % specificity | % accuracy |
---|---|---|---|
The method of the invention | 90.47 | 96.46 | 96.35 |
HALT method | 85.75 | 92.69 | 92.58 |
Liu et al. method | 84.04 | 91.75 | 91.69 |
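The three statistics reported in Table 6 can be computed from pixel-level confusion counts in the usual way; a minimal sketch (function and variable names are assumptions, not from the patent):

```python
def seg_metrics(pred, truth):
    """Sensitivity, specificity and accuracy (in %) of a binary
    segmentation mask against the ground truth, flattened to 0/1 lists."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))  # true foreground
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))  # true background
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    sens = 100 * tp / (tp + fn)          # % sensitivity
    spec = 100 * tn / (tn + fp)          # % specificity
    acc = 100 * (tp + tn) / len(pred)    # % accuracy
    return sens, spec, acc
```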
The method combines supervised learning with low-level image features to learn a new feature descriptor: it uses a generalized low-rank matrix for dimensionality reduction, applies manifold regularization as a supervision term for constraint, and obtains a low-dimensional, highly discriminative feature descriptor through iterative optimization. Compared with traditional hand-crafted descriptors, this descriptor is obtained by supervised learning, requires no manual selection or design, has stronger descriptive power, and yields more accurate segmentation results.
Those skilled in the art will appreciate that the modules or steps of the present invention described above may be implemented with general-purpose computing devices; alternatively, they may be implemented with program code executable by computing devices, so that they may be stored in storage devices and executed by computing devices, fabricated separately as individual integrated circuit modules, or fabricated with multiple modules or steps combined into a single integrated circuit module. The present invention is not limited to any specific combination of hardware and software.
Although the embodiments of the present invention have been described above with reference to the accompanying drawings, they do not limit the scope of the present invention; those skilled in the art should understand that various modifications and variations may be made, without inventive effort, on the basis of the technical solution of the present invention.
Claims (8)
1. A classification model construction method for segmentation of a macular lesion region of a fundus image is characterized by comprising the following steps of:
step 1: selecting a plurality of fundus images, carrying out graying processing on the fundus images to obtain a plurality of gray level images, and respectively sampling the foreground and the background of the gray level images to obtain samples;
step 2: obtaining a conversion matrix by adopting a generalized low-rank approximation method, and carrying out dimensionality reduction processing on a sample based on the conversion matrix to obtain a low-rank approximation matrix of the sample;
and step 3: adding label information into the low-rank approximate matrix of the sample as supervision, and constructing a manifold regularization item based on the low-rank approximate matrix and the label information;
and 4, step 4: constructing an objective function by combining a generalized low-rank approximation method and the manifold regularization item, and solving the objective function by adopting an iterative optimization method to obtain an optimal conversion matrix and an optimal low-rank approximation matrix of the sample;
the step 4 specifically includes:
step 401: combining the generalized low-rank approximation method and the regularization term to construct an objective function:

min over U, V and {A_i} of Σ_{i=1}^{n} ||S_i − U A_i V^T||_F^2 + γ Σ_{i,j} ||A_i − A_j||_F^2 M_{i,j}, subject to U^T U = I_{l1} and V^T V = I_{l2},

where ||·||_F represents the Frobenius norm, n represents the number of training samples, S_i represents the i-th training sample, A_i and A_j represent the low-rank approximation matrices of the i-th and j-th samples respectively, U and V represent the two transformation matrices, I_{l1} and I_{l2} represent identity matrices, and {A_i} represents the sample matrices of the low-rank representation; L ∈ {0,1} represents a sample label; the manifold regularization term is Σ_{i,j} ||A_i − A_j||_F^2 M_{i,j}, where M_{i,j} represents the similarity between training samples i and j and is defined by a parameter α and the labels L_i and L_j of training samples i and j: if a training sample belongs to the foreground its label L is 1, and if it belongs to the background its label L is 0; γ ∈ (0, ∞) represents a parameter;
step 402: solving for the optimal solutions U, V and {A_i} by an iterative optimization method, which specifically comprises:
the objective function is rewritten as:
tr denotes the trace of a matrix, and a superscript T denotes the transpose of a matrix: U^T represents the transpose of the transformation matrix U, and V^T represents the transpose of the transformation matrix V;
given an initial V_0 = (I_0, 0)^T, where I_0 is an identity matrix, the optimal U is obtained through the following formula:
where X_U represents a matrix computed from the training samples and the current V;
only when U consists of the eigenvectors corresponding to the l1 largest eigenvalues of the matrix X_U does the formula reach its maximum value and yield the optimal solution; the optimal U calculated by the formula is then used to solve for the optimal V:
only when V consists of the eigenvectors corresponding to the l2 largest eigenvalues of the matrix X_V does the formula reach its maximum value and yield the optimal solution;
based on the calculated V, U is updated by computing the eigenvectors of the matrix X_U; this process is repeated until convergence, finally yielding the optimal U, V and {A_i};
and 5: and constructing a classification model based on the optimal low-rank approximation matrix and the label information.
2. The method for constructing a classification model for segmentation of macular degeneration areas of fundus images according to claim 1, wherein the step 1 specifically comprises:
step 101: selecting fundus images containing different types and sizes of macular regions from the STARE data set, and carrying out gray processing on the fundus images;
step 102: manually marking the positions of the foreground point and the background point to be used as image marks;
step 103: and respectively sampling the foreground and the background according to the image marks to obtain samples.
3. The method for constructing a classification model for segmentation of macular degeneration areas of fundus images according to claim 1, wherein the step 2 specifically comprises:
step 201: constructing an optimization problem to express the original generalized low-rank approximation problem; the optimization problem minimizes the total reconstruction error over the set of input matrices, yielding two transformation matrices U and V and the set of low-rank representation matrices {A_i}; the formula is as follows:

min over U, V and {A_i} of Σ_{i=1}^{n} ||S_i − U A_i V^T||_F^2, subject to U^T U = I_{l1} and V^T V = I_{l2},

where ||·||_F represents the Frobenius norm, n represents the number of training samples, S_i represents the i-th training sample, A_i represents the low-rank approximation matrix corresponding to S_i, U and V represent the two transformation matrices, and I_{l1} and I_{l2} represent identity matrices;
step 202: solving the transformation matrices U and V, and using A_i = U^T S_i V to approximately represent the sample S_i (so that S_i ≈ U A_i V^T).
4. The method for constructing a classification model for segmentation of macular degeneration areas of fundus images according to claim 1, wherein the step 3 specifically comprises:
step 301: constructing a similarity matrix M, the elements M of the matrixi,jRepresenting the similarity between training samples i and j;
step 302: adding a sample label L ∈ {0,1} as supervision to the resulting low-rank representation sample matrices {A_i}, mining the geometric shape of the data distribution, and constructing the manifold regularization term Σ_{i,j} ||A_i − A_j||_F^2 M_{i,j}, wherein A_i and A_j represent the low-rank approximation matrices of the i-th and j-th samples respectively; this term reflects the manifold space structure of the training samples;
the method for constructing the similarity matrix M in step 301 includes: constructing a graph structure from n points, each point corresponding to a sample, and connecting points i and j if i is among the k nearest neighbors of j or j is among the k nearest neighbors of i; M_{i,j} is then expressed as:
where α represents a parameter, and L_i and L_j represent the labels of training samples i and j respectively; if a training sample belongs to the foreground, its label L is 1, and if it belongs to the background, its label L is 0.
5. The method for constructing a classification model for segmentation of macular degeneration areas of fundus images according to claim 1, wherein the step 5 specifically comprises:
step 501: vectorizing the optimal low-rank approximate matrix of the sample to obtain a characteristic vector;
step 502: and training the SVM classifier by using the feature vectors and the corresponding labels to obtain the trained classifier.
6. A fundus image macular degeneration area segmentation method based on the classification model construction method for fundus image macular degeneration area segmentation of any one of claims 1-5, comprising: step 1: classifying the test image based on the classification model to obtain foreground points and background points of the test image; step 2: and taking the region where the foreground point is as a segmentation result.
7. A computer device for classification model construction for fundus image macular degeneration region segmentation, comprising: memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps of:
receiving selection of a user on an eye fundus training image, and carrying out graying processing on the training image to obtain a gray image; respectively sampling the foreground and the background of the gray level image to obtain samples;
obtaining a conversion matrix by adopting a generalized low-rank approximation method, and carrying out dimensionality reduction processing on a sample based on the conversion matrix to obtain a low-rank approximation matrix of the sample;
adding label information into the low-rank approximate matrix of the sample as supervision, and constructing a manifold regularization item based on the low-rank approximate matrix and the label information;
constructing an objective function by combining a generalized low-rank approximation method and the manifold regularization item, and solving the objective function by adopting an iterative optimization method to obtain an optimal conversion matrix and an optimal low-rank approximation matrix of the sample;
constructing a classification model based on the optimal low-rank approximation matrix and the label information;
the specific steps of constructing an objective function by combining the generalized low-rank approximation method and the manifold regularization item and solving the objective function by adopting an iterative optimization method to obtain an optimal conversion matrix and an optimal low-rank approximation matrix of the sample comprise:
combining the generalized low-rank approximation method and the regularization term to construct an objective function:

min over U, V and {A_i} of Σ_{i=1}^{n} ||S_i − U A_i V^T||_F^2 + γ Σ_{i,j} ||A_i − A_j||_F^2 M_{i,j}, subject to U^T U = I_{l1} and V^T V = I_{l2},

where ||·||_F represents the Frobenius norm, n represents the number of training samples, S_i represents the i-th training sample, A_i and A_j represent the low-rank approximation matrices of the i-th and j-th samples respectively, U and V represent the two transformation matrices, I_{l1} and I_{l2} represent identity matrices, and {A_i} represents the sample matrices of the low-rank representation; L ∈ {0,1} represents a sample label; the manifold regularization term is Σ_{i,j} ||A_i − A_j||_F^2 M_{i,j}, where M_{i,j} represents the similarity between training samples i and j and is defined by a parameter α and the labels L_i and L_j of training samples i and j: if a training sample belongs to the foreground its label L is 1, and if it belongs to the background its label L is 0; γ ∈ (0, ∞) represents a parameter;
solving for the optimal solutions U, V and {A_i} by an iterative optimization method, which specifically comprises:
the objective function is rewritten as:
tr denotes the trace of a matrix, and a superscript T denotes the transpose of a matrix: U^T represents the transpose of the transformation matrix U, and V^T represents the transpose of the transformation matrix V;
given an initial V_0 = (I_0, 0)^T, where I_0 is an identity matrix, the optimal U is obtained through the following formula:
where X_U represents a matrix computed from the training samples and the current V;
only when U consists of the eigenvectors corresponding to the l1 largest eigenvalues of the matrix X_U does the formula reach its maximum value and yield the optimal solution; the optimal U calculated by the formula is then used to solve for the optimal V:
only when V consists of the eigenvectors corresponding to the l2 largest eigenvalues of the matrix X_V does the formula reach its maximum value and yield the optimal solution;
8. a computer-readable storage medium on which a computer program for classification model construction for macular lesion region segmentation of a fundus image is stored, the program realizing the following steps when executed by a processor:
receiving selection of a user on an eye fundus training image, and carrying out graying processing on the training image to obtain a gray image; respectively sampling the foreground and the background of the gray level image to obtain samples;
obtaining a conversion matrix by adopting a generalized low-rank approximation method, and carrying out dimensionality reduction processing on a sample based on the conversion matrix to obtain a low-rank approximation matrix of the sample;
adding label information into the low-rank approximate matrix of the sample as supervision, and constructing a manifold regularization item based on the low-rank approximate matrix and the label information;
constructing an objective function by combining a generalized low-rank approximation method and the manifold regularization item, and solving the objective function by adopting an iterative optimization method to obtain an optimal conversion matrix and an optimal low-rank approximation matrix of the sample;
constructing a classification model based on the optimal low-rank approximation matrix and the label information;
the specific steps of constructing an objective function by combining the generalized low-rank approximation method and the manifold regularization item and solving the objective function by adopting an iterative optimization method to obtain an optimal conversion matrix and an optimal low-rank approximation matrix of the sample comprise:
combining the generalized low-rank approximation method and the regularization term to construct an objective function:

min over U, V and {A_i} of Σ_{i=1}^{n} ||S_i − U A_i V^T||_F^2 + γ Σ_{i,j} ||A_i − A_j||_F^2 M_{i,j}, subject to U^T U = I_{l1} and V^T V = I_{l2},

where ||·||_F represents the Frobenius norm, n represents the number of training samples, S_i represents the i-th training sample, A_i and A_j represent the low-rank approximation matrices of the i-th and j-th samples respectively, U and V represent the two transformation matrices, I_{l1} and I_{l2} represent identity matrices, and {A_i} represents the sample matrices of the low-rank representation; L ∈ {0,1} represents a sample label; the manifold regularization term is Σ_{i,j} ||A_i − A_j||_F^2 M_{i,j}, where M_{i,j} represents the similarity between training samples i and j and is defined by a parameter α and the labels L_i and L_j of training samples i and j: if a training sample belongs to the foreground its label L is 1, and if it belongs to the background its label L is 0; γ ∈ (0, ∞) represents a parameter;
solving for the optimal solutions U, V and {A_i} by an iterative optimization method, which specifically comprises:
the objective function is rewritten as:
tr denotes the trace of a matrix, and a superscript T denotes the transpose of a matrix: U^T represents the transpose of the transformation matrix U, and V^T represents the transpose of the transformation matrix V;
given an initial V_0 = (I_0, 0)^T, where I_0 is an identity matrix, the optimal U is obtained through the following formula:
where X_U represents a matrix computed from the training samples and the current V;
only when U consists of the eigenvectors corresponding to the l1 largest eigenvalues of the matrix X_U does the formula reach its maximum value and yield the optimal solution; the optimal U calculated by the formula is then used to solve for the optimal V:
only when V consists of the eigenvectors corresponding to the l2 largest eigenvalues of the matrix X_V does the formula reach its maximum value and yield the optimal solution.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710661951.7A CN107437252B (en) | 2017-08-04 | 2017-08-04 | Method and device for constructing classification model for macular lesion region segmentation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107437252A CN107437252A (en) | 2017-12-05 |
CN107437252B true CN107437252B (en) | 2020-05-29 |
Family
ID=60459855
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710661951.7A Active CN107437252B (en) | 2017-08-04 | 2017-08-04 | Method and device for constructing classification model for macular lesion region segmentation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107437252B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110032704B (en) * | 2018-05-15 | 2023-06-09 | 腾讯科技(深圳)有限公司 | Data processing method, device, terminal and storage medium |
CN108717696B (en) * | 2018-05-16 | 2022-04-22 | 上海鹰瞳医疗科技有限公司 | Yellow spot image detection method and equipment |
CN109199322B (en) * | 2018-08-31 | 2020-12-04 | 福州依影健康科技有限公司 | Yellow spot detection method and storage device |
CN110675339A (en) * | 2019-09-16 | 2020-01-10 | 山东师范大学 | Image restoration method and system based on edge restoration and content restoration |
CN112435281B (en) * | 2020-09-23 | 2022-06-24 | 山东师范大学 | Multispectral fundus image analysis method and system based on counterstudy |
CN113222998B (en) * | 2021-04-13 | 2022-05-31 | 天津大学 | Semi-supervised image semantic segmentation method and device based on self-supervised low-rank network |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105427296B (en) * | 2015-11-11 | 2018-04-06 | 北京航空航天大学 | A kind of thyroid gland focus image-recognizing method based on ultrasonoscopy low rank analysis |
CN106530283A (en) * | 2016-10-20 | 2017-03-22 | 北京工业大学 | SVM (support vector machine)-based medical image blood vessel recognition method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107437252B (en) | Method and device for constructing classification model for macular lesion region segmentation | |
US10706333B2 (en) | Medical image analysis method, medical image analysis system and storage medium | |
CN109166130B (en) | Image processing method and image processing device | |
US10991093B2 (en) | Systems, methods and media for automatically generating a bone age assessment from a radiograph | |
Zilly et al. | Glaucoma detection using entropy sampling and ensemble learning for automatic optic cup and disc segmentation | |
Yi et al. | Unsupervised and semi-supervised learning with categorical generative adversarial networks assisted by wasserstein distance for dermoscopy image classification | |
Kumar et al. | Breast cancer classification of image using convolutional neural network | |
Yu et al. | Image quality classification for DR screening using deep learning | |
Zhang et al. | Medical image synthesis with generative adversarial networks for tissue recognition | |
Sridar et al. | Decision fusion-based fetal ultrasound image plane classification using convolutional neural networks | |
CN109344889B (en) | Brain disease classification apparatus, user terminal, and computer-readable storage medium | |
KR20210048523A (en) | Image processing method, apparatus, electronic device and computer-readable storage medium | |
CN112750531A (en) | Automatic inspection system, method, equipment and medium for traditional Chinese medicine | |
Gonçalves et al. | Carcass image segmentation using CNN-based methods | |
CN113298742A (en) | Multi-modal retinal image fusion method and system based on image registration | |
Tasdizen et al. | Improving the robustness of convolutional networks to appearance variability in biomedical images | |
Yadav et al. | Application of deep convulational neural network in medical image classification | |
Wang et al. | Optic disc detection based on fully convolutional neural network and structured matrix decomposition | |
Li et al. | HEp-2 specimen classification with fully convolutional network | |
CN110472694A (en) | A kind of Lung Cancer Images pathological classification method and device | |
CN112801238B (en) | Image classification method and device, electronic equipment and storage medium | |
Johnson et al. | A study on eye fixation prediction and salient object detection in supervised saliency | |
Sadeghzadeh et al. | MLMSign: Multi-lingual multi-modal illumination-invariant sign language recognition | |
Liu et al. | Vessel segmentation using principal component based threshold algorithm | |
Yasaswini Paladugu | End-To-End Gender Determination By Images Of An Human Eye |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||