CN107437252A - Disaggregated model construction method and equipment for ARM region segmentation - Google Patents
- Publication number: CN107437252A
- Application number: CN201710661951.7A
- Authority: CN (China)
- Prior art keywords: matrix, sample, low-rank
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Eye Examination Apparatus (AREA)
Abstract
The invention discloses a classification model construction method for ARM (age-related maculopathy) region segmentation in fundus images, comprising the following steps: selecting several fundus images and converting them to grayscale to obtain several grayscale images; sampling the foreground and background of each grayscale image to obtain samples; obtaining transformation matrices by the generalized low-rank approximation method, and reducing the dimensionality of the samples with the transformation matrices to obtain low-rank approximation matrices of the samples; adding label information to the low-rank approximation matrices of the samples as supervision, and constructing a manifold regularization term; combining the generalized low-rank approximation objective with the manifold regularization term to construct an objective function, and solving the objective function by iterative optimization to obtain the optimal transformation matrices and the optimal low-rank approximation matrices of the samples; and constructing a classification model from the optimal low-rank approximation matrices and the label information. The classification model of the invention can extract feature descriptors that are both low-dimensional and highly discriminative, and can improve segmentation accuracy.
Description
Technical field
The present invention relates to the field of medical image processing, and in particular to a classification model construction method and device for ARM region segmentation in fundus images, and to an image segmentation method.
Background art
The eyes are the most important organ through which humans obtain information. The macula lutea, located at the rear of the eyeball, is a vital tissue with which people perceive the surrounding environment and objects. Lesions at this site can cause visual impairment or even blindness, and are one of the major causes of blindness in the elderly. When physicians diagnose the ARM region (drusen) in fundus images, the process suffers from low accuracy, poor repeatability, and strong subjectivity. There is therefore an urgent need for the application of and research on ARM region segmentation techniques, to meet the clinical demands of examination, diagnosis, and treatment of macular lesions.
Most existing ARM segmentation methods are feature-based. The features used by these methods generally fall into two kinds: one combines multiple low-level image features into a new feature; the other uses relatively successful hand-crafted feature descriptors. Both kinds extract low-level image content. Their selection and design are time-consuming and labour-intensive, rely heavily on expert knowledge, depend largely on experience and even luck, and their tuning takes a substantial amount of time. Moreover, the robustness and applicability of these methods are limited.
Regarding the learnability of features, learnable feature algorithms fall broadly into two classes. One class designs new algorithms on the basis of traditional hand-crafted descriptors, such as SIFT, LBP, and HOG, to obtain new features. The other uses prior knowledge to parameterize existing hand-crafted descriptors to obtain new features. Such algorithms have been shown to achieve good results in research fields such as image classification and face recognition. Supervised learning trains an optimal model from existing training samples, uses that model to map every input to a corresponding output, and makes a simple decision on the output, thereby gaining the ability to classify unknown data. Compared with traditional rule-based methods, supervised learning models have significant advantages in both representational capacity and effectiveness.
However, supervision-based feature learning has seen relatively little application in fundus image segmentation, and the hand-designed features used by other fundus image segmentation methods lack strong discriminative and descriptive power, so they cannot yield sufficiently accurate segmentation results. How to learn features with stronger expressive power remains a focus and a difficulty of current research. Therefore, how to obtain more discriminative features and achieve accurate, fast segmentation of the macular region is a technical problem that urgently needs to be solved by those skilled in the art.
Summary of the invention
To solve the above problems, the present invention proposes a supervised-descriptor-based ARM region segmentation method for fundus images. The method learns new features by combining supervised learning with low-level image features. The chosen images are converted to grayscale and sampled. Based on the grayscale features of the image samples, the generalized low-rank approximation method first reduces the dimensionality of the samples, the label information of the samples is added as a supervision term, and the low-rank approximate representation of the samples is finally obtained. After vectorization, these representations are fed to a classifier as features, and the classifier is obtained by training. The trained classifier then classifies the pixels of a test image, thereby completing a classification-based segmentation. The features obtained by this supervised feature-learning method are more discriminative and can therefore describe lesion regions better, yielding accurate segmentation results.
To achieve these goals, the present invention adopts the following technical scheme:
A classification model construction method for ARM region segmentation in fundus images comprises the following steps:
Step 1: selecting several fundus images, converting them to grayscale to obtain several grayscale images, and sampling the foreground and background of each grayscale image to obtain samples;
Step 2: obtaining transformation matrices by the generalized low-rank approximation method, and reducing the dimensionality of the samples with the transformation matrices to obtain low-rank approximation matrices of the samples;
Step 3: adding label information to the low-rank approximation matrices of the samples as supervision, and constructing a manifold regularization term from the low-rank approximation matrices and the label information;
Step 4: combining the generalized low-rank approximation objective with the manifold regularization term to construct an objective function, and solving the objective function by iterative optimization to obtain the optimal transformation matrices and the optimal low-rank approximation matrices of the samples;
Step 5: constructing a classification model from the optimal low-rank approximation matrices and the label information.
Further, step 1 specifically comprises:
Step 101: selecting fundus images containing macular regions of different types and sizes from the STARE data set, and converting them to grayscale;
Step 102: manually marking foreground and background pixel positions as image labels;
Step 103: according to the image labels, sampling the foreground and background separately to obtain samples.
Further, step 2 specifically comprises:
Step 201: constructing an optimization problem that expresses the original generalized low-rank approximation problem. The problem minimizes the total reconstruction error of the principal components of the input matrix group and yields two transformation matrices U and V and the low-rank representations A_i:

min_{U, V, {A_i}} Σ_{i=1}^{n} ||S_i − U A_i V^T||_F^2,  subject to U^T U = I and V^T V = I,

where ||·||_F denotes the Frobenius norm, n is the number of training samples, S_i is the i-th training sample, A_i is the low-rank approximation matrix of S_i, U and V are the two transformation matrices, and I denotes the identity matrix;
Step 202: solving for the transformation matrices U and V, and approximating each sample S_i by U A_i V^T with A_i = U^T S_i V.
Further, step 3 specifically comprises:
Step 301: building a similarity matrix M whose element M_ij represents the similarity between training samples i and j;
Step 302: adding the sample labels L ∈ {1, 0} to the obtained low-rank representations {A_i} as supervision, and exploiting the geometric structure of the data distribution to build the manifold regularization term

R = Σ_{i,j} ||A_i − A_j||_F^2 M_ij,

where A_i and A_j are the low-rank approximation matrices of the i-th and j-th samples; this term reflects the manifold structure of the training samples.
The similarity matrix M of step 301 is constructed as follows: a graph is built with n points, each corresponding to one sample; points i and j are connected if i is among the k nearest neighbours of j or j is among the k nearest neighbours of i. M_ij is then

M_ij = α if points i and j are connected and L_i = L_j, and M_ij = 0 otherwise,

where α is a parameter, and L_i and L_j are the labels of training samples i and j: the label L is 1 if the training sample belongs to the foreground and 0 if it belongs to the background.
Further, step 4 specifically comprises:
Step 401: combining the generalized low-rank approximation objective with the regularization term to construct the objective function

min_{U, V, {A_i}} Σ_i ||S_i − U A_i V^T||_F^2 + γ Σ_{i,j} ||A_i − A_j||_F^2 M_ij,  subject to U^T U = I and V^T V = I,

where γ ∈ (0, ∞) is a parameter;
Step 402: solving for the optimal U, V and {A_i} by iterative optimization. Substituting A_i = U^T S_i V and dropping constant terms rewrites the objective as a trace maximization. Given an initial V_0 = (I_0, 0)^T, where I_0 is an identity matrix, the optimal U is found from

max_U Tr(U^T X_U U),  with X_U = Σ_i S_i V V^T S_i^T − γ Σ_{i,j} M_ij (S_i − S_j) V V^T (S_i − S_j)^T.

The maximum is reached only when the columns of U are the eigenvectors corresponding to the l_1 largest eigenvalues of X_U, giving the optimal solution. With the U so computed, the optimal V is found from

max_V Tr(V^T X_V V),  with X_V = Σ_i S_i^T U U^T S_i − γ Σ_{i,j} M_ij (S_i − S_j)^T U U^T (S_i − S_j).

The maximum is reached only when the columns of V are the eigenvectors corresponding to the l_2 largest eigenvalues of X_V, giving the optimal solution.
Based on the V thus computed, U is updated by recomputing the eigenvectors of X_U; this process is repeated until convergence, finally yielding the optimal U, V and {A_i}.
Further, step 5 specifically comprises:
Step 501: performing a vectorization operation on the optimal low-rank approximation matrices of the samples to obtain feature vectors;
Step 502: training an SVM classifier with the feature vectors and the corresponding labels to obtain the trained classifier.
Further, step 6 (segmentation of a test image) specifically comprises:
Step 601: converting the test image to grayscale and sampling it;
Step 602: reducing the dimensionality of the test-image samples with the optimal transformation matrices to obtain the optimal low-rank approximation matrices of the test image;
Step 603: feeding the optimal low-rank approximation matrices of the test image to the SVM classifier to obtain classification results, and thereby the segmentation result.
According to a second aspect of the present invention, there is also provided a fundus image ARM region segmentation method based on the classification model described above, characterized by comprising: Step 1: classifying a test image with the classification model to obtain the foreground and background points of the test image; Step 2: taking the region formed by the foreground points as the segmentation result.
According to a third aspect of the present invention, there is also provided a computer device for constructing a classification model for ARM region segmentation in fundus images, comprising: a memory, a processor, and a computer program stored on the memory and runnable on the processor, characterized in that the processor, when executing the program, implements the following steps:
receiving a user's selection of fundus training images, and converting the training images to grayscale to obtain grayscale images; sampling the foreground and background of each grayscale image to obtain samples;
obtaining transformation matrices by the generalized low-rank approximation method, and reducing the dimensionality of the samples with the transformation matrices to obtain low-rank approximation matrices of the samples;
adding label information to the low-rank approximation matrices of the samples as supervision, and constructing a manifold regularization term from the low-rank approximation matrices and the label information;
combining the generalized low-rank approximation objective with the manifold regularization term to construct an objective function, and solving the objective function by iterative optimization to obtain the optimal transformation matrices and the optimal low-rank approximation matrices of the samples;
constructing a classification model from the optimal low-rank approximation matrices and the label information.
According to a fourth aspect of the present invention, there is also provided a computer-readable storage medium on which a computer program for constructing a classification model for ARM region segmentation in fundus images is stored, the program, when executed by a processor, implementing the following steps:
receiving a user's selection of fundus training images, and converting the training images to grayscale to obtain grayscale images; sampling the foreground and background of each grayscale image to obtain samples;
obtaining transformation matrices by the generalized low-rank approximation method, and reducing the dimensionality of the samples with the transformation matrices to obtain low-rank approximation matrices of the samples;
adding label information to the low-rank approximation matrices of the samples as supervision, and constructing a manifold regularization term from the low-rank approximation matrices and the label information;
combining the generalized low-rank approximation objective with the manifold regularization term to construct an objective function, and solving the objective function by iterative optimization to obtain the optimal transformation matrices and the optimal low-rank approximation matrices of the samples;
constructing a classification model from the optimal low-rank approximation matrices and the label information.
Beneficial effects of the present invention:
1. The present invention combines supervised learning with low-level image features to learn new feature descriptors. Generalized low-rank matrices are used for dimensionality reduction, manifold regularization serves as a supervision constraint, and iterative optimization yields feature descriptors that are both low-dimensional and highly discriminative. Compared with traditional hand-crafted descriptors, these descriptors are obtained by supervised learning, require no manual selection or design, and have stronger descriptive power.
2. In practice, applying these descriptors to the segmentation of the macular region in fundus images yields more accurate segmentation results. The segmentation results can be used to quantify the ARM region, helping physicians make more accurate diagnoses.
Brief description of the drawings
Fig. 1 is a flow chart of the fundus image macular region segmentation method of the present invention;
Fig. 2 is a schematic diagram of the sampling in this method, showing the whole image, foreground samples, and background samples;
Fig. 3 shows the influence of different sample sizes on classification accuracy;
Fig. 4 shows segmentation results of the present invention on three fundus images of different types;
Fig. 5 shows ROC curves of the present invention and two other methods on the above three fundus image segmentations.
Detailed description of embodiments
The invention is further described below with reference to the accompanying drawings and embodiments.
It should be noted that the following detailed description is exemplary and intended to provide further explanation of the application. Unless otherwise indicated, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the application belongs.
It should be noted that the terms used herein are merely for describing embodiments and are not intended to limit the exemplary embodiments of the application. As used herein, unless the context clearly indicates otherwise, the singular forms are also intended to include the plural forms; in addition, it should be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.
The features of the embodiments in the application may be combined with each other without conflict.
Embodiment one
A classification model construction method for ARM region segmentation in fundus images, as shown in Fig. 1, comprises the following steps:
Step 1: selecting several fundus images, converting them to grayscale to obtain several grayscale images, and sampling the foreground and background of each grayscale image to obtain samples;
Step 2: obtaining transformation matrices by the generalized low-rank approximation method, and reducing the dimensionality of the samples with the transformation matrices to obtain low-rank approximation matrices of the samples;
Step 3: adding label information to the low-rank approximation matrices of the samples as supervision, and constructing a manifold regularization term from the low-rank approximation matrices and the label information;
Step 4: combining the generalized low-rank approximation objective with the manifold regularization term to construct an objective function, and solving it by iterative optimization to obtain the optimal transformation matrices and the optimal low-rank approximation matrices of the samples;
Step 5: constructing a classification model from the optimal low-rank approximation matrices and the label information.
Step 1 specifically comprises:
Step 101: selecting fundus images containing macular regions of different types and sizes from the STARE data set, and converting them to grayscale;
Step 102: manually marking foreground and background pixel positions as image labels;
Step 103: according to the image labels, sampling the foreground and background separately to obtain samples.
Sampling takes the k × k square neighbourhood centred on a pixel as the sample. In this embodiment, 5 representative fundus images were chosen; foreground and background samples each account for n/2 of the n = 10000 training samples, as shown in Fig. 2. Experiments show that classification results are best when the sample size is k = 15, as shown in Fig. 3.
Step 2 specifically comprises:
Step 201: constructing an optimization problem that expresses the original generalized low-rank approximation problem. The problem minimizes the total reconstruction error of the principal components of the input matrix group and yields two transformation matrices U and V and the low-rank representations A_i:

min_{U, V, {A_i}} Σ_{i=1}^{n} ||S_i − U A_i V^T||_F^2,  subject to U^T U = I and V^T V = I,

where ||·||_F denotes the Frobenius norm, n is the number of training samples, S_i is the i-th training sample, A_i is the low-rank approximation matrix of S_i, U and V are the two transformation matrices, and I denotes the identity matrix;
Step 202: once the transformation matrices U and V are obtained, each training sample S_i can be approximated by U A_i V^T with A_i = U^T S_i V.
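The projection of step 202 can be sketched as follows (random orthonormal U and V stand in for the learned transformation matrices; shapes are illustrative):

```python
import numpy as np

def project(S, U, V):
    """Low-rank representation A_i = U^T S_i V for each sample matrix S_i."""
    return np.array([U.T @ Si @ V for Si in S])

def reconstruct(A, U, V):
    """Approximate each sample back: S_i ~= U A_i V^T."""
    return np.array([U @ Ai @ V.T for Ai in A])

# Illustration: 15x15 patches reduced to 5x5 low-rank features
rng = np.random.default_rng(0)
S = rng.standard_normal((4, 15, 15))
U, _ = np.linalg.qr(rng.standard_normal((15, 5)))  # columns orthonormal: U^T U = I
V, _ = np.linalg.qr(rng.standard_normal((15, 5)))  # columns orthonormal: V^T V = I
A = project(S, U, V)
print(A.shape)  # (4, 5, 5)
```

Each 15 × 15 patch is thus compressed to a 5 × 5 matrix while both row and column structure are preserved, which is the point of GLRAM over vectorized PCA.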
Step 3 specifically comprises:
Step 301: building a similarity matrix M whose element M_ij represents the similarity between training samples i and j;
Step 302: adding the sample labels L ∈ {1, 0} to the obtained low-rank representations {A_i} as supervision, and exploiting the geometric structure of the data distribution to build the manifold regularization term

R = Σ_{i,j} ||A_i − A_j||_F^2 M_ij,

where A_i and A_j are the low-rank approximation matrices of the i-th and j-th samples; this term reflects the manifold structure of the training samples.
The similarity matrix M of step 301 is constructed as follows: a graph is built with n points, each corresponding to one training sample; points i and j are connected if i is among the k nearest neighbours of j or j is among the k nearest neighbours of i. M_ij is then

M_ij = α if points i and j are connected and L_i = L_j, and M_ij = 0 otherwise,

where α is a parameter, and L_i and L_j are the labels of training samples i and j: the label L is 1 if the training sample belongs to the foreground and 0 if it belongs to the background.
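A minimal sketch of the similarity-matrix construction of step 301. The exact M_ij weighting formula is not preserved in the text, so this version assumes weight α for connected same-label pairs and 0 otherwise; the brute-force pairwise distances are for toy scale only:

```python
import numpy as np

def similarity_matrix(feats, labels, k=1, alpha=1.0):
    """k-NN graph with label supervision: M_ij = alpha when i and j are
    neighbours with equal labels, else 0 (an assumed weighting)."""
    n = len(feats)
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)  # squared distances
    M = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:k + 1]:  # k nearest neighbours, self excluded
            if labels[i] == labels[j]:
                M[i, j] = M[j, i] = alpha     # symmetric: i~j or j~i connects them
    return M

feats = np.array([[0.0], [0.1], [5.0], [5.1]])  # two tight clusters
labels = np.array([1, 1, 0, 0])
M = similarity_matrix(feats, labels, k=1)
print(M[0, 1], M[2, 3], M[0, 2])  # 1.0 1.0 0.0
```

The resulting M then weights the pairwise penalty Σ ||A_i − A_j||² M_ij, pulling same-label neighbours together in the low-rank feature space.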
Step 4 specifically comprises:
Step 401: merging the formulas of steps 2 and 3 to obtain

min_{U, V, {A_i}} Σ_i ||S_i − U A_i V^T||_F^2 + γ Σ_{i,j} ||A_i − A_j||_F^2 M_ij,  subject to U^T U = I and V^T V = I,   (4)

where γ ∈ (0, ∞) is a parameter; in this embodiment γ takes the value 1.
Step 402: solving for the optimal U, V and {A_i} by iterative optimization.
With orthonormal U and V, the first term expands as ||S_i − U A_i V^T||_F^2 = ||S_i||_F^2 − 2 Tr(A_i^T U^T S_i V) + ||A_i||_F^2. Since ||S_i||_F^2 is a constant, deleting it has no effect. The remaining expression reaches its minimum over A_i only when A_i = U^T S_i V; substituting this A_i into formula (4) and deleting the constant terms gives the final optimization problem, which can be rewritten as the trace maximization

max_U Tr(U^T X_U U),  with X_U = Σ_i S_i V V^T S_i^T − γ Σ_{i,j} M_ij (S_i − S_j) V V^T (S_i − S_j)^T.

This is solved by iterative optimization: given an initial V_0 = (I_0, 0)^T, where I_0 is an identity matrix, U is obtained by maximizing Tr(U^T X_U U); the maximum is reached only when the columns of U are the eigenvectors corresponding to the l_1 largest eigenvalues of X_U, giving the optimal solution. With the optimal U so computed, V is obtained by maximizing Tr(V^T X_V V), where

X_V = Σ_i S_i^T U U^T S_i − γ Σ_{i,j} M_ij (S_i − S_j)^T U U^T (S_i − S_j).

The maximum is reached only when the columns of V are the eigenvectors corresponding to the l_2 largest eigenvalues of X_V, giving the optimal solution. Based on the V thus computed, U is updated by recomputing the eigenvectors of X_U; this process is repeated until convergence, finally yielding the optimal U, V and {A_i}.
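The alternating eigen-update of step 402 might be sketched as follows (a toy-scale illustration: the O(n²) manifold sums, the fixed iteration count instead of a convergence test, and the variable names are simplifications, not the patent's implementation):

```python
import numpy as np

def solve_uv(S, M, l1, l2, gamma=1.0, n_iter=10):
    """Alternate between eigen-updates of U and V for the combined
    GLRAM + manifold objective, then return A_i = U^T S_i V."""
    n, r, c = S.shape
    V = np.eye(c)[:, :l2]                      # V0 = (I0, 0)^T, as in the text
    U = np.eye(r)[:, :l1]
    for _ in range(n_iter):
        # X_U = sum_i S_i V V^T S_i^T - gamma * sum_ij M_ij D_ij V V^T D_ij^T
        XU = sum(Si @ V @ V.T @ Si.T for Si in S)
        XU -= gamma * sum(M[i, j] * (S[i] - S[j]) @ V @ V.T @ (S[i] - S[j]).T
                          for i in range(n) for j in range(n))
        U = np.linalg.eigh(XU)[1][:, -l1:]     # top-l1 eigenvectors maximize the trace
        XV = sum(Si.T @ U @ U.T @ Si for Si in S)
        XV -= gamma * sum(M[i, j] * (S[i] - S[j]).T @ U @ U.T @ (S[i] - S[j])
                          for i in range(n) for j in range(n))
        V = np.linalg.eigh(XV)[1][:, -l2:]     # top-l2 eigenvectors
    A = np.array([U.T @ Si @ V for Si in S])   # A_i = U^T S_i V
    return U, V, A

rng = np.random.default_rng(0)
S = rng.standard_normal((6, 10, 10))
M = np.zeros((6, 6))                           # empty graph -> plain GLRAM
U, V, A = solve_uv(S, M, l1=3, l2=3)
print(A.shape)  # (6, 3, 3)
```

Because `numpy.linalg.eigh` returns eigenvalues in ascending order, the last l1 (resp. l2) columns are the eigenvectors of the largest eigenvalues, exactly the maximizers of the trace objectives above.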
Step 5 specifically comprises:
Step 501: performing a vectorization operation on the low-rank approximation matrices {A_i} obtained in step 4 to obtain feature vectors;
Step 502: training an SVM classifier with the feature vectors and the corresponding labels to obtain the trained classifier.
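Steps 501 and 502 — vectorizing the low-rank matrices and training the SVM — might look like this (the stand-in features are synthetic, and scikit-learn's SVC is one possible SVM implementation, not named by the patent):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in 5x5 low-rank matrices A_i for 20 foreground / 20 background samples
A_fg = rng.standard_normal((20, 5, 5)) + 2.0
A_bg = rng.standard_normal((20, 5, 5)) - 2.0
X = np.vstack([A_fg, A_bg]).reshape(40, -1)   # step 501: vectorization to length-25 rows
y = np.array([1] * 20 + [0] * 20)             # step 502: labels, 1 = foreground
clf = SVC(kernel="linear").fit(X, y)
print(clf.score(X, y))  # clearly separable toy data -> 1.0
```

Row-major `reshape` is an arbitrary but consistent vectorization; any fixed flattening order works as long as training and test features use the same one.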
Embodiment two
Based on the classification model of embodiment one, this embodiment provides a fundus image ARM region segmentation method using that classification model, comprising:
Step 1: classifying a test image with the classification model to obtain the foreground and background points of the test image;
Step 2: taking the region formed by the foreground points as the segmentation result.
Step 1 specifically comprises:
converting the test image to grayscale, and scanning the whole image with a k × k sliding window to obtain samples;
reducing the dimensionality of the test-image samples with the optimal transformation matrices to obtain the optimal low-rank approximation matrices of the test image;
feeding the optimal low-rank approximation matrices of the test image to the SVM classifier to obtain classification results.
A pixel is labelled 1 if its test sample belongs to the foreground and 0 otherwise, giving the segmentation result, as shown in Fig. 4.
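The sliding-window classification of embodiment two can be sketched as follows (the stub classifier stands in for the trained SVM, and the identity-based U and V are for illustration only):

```python
import numpy as np

def segment(gray, U, V, clf, k=15):
    """Scan the test image with a k x k sliding window, project each patch with
    the learned U, V, and classify each centre pixel (1 = foreground)."""
    r = k // 2
    padded = np.pad(gray, r, mode="reflect")
    H, W = gray.shape
    feats = np.empty((H * W, U.shape[1] * V.shape[1]))
    for idx, (y, x) in enumerate((y, x) for y in range(H) for x in range(W)):
        feats[idx] = (U.T @ padded[y:y + k, x:x + k] @ V).ravel()  # low-rank feature
    return clf.predict(feats).reshape(H, W)

class _StubClassifier:          # stand-in for the SVM trained in step 502
    def predict(self, X):
        return (X.sum(axis=1) > 0).astype(int)

U = np.eye(15)[:, :3]
V = np.eye(15)[:, :3]
mask = segment(np.full((8, 8), 5.0), U, V, _StubClassifier())
print(mask.shape, int(mask.sum()))  # (8, 8) 64
```

The returned mask has one label per pixel, so the foreground region of the segmentation result is simply `mask == 1`.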
Embodiment three
Based on the above image segmentation method, this embodiment provides a computer device for constructing a classification model for ARM region segmentation in fundus images, comprising: a memory, a processor, and a computer program stored on the memory and runnable on the processor, characterized in that the processor, when executing the program, implements the following steps:
receiving a user's selection of fundus training images, and converting the training images to grayscale to obtain grayscale images; sampling the foreground and background of each grayscale image to obtain samples;
obtaining transformation matrices by the generalized low-rank approximation method, and reducing the dimensionality of the samples with the transformation matrices to obtain low-rank approximation matrices of the samples;
adding label information to the low-rank approximation matrices of the samples as supervision, and constructing a manifold regularization term from the low-rank approximation matrices and the label information;
combining the generalized low-rank approximation objective with the manifold regularization term to construct an objective function, and solving the objective function by iterative optimization to obtain the optimal transformation matrices and the optimal low-rank approximation matrices of the samples;
constructing a classification model from the optimal low-rank approximation matrices and the label information.
The sampling is based on the image labels of the foreground and background points manually marked by the user: according to the image labels, the foreground and background of each grayscale image are sampled separately to obtain samples.
Embodiment four
Based on the above image segmentation method, this embodiment provides a computer-readable storage medium on which a computer program for constructing a classification model for ARM region segmentation in fundus images is stored, characterized in that the program, when executed by a processor, implements the following steps:
receiving a user's selection of fundus training images, and converting the training images to grayscale to obtain grayscale images; sampling the foreground and background of each grayscale image to obtain samples;
obtaining transformation matrices by the generalized low-rank approximation method, and reducing the dimensionality of the samples with the transformation matrices to obtain low-rank approximation matrices of the samples;
adding label information to the low-rank approximation matrices of the samples as supervision, and constructing a manifold regularization term from the low-rank approximation matrices and the label information;
combining the generalized low-rank approximation objective with the manifold regularization term to construct an objective function, and solving the objective function by iterative optimization to obtain the optimal transformation matrices and the optimal low-rank approximation matrices of the samples;
constructing a classification model from the optimal low-rank approximation matrices and the label information.
The sampling is based on the image labels of the foreground and background points manually marked by the user: according to the image labels, the foreground and background of each grayscale image are sampled separately to obtain samples.
For the devices of embodiments three and four, each step corresponds to method embodiment one; for details, reference may be made to the related description of embodiment one. The term "computer-readable storage medium" should be understood as including a single medium or multiple media that store one or more instruction sets, and as including any medium that can store or encode an instruction set for execution by a processor and that causes the processor to perform any method of the present invention.
Experimental results:
Fundus macular images of different types were segmented with the above method; segmentation results are shown in Fig. 4. For the same images, ROC curves of the segmentation results of the present method, the HALT method, and the method proposed by Liu et al. are drawn as shown in Fig. 5. Table 6 compares the statistical results of the present method and the other two methods on 21 arbitrarily chosen images from the STARE data set.
Table 6

| Method | Sensitivity (%) | Specificity (%) | Accuracy (%) |
| --- | --- | --- | --- |
| Proposed method | 90.47 | 96.46 | 96.35 |
| HALT method | 85.75 | 92.69 | 92.58 |
| Method of Liu et al. | 84.04 | 91.75 | 91.69 |
The present invention combines supervised learning with low-level image features to learn a new feature descriptor: a generalized low-rank matrix approximation performs dimensionality reduction while a manifold regularization term acts as the supervision constraint, and iterative optimization yields a feature descriptor that is both low-dimensional and strongly discriminative. Compared with traditional hand-crafted descriptors, this descriptor is obtained by supervised learning, requires no manual selection or design, has stronger descriptive power, and leads to more accurate segmentation results.
Those skilled in the art will understand that the modules or steps of the present invention described above can be implemented with a general-purpose computing device; alternatively, they can be implemented with program code executable by a computing device, and thus stored in a storage device and executed by the computing device, or fabricated individually as integrated circuit modules, or fabricated with several of the modules or steps combined into a single integrated circuit module. The present invention is not restricted to any specific combination of hardware and software.
Although the above embodiments of the present invention have been described with reference to the accompanying drawings, they do not limit the scope of protection of the present invention. Those of ordinary skill in the art should understand that various modifications or variations that can be made on the basis of the technical solution of the present invention without creative work still fall within the scope of protection of the present invention.
Claims (10)
1. A classification model construction method for macular lesion (ARM) region segmentation in fundus images, characterized by comprising the following steps:
Step 1: select several fundus images and convert them to gray scale to obtain several gray-scale images; sample the foreground and background of the gray-scale images separately to obtain samples;
Step 2: obtain transformation matrices using the generalized low-rank approximation method, and perform dimensionality reduction on the samples based on the transformation matrices to obtain the low-rank approximation matrices of the samples;
Step 3: add label information to the low-rank approximation matrices of the samples as supervision, and build a manifold regularization term based on the low-rank approximation matrices and the label information;
Step 4: construct an objective function combining the generalized low-rank approximation method and the manifold regularization term, and solve it by iterative optimization to obtain the optimal transformation matrices and the optimal low-rank approximation matrices of the samples;
Step 5: build a classification model based on the optimal low-rank approximation matrices and the label information.
2. The fundus image ARM region segmentation method based on a supervised descriptor as claimed in claim 1, characterized in that step 1 specifically comprises:
Step 101: choose fundus images containing macular regions of different types and sizes from the STARE dataset, and convert them to gray scale;
Step 102: mark the positions of foreground points and background points by hand as image labels;
Step 103: according to the image labels, sample the foreground and background separately to obtain the samples.
3. The fundus image ARM region segmentation method based on a supervised descriptor as claimed in claim 1, characterized in that step 2 specifically comprises:
Step 201: construct an optimization problem expressing the original generalized low-rank approximation problem; the optimization problem minimizes the total reconstruction error of the principal components of the input matrix group, yielding two transformation matrices U and V and the low-rank representation matrices A_i. The formula is:
$$\underset{U,\,V,\,\{A_i\}_{i=1}^{n}}{\arg\min}\ \frac{1}{n}\sum_{i=1}^{n}\left\|S_i - U A_i V^{T}\right\|_F^2 \qquad \text{s.t.}\quad U^{T}U = I_{l_1},\ V^{T}V = I_{l_2}$$
where ‖·‖_F denotes the Frobenius norm, n the number of training samples, S_i the i-th training sample, A_i the low-rank approximation matrix corresponding to S_i, and U and V the two transformation matrices; I_{l_1} and I_{l_2} denote identity matrices;
Step 202: solve for the transformation matrices U and V, and approximately represent each sample S_i by A_i = U^T S_i V.
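Step 202 can be sketched as follows, assuming each sample S_i is a NumPy matrix and U, V have orthonormal columns (function names are hypothetical):

```python
import numpy as np

def project(samples, U, V):
    """Low-rank representation A_i = U^T S_i V for each sample matrix S_i."""
    return [U.T @ S @ V for S in samples]

def reconstruction_error(samples, U, V):
    """Mean squared Frobenius reconstruction error (1/n) sum_i ||S_i - U A_i V^T||_F^2."""
    A = project(samples, U, V)
    return np.mean([np.linalg.norm(S - U @ Ai @ V.T, 'fro') ** 2
                    for S, Ai in zip(samples, A)])
```

With U and V taken as full identity matrices the reconstruction is exact, which is a convenient sanity check before truncating to l_1 and l_2 columns.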
4. The fundus image ARM region segmentation method based on a supervised descriptor as claimed in claim 1, characterized in that step 3 specifically comprises:
Step 301: build a similarity matrix M whose element M_ij represents the similarity between training samples i and j;
Step 302: add the sample labels L ∈ {0, 1} as supervision to the low-rank representations {A_i}_{i=1}^n, exploit the geometry of the data distribution, and build the manifold regularization term $\sum_{i,j}\left\|A_i - A_j\right\|_F^2 M_{ij}$, where A_i and A_j are the low-rank approximation matrices of the i-th and j-th samples; this term reflects the manifold structure of the training samples.
The similarity matrix M in step 301 is constructed as follows: build a graph with n nodes, one node per sample; connect nodes i and j if i is among the k nearest neighbours of j or j is among the k nearest neighbours of i; M_ij is then given by:
$$M_{ij} = e^{-\frac{\left\|L_i - L_j\right\|^2}{\alpha}},\qquad i, j = 1, 2, \ldots, n$$
where α is a parameter, and L_i and L_j are the labels of training samples i and j: L = 1 if the training sample belongs to the foreground, and L = 0 if it belongs to the background.
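The similarity-matrix construction above might be sketched as follows (the feature space used for the k-nearest-neighbour search is an assumption, since the claim does not fix it; plain Euclidean distance is used here):

```python
import numpy as np

def similarity_matrix(labels, k=5, alpha=1.0, features=None):
    """Label-based similarity M_ij = exp(-||L_i - L_j||^2 / alpha) on a
    k-nearest-neighbour graph; non-neighbour pairs get weight 0."""
    n = len(labels)
    if features is None:                       # fall back to the labels themselves
        features = np.asarray(labels, float).reshape(n, 1)
    D = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    M = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]       # k nearest neighbours of i (self excluded)
        for j in nbrs:                         # connect i and j in both directions
            w = np.exp(-(labels[i] - labels[j]) ** 2 / alpha)
            M[i, j] = M[j, i] = w
    return M
```

Same-label neighbours receive weight 1 and opposite-label neighbours e^{-1/α}, so the regularization term pulls together the low-rank representations of nearby samples with the same label.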
5. The fundus image ARM region segmentation method based on a supervised descriptor as claimed in claim 1, characterized in that step 4 specifically comprises:
Step 401: construct the objective function combining the generalized low-rank approximation method and the regularization term:
$$\underset{U,\,V,\,\{A_i\}_{i=1}^{n}}{\arg\min}\ \frac{1}{n}\sum_{i=1}^{n}\left\|S_i - U A_i V^{T}\right\|_F^2 + \gamma\sum_{i,j}\left\|A_i - A_j\right\|_F^2 M_{ij} \qquad \text{s.t.}\quad U^{T}U = I_{l_1},\ V^{T}V = I_{l_2}$$
where γ ∈ (0, ∞) is a parameter;
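The objective of step 401 can be evaluated directly; a sketch (function names hypothetical):

```python
import numpy as np

def objective(samples, U, V, M, gamma):
    """Mean Frobenius reconstruction error plus the manifold regularization
    term gamma * sum_ij ||A_i - A_j||_F^2 * M_ij, with A_i = U^T S_i V."""
    A = [U.T @ S @ V for S in samples]
    n = len(samples)
    recon = sum(np.linalg.norm(S - U @ Ai @ V.T, 'fro') ** 2
                for S, Ai in zip(samples, A)) / n
    manifold = sum(np.linalg.norm(A[i] - A[j], 'fro') ** 2 * M[i, j]
                   for i in range(n) for j in range(n))
    return recon + gamma * manifold
```

Monitoring this value across iterations is a practical convergence check for the alternating optimization of step 402.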
Step 402: solve for the optimal U, V and {A_i}_{i=1}^n by iterative optimization, which proceeds as follows:
rewrite the objective function as:
$$\underset{U,\,V}{\arg\max}\ \frac{1}{n}\sum_{i=1}^{n}\mathrm{Tr}\!\left(U^{T} S_i V V^{T} S_i^{T} U\right) - \gamma\,\mathrm{Tr}\!\left(\sum_{i,j} U^{T}\left(S_i - S_j\right) V M_{ij} V^{T} \left(S_i - S_j\right)^{T} U\right)$$
Given an initial V_0 = (I_0, 0)^T, where I_0 is an identity matrix, the optimal U is found from:
$$X_U = \frac{1}{n}\sum_{i=1}^{n} S_i V V^{T} S_i^{T} - \gamma \sum_{i,j}\left(S_i - S_j\right) V M_{ij} V^{T} \left(S_i - S_j\right)^{T}$$
The formula reaches its maximum, and the optimal solution is obtained, only when U consists of the eigenvectors corresponding to the l_1 largest eigenvalues of X_U. With the optimal U so computed, the optimal V is found from:
$$X_V = \frac{1}{n}\sum_{i=1}^{n} S_i^{T} U U^{T} S_i - \gamma \sum_{i,j}\left(S_i - S_j\right)^{T} U M_{ij} U^{T} \left(S_i - S_j\right)$$
The formula reaches its maximum, and the optimal solution is obtained, only when V consists of the eigenvectors corresponding to the l_2 largest eigenvalues of X_V. Based on the V thus computed, U is updated by computing the eigenvectors of X_U; this process is repeated until convergence, finally yielding the optimal U, V and {A_i}_{i=1}^n.
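The alternating eigen-decomposition of step 402 might be sketched as follows (a sketch, not the definitive implementation: the function name is hypothetical, and a fixed iteration budget replaces the convergence test):

```python
import numpy as np

def fit_glram_manifold(samples, M, l1, l2, gamma=0.1, iters=20):
    """Alternate between U <- top-l1 eigenvectors of X_U and
    V <- top-l2 eigenvectors of X_V, then return A_i = U^T S_i V."""
    n = len(samples)
    r, c = samples[0].shape
    V = np.eye(c)[:, :l2]                       # initial V_0 = (I_0, 0)^T
    U = np.eye(r)[:, :l1]
    for _ in range(iters):
        XU = sum(S @ V @ V.T @ S.T for S in samples) / n
        XU -= gamma * sum((samples[i] - samples[j]) @ V @ (M[i, j] * V.T)
                          @ (samples[i] - samples[j]).T
                          for i in range(n) for j in range(n))
        w, Q = np.linalg.eigh(XU)
        U = Q[:, np.argsort(w)[::-1][:l1]]      # eigenvectors of the l1 largest eigenvalues
        XV = sum(S.T @ U @ U.T @ S for S in samples) / n
        XV -= gamma * sum((samples[i] - samples[j]).T @ U @ (M[i, j] * U.T)
                          @ (samples[i] - samples[j])
                          for i in range(n) for j in range(n))
        w, Q = np.linalg.eigh(XV)
        V = Q[:, np.argsort(w)[::-1][:l2]]      # eigenvectors of the l2 largest eigenvalues
    A = [U.T @ S @ V for S in samples]
    return U, V, A
```

Both X_U and X_V are symmetric by construction, so `np.linalg.eigh` applies and the selected eigenvector columns stay orthonormal, preserving the constraints U^T U = I and V^T V = I.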
6. The fundus image ARM region segmentation method based on a supervised descriptor as claimed in claim 1, characterized in that step 5 specifically comprises:
Step 501: vectorize the optimal low-rank approximation matrices of the samples to obtain feature vectors;
Step 502: train an SVM classifier with the feature vectors and the corresponding labels to obtain the trained classifier.
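Steps 501–502 can be sketched with scikit-learn (the SVM kernel and hyperparameters are assumptions, since the claim does not specify them):

```python
import numpy as np
from sklearn.svm import SVC

def train_classifier(A_list, labels):
    """Vectorize each optimal low-rank matrix A_i (step 501) and
    train an SVM classifier on the resulting vectors (step 502)."""
    X = np.stack([A.ravel() for A in A_list])   # vectorization
    clf = SVC(kernel='rbf')                     # kernel choice is an assumption
    clf.fit(X, labels)
    return clf
```

The returned classifier is then applied to the low-rank representations of test patches in step 603.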
7. The fundus image ARM region segmentation method based on a supervised descriptor as claimed in claim 1, characterized in that step 6 specifically comprises:
Step 601: convert the test image to gray scale and sample it;
Step 602: perform dimensionality reduction on the samples of the test image using the optimal transformation matrices to obtain the optimal low-rank approximation matrices of the test image;
Step 603: feed the optimal low-rank approximation matrices of the test image to the SVM classifier to obtain the classification results, and thereby the segmentation result.
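Steps 601–603 might be sketched as follows, assuming a trained classifier `clf` with a `predict` method and known patch positions (function and parameter names are hypothetical):

```python
import numpy as np

def segment(test_samples, positions, U, V, clf, shape):
    """Project each test patch with the optimal U, V (step 602), classify
    the projections (step 603), and mark predicted foreground pixels."""
    X = np.stack([(U.T @ S @ V).ravel() for S in test_samples])
    preds = clf.predict(X)
    mask = np.zeros(shape, dtype=bool)
    for (r, c), p in zip(positions, preds):
        mask[r, c] = bool(p)            # foreground points form the segmentation
    return mask
```

The boolean mask of foreground points is the segmentation result described in claim 8.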
8. A fundus image ARM region segmentation method based on the classification model of any one of claims 1-7, characterized by comprising:
Step 1: classify the test image based on the classification model to obtain its foreground and background points;
Step 2: take the region formed by the foreground points as the segmentation result.
9. A computer device for building a classification model for macular lesion region segmentation in fundus images, characterized by comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, the processor implementing the following steps when executing the program:
receive the user's selection of fundus training images, and convert the training images to gray scale to obtain gray-scale images; sample the foreground and background of the gray-scale images separately to obtain samples;
obtain transformation matrices using the generalized low-rank approximation method, and perform dimensionality reduction on the samples based on the transformation matrices to obtain the low-rank approximation matrices of the samples;
add label information to the low-rank approximation matrices of the samples as supervision, and build a manifold regularization term based on the low-rank approximation matrices and the label information;
construct an objective function combining the generalized low-rank approximation method and the manifold regularization term, and solve it by iterative optimization to obtain the optimal transformation matrices and the optimal low-rank approximation matrices of the samples;
build a classification model based on the optimal low-rank approximation matrices and the label information.
10. A computer-readable storage medium on which a computer program is stored, for building a classification model for macular lesion region segmentation in fundus images, characterized in that the program, when executed by a processor, implements the following steps:
receive the user's selection of fundus training images, and convert the training images to gray scale to obtain gray-scale images; sample the foreground and background of the gray-scale images separately to obtain samples;
obtain transformation matrices using the generalized low-rank approximation method, and perform dimensionality reduction on the samples based on the transformation matrices to obtain the low-rank approximation matrices of the samples;
add label information to the low-rank approximation matrices of the samples as supervision, and build a manifold regularization term based on the low-rank approximation matrices and the label information;
construct an objective function combining the generalized low-rank approximation method and the manifold regularization term, and solve it by iterative optimization to obtain the optimal transformation matrices and the optimal low-rank approximation matrices of the samples;
build a classification model based on the optimal low-rank approximation matrices and the label information.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710661951.7A CN107437252B (en) | 2017-08-04 | 2017-08-04 | Method and device for constructing classification model for macular lesion region segmentation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107437252A true CN107437252A (en) | 2017-12-05 |
CN107437252B CN107437252B (en) | 2020-05-29 |
Family
ID=60459855
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710661951.7A Active CN107437252B (en) | 2017-08-04 | 2017-08-04 | Method and device for constructing classification model for macular lesion region segmentation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107437252B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108717696A (en) * | 2018-05-16 | 2018-10-30 | 上海鹰瞳医疗科技有限公司 | Macula lutea image detection method and equipment |
CN109199322A (en) * | 2018-08-31 | 2019-01-15 | 福州依影健康科技有限公司 | A kind of macula lutea detection method and a kind of storage equipment |
CN110032704A (en) * | 2018-05-15 | 2019-07-19 | 腾讯科技(深圳)有限公司 | Data processing method, device, terminal and storage medium |
CN110675339A (en) * | 2019-09-16 | 2020-01-10 | 山东师范大学 | Image restoration method and system based on edge restoration and content restoration |
CN112435281A (en) * | 2020-09-23 | 2021-03-02 | 山东师范大学 | Multispectral fundus image analysis method and system based on counterstudy |
CN113222998A (en) * | 2021-04-13 | 2021-08-06 | 天津大学 | Semi-supervised image semantic segmentation method and device based on self-supervised low-rank network |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105427296A (en) * | 2015-11-11 | 2016-03-23 | 北京航空航天大学 | Ultrasonic image low-rank analysis based thyroid lesion image identification method |
CN106530283A (en) * | 2016-10-20 | 2017-03-22 | 北京工业大学 | SVM (support vector machine)-based medical image blood vessel recognition method |
Non-Patent Citations (1)

Title |
---|
Zhao Yangyang, "Research on Face Recognition Methods Based on Low-Rank Matrix Approximation", China Master's Theses Full-text Database, Information Science and Technology * |
Also Published As
Publication number | Publication date |
---|---|
CN107437252B (en) | 2020-05-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107437252A (en) | Disaggregated model construction method and equipment for ARM region segmentation | |
Al-Haija et al. | Breast cancer diagnosis in histopathological images using ResNet-50 convolutional neural network | |
US10706333B2 (en) | Medical image analysis method, medical image analysis system and storage medium | |
CN107203999B (en) | Dermatoscope image automatic segmentation method based on full convolution neural network | |
CN106682616B (en) | Method for recognizing neonatal pain expression based on two-channel feature deep learning | |
JP6522161B2 (en) | Medical data analysis method based on deep learning and intelligent analyzer thereof | |
CN108648191B (en) | Pest image recognition method based on Bayesian width residual error neural network | |
Li et al. | Deep convolutional neural networks for imaging data based survival analysis of rectal cancer | |
CN109584254A (en) | A kind of heart left ventricle's dividing method based on the full convolutional neural networks of deep layer | |
CN106971198A (en) | A kind of pneumoconiosis grade decision method and system based on deep learning | |
CN106296699A (en) | Cerebral tumor dividing method based on deep neural network and multi-modal MRI image | |
CN108960289B (en) | Medical image classification device and method | |
CN106056595A (en) | Method for automatically identifying whether thyroid nodule is benign or malignant based on deep convolutional neural network | |
CN112508110A (en) | Deep learning-based electrocardiosignal graph classification method | |
CN111090764B (en) | Image classification method and device based on multitask learning and graph convolution neural network | |
CN108053398A (en) | A kind of melanoma automatic testing method of semi-supervised feature learning | |
CN109191445A (en) | Bone deformation analytical method based on artificial intelligence | |
CN112465905A (en) | Characteristic brain region positioning method of magnetic resonance imaging data based on deep learning | |
CN110263880A (en) | Construction method, device and the intelligent terminal of cerebral disease disaggregated model | |
Yonekura et al. | Improving the generalization of disease stage classification with deep CNN for glioma histopathological images | |
CN112750531A (en) | Automatic inspection system, method, equipment and medium for traditional Chinese medicine | |
Bhimavarapu et al. | Analysis and characterization of plant diseases using transfer learning | |
CN110472694A (en) | A kind of Lung Cancer Images pathological classification method and device | |
Fayyadh et al. | Brain tumor detection and classifiaction using CNN algorithm and deep learning techniques | |
Jose et al. | Liver Tumor Classification using Optimal Opposition-Based Grey Wolf Optimization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||