CN101667289A - Retinal image segmentation method based on NSCT feature extraction and supervised classification - Google Patents
- Publication number: CN101667289A (application number CN200810232337A)
- Authority: CN (China)
- Legal status: Granted
Abstract
The invention discloses a retinal image segmentation method based on NSCT feature extraction and supervised classification, which relates to medical image processing. The method comprises the following steps: (1) obtain the regions of interest of a retinal training image and of a retinal image to be segmented from their red components; (2) perform iterative edge expansion of the regions of interest on the green components of the retinal training image and of the retinal image to be segmented; (3) apply the NSCT to the expanded images, decomposing each into i levels; (4) extract a one-dimensional feature from the sub-band coefficients of the j directions of each level, extract the features level by level to form feature vectors, and normalize the feature vectors; (5) build training samples from the normalized feature vectors of the retinal training image; (6) select a classifier, train it with the training samples, and input the normalized feature vectors of the retinal image to be segmented into the classifier to segment the retinal image to be segmented. The invention yields segmentations with clear vessel edges and high accuracy and is used for retinal segmentation in medical images.
Description
Technical field
The invention belongs to the technical field of image processing and relates to retina detection; it is used in medicine to extract retinal vessels from retinal images.
Background technology
The development of medicine is closely related to human health, so digital image processing has attracted great interest in biomedicine from the start. As early as the late 1970s, surveys of the literature pointed out that medical image processing was one of the most widespread applications of image processing. Both in the basic medical disciplines and in clinical practice there are a great many fields in which image processing is used. However, because processing medical images is technically difficult, many methods have struggled to reach the level required for clinical practice. In recent years, with the falling cost of digital image processing equipment, improving the quality of all kinds of medical images with digital image processing has become practical.
Cardiovascular and cerebrovascular diseases such as hypertension, cerebral arteriosclerosis, and coronary sclerosis are currently major causes of death and morbidity among the elderly in China, and the damage caused by such diseases first manifests as changes at the level of the microcirculation and the capillaries. The retinal microvasculature of the fundus is the only deep capillary bed of the human body that can be observed directly and non-invasively, and the degree of its changes is closely related to the course, severity, and prognosis of diseases such as hypertension. Examination of the retinal vascular system can reveal hypertension, diabetes, arteriosclerosis, and other diseases. In retinal images the vessels are the dominant and structurally stable features, so reliable vessel extraction is a precondition for retinal image analysis and processing.
Existing retinal segmentation methods fall mainly into two categories. The first uses Gaussian filters, the Hessian matrix, and similar operators to enhance the contrast between vessels and background, and then applies further operations such as thresholding or region growing. In 2003, Jiang et al., in "Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images", proposed an adaptive local thresholding scheme based on verification of multiple thresholds: a hypothesized threshold is first used for binarization to obtain candidate objects, and a verification procedure then decides whether to accept or reject each object.
The second category uses a classifier to segment the vessels. In such methods the key issue is how to construct the feature vector. In 2004, J. Staal et al., in "Ridge based vessel segmentation in color images of the retina", proposed to obtain feature vectors from ridge detection and then segment the vessels. In 2006, Soares et al., in "Retinal Vessel Segmentation Using the 2-D Gabor Wavelet and Supervised Classification", proposed to extract multi-scale, multi-directional features based on the Gabor wavelet and then segment by supervised classification.
In retinal vessel segmentation the most important goal is to improve the vessel detection rate. However, because the global gray level of retinal images is uneven, the contrast between vessels and background is poor, and noise and various lesion regions interfere, the methods above all produce segmentation errors, and their accuracy still needs to be improved.
Summary of the invention
The purpose of this invention is to provide a retinal image segmentation method based on the Nonsubsampled Contourlet Transform (NSCT) and supervised classification, which overcomes the deficiencies of the prior art and further improves segmentation accuracy.
To achieve the above object, the technical scheme of the present invention comprises the following steps:
One. Feature extraction steps
(1) for the retina training image and the retinal image to be segmented, obtain the region of interest (ROI) from their red components;
(2) perform iterative edge expansion of the region of interest on the green components of the retina training image and of the retinal image to be segmented, respectively;
(3) apply the NSCT to the expanded retina training image and retinal image to be segmented, respectively, decomposing each into i levels with j directional sub-bands per level;
(4) extract a one-dimensional feature from the j directional sub-band coefficients of each level, extract the features level by level to form feature vectors, and normalize the feature vectors (a sketch of one possible normalization is given after step (3c) below);
Two. Classifier training and segmentation steps
1) build training samples from the normalized feature vectors of the retina training image;
2) select a classifier, train it with the training samples, and input the normalized feature vectors of the retinal image to be segmented into the classifier to segment the retinal image to be segmented.
In the above retinal image segmentation method, the extraction in step (3) of a one-dimensional feature from the j directional sub-band coefficients of each level and the level-by-level formation of the feature vector are carried out as follows:
(3a) for the j directional sub-band coefficients of each decomposition level, use the comparison between the vessel gray level and the background gray level in the retinal image to select the feature: if the vessel gray level is lower than the background gray level, take the minimum coefficient as the feature; if the vessel gray level is higher than the background gray level, take the maximum coefficient as the feature;
(3b) following step (3a), perform the same feature extraction level by level, and assemble the obtained features into a feature vector;
(3c) add the gray value of the green component of the retina training image and of the retinal image to be segmented to the feature vector as a one-dimensional feature, obtaining the final feature vector v.
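The patent requires the feature vectors of step (4) to be normalized but does not specify how. The following is a minimal sketch, assuming a per-feature z-score normalization computed over the ROI pixels; the function name normalize_features and the choice of normalization are illustrative assumptions, not part of the patent.

```python
import numpy as np

def normalize_features(v, roi):
    """Hypothetical per-feature z-score normalization of an (H, W, 4) feature
    array v, using only the pixels inside the boolean ROI mask; the concrete
    normalization is not specified in the patent and is assumed here."""
    samples = v[roi]                      # (N, 4) feature vectors inside the ROI
    mean = samples.mean(axis=0)
    std = samples.std(axis=0) + 1e-12     # guard against zero variance
    return (v - mean) / std
```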
Because the present invention builds on the multi-scale, multi-directional, and shift-invariant properties of the NSCT and combines them with the concrete behaviour of the NSCT coefficients of retinal images, a new feature extraction method is proposed: the features extracted at different scales describe retinal vessels of different widths, and the features obtained over several scales are combined into the feature vector, so the segmentation at vessel boundaries is comparatively accurate. At the same time, because the invention adopts a supervised approach of training first and classifying afterwards, the segmentation is carried out under supervision and the segmentation error is small. Simulation results show that the present invention achieves better segmentation accuracy than existing retinal image segmentation methods.
Description of drawings
Fig. 1 is a schematic flow chart of the implementation of the present invention;
Fig. 2 shows the intermediate results of obtaining the ROI of a retinal image in the present invention;
Fig. 3 shows the result of ROI edge expansion on the green component of the retina;
Fig. 4 is a schematic diagram of the feature extraction principle;
Fig. 5 shows two images selected from the experimental results on the DRIVE database;
Fig. 6 shows two images selected from the experimental results on the STARE database;
Fig. 7 shows the ROC curves of the results of testing on the DRIVE database;
Fig. 8 compares the simulation results of the method of the present invention and the existing method of Soares et al.
Detailed implementation
With reference to Fig. 1, the concrete implementation of the present invention is as follows:
(1.1) Divide the red component of the retinal image shown in Fig. 2(a) by 255, giving Fig. 2(b), and perform edge detection on Fig. 2(b) with a Laplacian-of-Gaussian (LoG) filter, obtaining Fig. 2(c);
(1.2) Apply dilation followed by erosion to Fig. 2(c) to connect the breaks in the edges, obtaining Fig. 2(d);
(1.3) Add a contour along the image border in Fig. 2(d);
(1.4) Determine the exterior region by thresholding: find the maximum gray value max_red in Fig. 2(b) and label the points whose gray value is less than max_red × 0.15 as 1, which yields a binary image; then remove the parts labelled 1 whose area is smaller than 10 pixels, obtaining Fig. 2(e);
(1.5) Remove the contour added in step (1.3) from Fig. 2(e), then fill Fig. 2(c), with the points labelled 1 in Fig. 2(e) as the filling seeds, as in Fig. 2(f);
(1.6) Invert Fig. 2(f) and then erode it, obtaining Fig. 2(g);
(1.7) Invert Fig. 2(g), remove the objects labelled 1 whose area is smaller than 5000 pixels, and invert again to fill the missing regions; the result is Fig. 2(h);
(1.8) Apply a morphological opening to Fig. 2(h) to remove false contours, again remove the objects labelled 1 whose area is smaller than 5000 pixels, and finally obtain the region of interest (ROI), as in Fig. 2(i). A sketch of a simplified version of this procedure follows.
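As a rough illustration of steps (1.1)-(1.8), the sketch below builds the aperture mask from the red channel with standard morphological operations from scipy.ndimage and scikit-image. It is a simplification under stated assumptions: the LoG edge detection and seed filling of steps (1.2)-(1.5) are replaced here by the darkness threshold of step (1.4) plus hole filling, and the function name extract_roi is illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import morphology

def extract_roi(red, thr_ratio=0.15, min_size=5000):
    """Approximate ROI (camera aperture) extraction from the red channel,
    loosely following steps (1.1)-(1.8); thresholds are the ones quoted in
    the text, the remaining choices are assumptions."""
    red = red.astype(float) / 255.0                          # step (1.1)
    exterior = red < red.max() * thr_ratio                   # step (1.4): dark exterior points
    exterior = morphology.remove_small_objects(exterior, min_size=10)
    roi = ~exterior                                          # aperture = non-exterior pixels
    roi = ndi.binary_fill_holes(roi)                         # steps (1.6)-(1.7): fill missing regions
    roi = ndi.binary_opening(roi)                            # step (1.8): remove false contours
    roi = morphology.remove_small_objects(roi, min_size=min_size)
    return roi
```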
Step 2: perform ROI edge expansion on the green components of the retina training image and the retinal image to be segmented, to eliminate the strong contrast between the retinal fundus and the region outside the aperture and so avoid a large number of false detections at the aperture edge. The concrete steps are as follows:
(2.1) find the pixels on the outer boundary of the ROI, i.e. the pixels that lie outside the ROI but are 4-neighbours of pixels inside it;
(2.2) for each outer-boundary pixel obtained in step (2.1), set its gray value to the mean of the pixels belonging to the ROI within a small window centred on that pixel; the window size can be chosen as 5 × 5;
(2.3) add the outer-boundary pixels obtained in step (2.2) to the ROI, and repeat the procedure for a certain number of iterations, usually 80 (a sketch follows).
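The sketch below follows steps (2.1)-(2.3) directly: border pixels 4-adjacent to the ROI receive the mean green value of the ROI pixels in a 5 × 5 window and are then merged into the ROI, for 80 iterations. The function name and the handling of windows at the image border are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def expand_roi_edge(green, roi, iterations=80, win=5):
    """Iterative ROI edge expansion (steps 2.1-2.3): in each pass, pixels that
    are 4-adjacent to the ROI get the mean green value of the ROI pixels in a
    win x win window around them, and are then added to the ROI."""
    green = green.astype(float)
    roi = roi.astype(bool).copy()
    cross = ndi.generate_binary_structure(2, 1)          # 4-connectivity
    half = win // 2
    for _ in range(iterations):
        border = ndi.binary_dilation(roi, structure=cross) & ~roi   # step (2.1)
        ys, xs = np.nonzero(border)
        for y, x in zip(ys, xs):                          # step (2.2)
            y0, y1 = max(y - half, 0), y + half + 1
            x0, x1 = max(x - half, 0), x + half + 1
            patch_roi = roi[y0:y1, x0:x1]
            if patch_roi.any():
                green[y, x] = green[y0:y1, x0:x1][patch_roi].mean()
        roi |= border                                     # step (2.3)
    return green, roi
```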
Fig. 3 shows the result of ROI edge expansion on the green component of the retinal image. Fig. 3(a) is the green component of the retinal image shown in Fig. 2(a), and Fig. 3(b) is the result of applying ROI edge expansion to Fig. 3(a).
Step 3: apply the Nonsubsampled Contourlet Transform (NSCT) to the expanded retina training image and retinal image to be segmented, respectively.
The NSCT is a multi-resolution, local, multi-directional image representation. It inherits the multi-resolution time-frequency analysis property of the wavelet transform and in addition has good anisotropy, so it can represent smooth curves with fewer coefficients than wavelets. The NSCT is composed of a nonsubsampled multi-scale decomposition and nonsubsampled multi-directional filtering, and it is shift-invariant. The nonsubsampled multi-scale decomposition is realised with a nonsubsampled Laplacian pyramid, and the nonsubsampled directional filtering with a nonsubsampled directional filter bank. The expanded retina training image and retinal image to be segmented are decomposed with the NSCT; in the present invention the decomposition uses 4 levels with 8 directions per level.
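No widely available Python NSCT implementation is assumed here (the usual reference is the MATLAB nonsubsampled contourlet toolbox), so the later sketches only rely on a decomposition function with the hypothetical interface below; the name nsct_decompose, the return layout, and the scale ordering are assumptions.

```python
from typing import List
import numpy as np

def nsct_decompose(image: np.ndarray, levels: int = 4,
                   directions: int = 8) -> List[List[np.ndarray]]:
    """Hypothetical interface: returns `levels` lists of `directions`
    coefficient maps, each the same shape as `image` (the NSCT is
    non-subsampled), ordered from coarsest to finest scale (assumed)."""
    raise NotImplementedError("plug in an NSCT implementation, e.g. a port of "
                              "the nonsubsampled contourlet toolbox")
```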
Step 4: extract features from the retinal images. The principle of the features is as follows: in the NSCT domain of the image, take the coefficient map of any one direction on the 3rd level, as in Fig. 4(a). In this coefficient map the vessel edge is a zero crossing, i.e. the place where the coefficients change sign. If a vessel were very wide, say hundreds of pixels, the coefficients across it in the direction perpendicular to the vessel would have the shape of Fig. 4(b). As the vessel width decreases, the two zero crossings move towards each other, and Fig. 4(b) eventually tends to Fig. 4(c). In retinal images the widest vessels are only a few tens of pixels wide, so their NSCT coefficients tend to take the form of Fig. 4(c).
The feature extraction process is as follows:
(4.1) take the 2nd, 3rd and 4th levels of the NSCT decomposition of the retinal image to generate the feature vector;
(4.2) for the 8 directional sub-band coefficients of each decomposition level, use the comparison between the vessel gray level and the background gray level in the retinal image to select the feature: if the vessel gray level is lower than the background gray level, take the minimum coefficient as the feature, as in Fig. 4(d); if the vessel gray level is higher than the background gray level, take the maximum coefficient as the feature;
(4.3) following step (4.2), perform the same feature extraction level by level, and assemble the obtained features into a feature vector;
(4.4) add the gray value of the green component of the retina training image and of the retinal image to be segmented to the feature vector as a one-dimensional feature, obtaining the final feature vector v = {v_i | i = 1, 2, 3, 4}. A sketch of this extraction is given below.
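Given sub-bands laid out as in the hypothetical nsct_decompose interface above, steps (4.1)-(4.4) can be sketched as follows: at every pixel, the minimum coefficient over the 8 directions is taken on levels 2-4 (vessels are darker than the background in the green channel; the maximum would be taken otherwise), and the green intensity is appended, giving a 4-dimensional feature vector per pixel. The zero-based level indexing is an assumption about the decomposition ordering.

```python
import numpy as np

def extract_feature_vectors(green, subbands, vessels_darker=True):
    """Per-pixel feature vector v = {v_i | i = 1..4} of step 4: one coefficient
    per level (levels 2-4) plus the green-channel gray value."""
    feats = []
    for level in (1, 2, 3):                        # levels 2, 3, 4 (0-based, ordering assumed)
        stack = np.stack(subbands[level], axis=0)  # (8 directions, H, W)
        # step (4.2): min over directions if vessels are darker than background,
        # max otherwise
        feats.append(stack.min(axis=0) if vessels_darker else stack.max(axis=0))
    feats.append(green.astype(float))              # step (4.4): green intensity
    v = np.stack(feats, axis=-1)                   # (H, W, 4) feature array
    return v   # normalize afterwards, e.g. with the normalize_features sketch above
```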
Step 5: select a classifier, train it with the training samples, input the normalized feature vectors of the retinal image to be segmented into the trained classifier, and segment the retinal image to be segmented.
The present invention uses a Bayes classifier whose class-conditional probability density functions are represented by Gaussian mixture models; the pixels of the retinal image are divided into two classes: C_1 = {vessel pixels} and C_2 = {background pixels}.
The Bayes decision rule is as follows:
if p(v|C_1)P(C_1) > p(v|C_2)P(C_2) then decide C_1; otherwise C_2;
where p(v|C_i) is the class-conditional probability density function (also called the likelihood), P(C_i) is the prior probability of class C_i, and v is the feature vector.
P(C_i) is estimated as N_i / N, i.e. the proportion of class-C_i samples in the training set. The class-conditional density p(v|C_i) is represented by a Gaussian mixture model, i.e. a linear combination of Gaussian components:
p(v|C_i) = sum over j = 1, ..., k_i of P_ij p(v|j, C_i)
where k_i is the number of Gaussian components representing p(v|C_i), p(v|j, C_i) is a multivariate Gaussian distribution, and P_ij is its weight.
For each class C_i, k_i Gaussian components are assumed, and the parameters and weights of each Gaussian are estimated with the expectation-maximization (EM) algorithm.
When segmenting the image to be segmented with the classifier, a probability map is first computed, in which the value of each pixel is
P_v = p(C_1|v) / (p(C_2|v) + p(C_1|v));
the pixels of the probability map are then divided into two classes with the threshold p = 0.5, which completes the classification. A sketch of this classifier is given below.
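A minimal sketch of the GMM-based Bayes classifier of step 5, using sklearn.mixture.GaussianMixture (which fits the mixture weights, means and covariances with EM) instead of a hand-written EM routine; the helper names and the use of scikit-learn are illustrative assumptions, not part of the patent.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_bayes_gmm(X_vessel, X_background, k=20, seed=0):
    """Fit one k-component GMM per class and estimate the class priors
    from the training-sample proportions."""
    gmm_v = GaussianMixture(n_components=k, random_state=seed).fit(X_vessel)
    gmm_b = GaussianMixture(n_components=k, random_state=seed).fit(X_background)
    n_v, n_b = len(X_vessel), len(X_background)
    priors = np.array([n_v, n_b], dtype=float) / (n_v + n_b)   # P(C_1), P(C_2)
    return gmm_v, gmm_b, priors

def probability_map(v, roi, gmm_v, gmm_b, priors):
    """Compute P_v = p(C_1|v) / (p(C_1|v) + p(C_2|v)) for every ROI pixel."""
    P = np.zeros(v.shape[:2])
    X = v[roi]                                                 # (N, 4) feature vectors
    log_post_v = gmm_v.score_samples(X) + np.log(priors[0])    # log p(v|C_1)P(C_1)
    log_post_b = gmm_b.score_samples(X) + np.log(priors[1])    # log p(v|C_2)P(C_2)
    P[roi] = 1.0 / (1.0 + np.exp(log_post_b - log_post_v))     # normalised posterior
    return P

# Thresholding the probability map at p = 0.5 gives the vessel mask:
#   vessels = probability_map(v, roi, gmm_v, gmm_b, priors) > 0.5
```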
The effect of the present invention is further illustrated by the following simulation images and data.
One. Simulation images
The retinal images used in the present invention come from two public colour image databases, DRIVE and STARE. The DRIVE database consists of 40 colour images, 7 of which contain pathology, together with their manual segmentations. The 40 images are divided into two groups, a training set and a test set, each containing 20 images; the test set contains 3 pathological images. The images were segmented manually by personnel trained by professional ophthalmologists. The manual segmentations of the training set by the first group of observers are stored in set A. The manual segmentations of the test set by the first group are also stored in set A, and the test set was additionally segmented by a second group of observers, whose results are stored in set B. In set A 12.7% of the pixels are marked as vessel, and in set B 12.3%.
The STARE database consists of 20 digitized slides, ten of which contain pathology. The images were segmented by two observers, stored in the "ah" set and the "vk" set respectively. In set A (the "ah" set), i.e. the first observer's result, 10.4% of the pixels are marked as vessel; in set B (the "vk" set), i.e. the second observer's result, 14.9% are marked as vessel. The two observers' segmentations differ considerably: the second observer marked many more fine vessels than the first, which shows that the first observer is more conservative than the second.
For the DRIVE database, the training samples are produced from the 20 labelled images of the training set; the classifier is trained on them and then used to classify the 20 images to be segmented in the test set.
For the 20 images of the STARE database, the present invention uses the leave-one-out method: one image at a time is used as the image to be segmented and all the remaining images are used as training images.
For both databases, set A is used when building the training sample set. Because the training sample set is very large, in all experiments 1,000,000 samples are randomly selected to train the classifier (a sketch follows). In the tests, the number of Gaussian components of the class-conditional densities of the GMM classifier is taken to be the same for vessels and background, k = k_1 = k_2.
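How the 1,000,000 training samples might be drawn at random from the labelled training pixels is sketched below; the function name and the assumption that the training features and labels have already been gathered into single arrays are illustrative.

```python
import numpy as np

def sample_training_pixels(X, y, n_samples=1_000_000, seed=0):
    """Randomly select n_samples (feature vector, label) pairs.
    X: (N, 4) feature vectors gathered from the ROI pixels of the training
    images; y: (N,) boolean vessel/background labels from the set A manual
    segmentations."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_samples, len(X)), replace=False)
    return X[idx], y[idx]
```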
Two. Objective evaluation indices
Two quantitative measures are reported: the area under the ROC curve, Az, and the accuracy. The ROC curve is obtained from the probability map P_v produced by the classifier: as the threshold p varies from 0 to 1, it plots the true positive rate against the false positive rate. The true positive rate is the ratio of the number of pixels classified as vessel that really belong to vessels to the total number of actual vessel pixels. The false positive rate is the ratio of the number of non-vessel pixels classified as vessel to the total number of actual non-vessel pixels. The closer the ROC curve is to the upper-left corner, i.e. the closer the area under the curve Az is to 1, the better the method performs. The accuracy is the ratio of the number of correctly classified pixels, counting both vessel and non-vessel pixels, to the total number of pixels in the retinal image. A sketch of these measures is given below.
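Both measures can be computed directly from the probability map and a reference manual segmentation; the sketch below uses sklearn.metrics.roc_auc_score for Az and evaluates accuracy, true positive rate, and false positive rate at the fixed threshold p = 0.5. The helper name evaluate is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(prob_map, manual, roi, threshold=0.5):
    """Az (area under the ROC curve) and accuracy over the ROI pixels.
    prob_map: probability map P_v, manual: boolean reference segmentation."""
    scores = prob_map[roi]
    truth = manual[roi].astype(bool)
    az = roc_auc_score(truth.astype(int), scores)           # ROC area under curve
    pred = scores > threshold
    accuracy = np.mean(pred == truth)                       # correctly classified / all pixels
    tpr = np.sum(pred & truth) / np.sum(truth)              # true positive rate
    fpr = np.sum(pred & ~truth) / np.sum(~truth)            # false positive rate
    return az, accuracy, tpr, fpr
```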
Three. Simulation results and analysis
Fig. 5 shows the results for two images selected from the DRIVE database with k = 20, together with their manual segmentations. Fig. 5(a) is the probability map, Fig. 5(b) the segmentation result, Fig. 5(c) the manual segmentation in set A, and Fig. 5(d) the manual segmentation in set B.
Fig. 6 shows the experimental results for two images selected from the STARE database with k = 20, together with their manual segmentations. The images of the second row come from a pathological retinal image and those of the first row from a normal retinal image. Fig. 6(a) is the probability map, Fig. 6(b) the segmentation result, Fig. 6(c) the manual segmentation in set A, and Fig. 6(d) the manual segmentation in set B.
Fig. 7(a) gives the ROC curve of the results on the DRIVE database, and Fig. 7(b) the ROC curve of the results on the STARE database. To compare the differences between the two manual segmentations, set A and set B, the true positive rate and false positive rate obtained from the probability map thresholded at p = 0.5 are also given, taking set A and set B in turn as the reference segmentation.
It can be seen intuitively from Fig. 6 and Fig. 7 that the segmentation method of the present invention achieves good results.
Table 1 compares the performance of the method of the present invention with existing methods.
Experiments were carried out with k = 10, 15 and 20; the results are listed in Table 1. At the same time, the results of the present invention are compared with the extraction methods of Jiang et al., Staal et al. and Soares et al.
Table 1. Performance comparison of the different methods
From the results in Table 1 it can be seen that the method of the present invention obtains good results on both the DRIVE and STARE databases. By comparison, the performance of the present method is comparable to that of Soares et al. and better than the methods of Jiang et al. and Staal et al. This demonstrates the validity of the features extracted with the NSCT: the present invention provides a new kind of retinal feature.
Fig. 8 compares the segmentation result of the present invention on one image of the DRIVE database with the result of Soares et al. Fig. 8(a) is the manual segmentation of set A, Fig. 8(b) the vessel segmentation of the present invention, Fig. 8(c) the result of Soares et al., Fig. 8(d) the result of Soares et al. minus the vessel segmentation of the present invention, Fig. 8(e) the result of Soares et al. minus the manual segmentation of set A, and Fig. 8(f) the segmentation result of the present invention minus the manual segmentation of set A. In Fig. 8(d), the gray area represents the thick-vessel detections shared by the result of Soares et al. and the present method, the white area represents regions detected by the method of Soares et al. but not by the thick-vessel detection of the present method, and the black area represents regions not detected by the method of Soares et al. but detected by the present method.
It can be seen from Fig. 8(e) and Fig. 8(f) that the vessels segmented by the existing method of Soares et al. are generally thicker than those in the ground truth, and that the result of the present method locates the vessel edges better than the method of Soares et al.; this shows that the localisation of thick-vessel edges in the method of Soares et al. is inaccurate. It can also be seen from Fig. 8(d) that the differences between the results of the present method and of Soares et al. are concentrated mainly at the boundaries of thick vessels.
Comparing Fig. 8(b) and Fig. 8(c) in the red elliptical region of Fig. 8(a), it can be found that in the result of Soares et al. the gap between two adjacent thick vessels is also judged to be vessel, whereas the result of the present method does not have this problem.
Claims (2)
1. A retinal image segmentation method based on NSCT feature extraction and supervised classification, comprising the following steps:
One. Feature extraction steps
(1) for the retina training image and the retinal image to be segmented, obtain the region of interest from their red components;
(2) perform iterative edge expansion of the region of interest on the green components of the retina training image and of the retinal image to be segmented, respectively;
(3) apply the NSCT to the expanded retina training image and retinal image to be segmented, respectively, decomposing each into i levels with j directional sub-bands per level;
(4) extract a one-dimensional feature from the j directional sub-band coefficients of each level, extract the features level by level to form feature vectors, and normalize the feature vectors;
Two. Classifier training and segmentation steps
1) build training samples from the normalized feature vectors of the retina training image;
2) select a classifier, train it with the training samples, and input the normalized feature vectors of the retinal image to be segmented into the classifier to segment the retinal image to be segmented.
2. The method according to claim 1, wherein the extraction in step (3) of a one-dimensional feature from the j directional sub-band coefficients of each level and the level-by-level formation of the feature vector are carried out as follows:
(3a) for the j directional sub-band coefficients of each decomposition level, use the comparison between the vessel gray level and the background gray level in the retinal image to select the feature: if the vessel gray level is lower than the background gray level, take the minimum coefficient as the feature; if the vessel gray level is higher than the background gray level, take the maximum coefficient as the feature;
(3b) following step (3a), perform the same feature extraction level by level, and assemble the obtained features into a feature vector;
(3c) add the gray value of the green component of the retina training image and of the retinal image to be segmented to the feature vector as a one-dimensional feature, obtaining the final feature vector v.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN200810232337XA CN101667289B (en) | 2008-11-19 | 2008-11-19 | Retinal image segmentation method based on NSCT feature extraction and supervised classification |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101667289A true CN101667289A (en) | 2010-03-10 |
CN101667289B CN101667289B (en) | 2011-08-24 |
Legal Events
Code | Title | Description |
---|---|---|
C06 / PB01 | Publication | |
C10 / SE01 | Entry into force of request for substantive examination | |
C14 / GR01 | Grant of patent or utility model | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2011-08-24; Termination date: 2014-11-19 |
EXPY | Termination of patent right or utility model | |