CN101540047A - Texture image segmentation method based on independent Gaussian hybrid model - Google Patents
- Publication number
- CN101540047A CN101540047A CN200910022288A CN200910022288A CN101540047A CN 101540047 A CN101540047 A CN 101540047A CN 200910022288 A CN200910022288 A CN 200910022288A CN 200910022288 A CN200910022288 A CN 200910022288A CN 101540047 A CN101540047 A CN 101540047A
- Authority
- CN
- China
- Prior art keywords
- texture image
- theta
- sigma
- layer
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses a texture image segmentation method based on an independent Gaussian mixture model, comprising the following steps: simultaneously applying a three-level wavelet transform, dual-tree complex wavelet transform, and Contourlet transform to the training texture images; extracting the corresponding training texture image features; selecting features on each level with an immune clonal algorithm; performing unsupervised learning of the Gaussian mixture model on each level of each training image, adaptively obtaining the corresponding component number and thereby the parameters of the Gaussian mixture model; simultaneously applying the wavelet, dual-tree complex wavelet, and Contourlet transforms to the test texture images; computing the final likelihood value of each level from the transform coefficients and the component number; obtaining the initial segmentation result by comparing the likelihood values of the textures; and obtaining the final segmentation result through multiscale fusion of the initial segmentation. The invention yields good segmentation-region consistency, complete information retention, and accurate edge localization, and can be used for image texture recognition.
Description
Technical field
The invention belongs to the field of image processing and relates to a texture image segmentation method that can be used for image understanding and recognition.
Background technology
Texture image analysis and segmentation is one of the most classical research topics in image processing and computer vision. It plays an important role in national defense and the national economy, and a critical role in problems such as image classification, image retrieval, image understanding, and target recognition. The purpose of texture segmentation is to divide an image into homogeneous regions and to determine the boundaries between them. Regional consistency in a texture image is represented by the consistency of certain texture features within a region, so segmentation must be carried out on one or more features. The extraction of texture features is therefore a vital factor affecting texture image segmentation.
At present, texture feature extraction methods broadly reduce to three classes: statistical, space/frequency-domain, and model-based. Statistical texture properties consider the spatial distribution of gray levels in the texture and can achieve good results in representing regional consistency. Multiscale, multichannel texture analysis in the space/frequency domain corresponds to the human visual process; it can analyze the image at different scales and thereby improve the accuracy of edge localization. Model-based methods assume that the texture is formed by a distribution model controlled by certain parameters; the widespread use of the Markov random field model in texture image segmentation in recent years has fully proved its validity. In 2001, Raula combined Markov random fields and Gaussian mixture models, proposing a new texture segmentation method that characterizes texture with Markov random field parameters and performs the segmentation with a Gaussian mixture model. Such methods all start from an initial segmentation and then merge the initial results into a better final result. However, because they train the model with the EM algorithm, they are sensitive to initialization, which makes the segmentation results non-robust.
Summary of the invention
The objective of the invention is to overcome the above deficiencies of the prior art by proposing a texture image segmentation method based on an independent Gaussian mixture model, so as to improve the robustness of the image segmentation result.
The technical scheme realizing the purpose of the invention makes full use of the good regional consistency of statistical methods and the edge-localization accuracy of the multiscale character of space/frequency-domain methods, and adopts a model-based method to realize the final segmentation of the texture image. The specific implementation steps are as follows:
(1) Simultaneously apply the wavelet transform, dual-tree complex wavelet transform, and Contourlet transform to the training texture images, and extract the corresponding training texture image features on each level;
(2) On each level, select the extracted features with an immune clonal algorithm;
(3) Perform unsupervised learning of a finite Gaussian mixture model for each training texture image on each level, adaptively obtaining the corresponding component number k, 1≤k≤10, and thereby the parameters of the Gaussian mixture model;
(4) Apply the wavelet, dual-tree complex wavelet, and Contourlet transforms to the test texture images, and compute the final likelihood value of each level from the transform coefficients and the component number k;
(5) By the maximum a posteriori criterion, obtain the initial segmentation result by comparing the likelihood values of the textures;
(6) According to the Bayesian criterion, obtain the final segmentation result from the initial segmentation result through multiscale fusion.
Compared with the prior art, the invention has the following advantages:
1. Because the immune clonal feature selection algorithm selects, from the initial features, those whose marginal distributions approximately satisfy a Gaussian distribution, training features that effectively characterize the image are obtained;
2. Because the unsupervised Gaussian mixture model learning method replaces the traditional EM algorithm for training the features, better model parameters are obtained; the EM algorithm's sensitivity to initialization is overcome, improving segmentation robustness;
3. Because statistical features and multiscale frequency-domain features are extracted together and combined with the Gaussian mixture model, the segmentation result maintains good regional consistency and edge-localization accuracy.
Description of drawings
Fig. 1 is the flow chart of the invention;
Fig. 2 shows the four training texture images adopted by the invention;
Fig. 3 shows the simulation results of the invention on the first test texture image;
Fig. 4 shows the simulation results of the invention on the second test texture image;
Fig. 5 shows the simulation results of the invention on the third test texture image;
Fig. 6 shows the simulation results of the invention on the fourth test texture image.
Embodiment
With reference to Fig. 1, the specific implementation steps of the invention are as follows:
Step 1: Simultaneously apply a 3-level wavelet transform, a 3-level dual-tree complex wavelet transform (DTCWT), and a 3-level Contourlet transform to the training texture images, and extract the 15 required features on each level: the 3 high-frequency sub-band features of the wavelet transform, the 6 directional modulus features of the dual-tree complex wavelet transform, the 4 high-frequency sub-band features of the Contourlet transform, and the mean and variance features computed in 3×3 windows of the wavelet low-frequency sub-band. The wavelet transform adopts the Haar wavelet; the dual-tree complex wavelet transform adopts the near_sym_b and qshift_b bases; the Contourlet transform uses the '9-7' pyramid decomposition and the directional filter bank.
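The wavelet part of this per-level feature extraction can be sketched with a plain Haar decomposition in numpy (the Haar wavelet is the one named above; the DTCWT and Contourlet features would need dedicated toolboxes, so this minimal sketch covers only the wavelet features, and the helper names are illustrative):

```python
import numpy as np

def haar_level(img):
    """One level of a 2-D Haar wavelet transform.

    Returns the low-frequency sub-band LL and the three
    high-frequency sub-bands LH, HL, HH.
    """
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a + b - c - d) / 4.0
    hl = (a - b + c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def wavelet_features(img, levels=3):
    """Per-level magnitude features of the three high-frequency
    sub-bands, plus mean/variance of the final low-frequency band."""
    feats = []
    ll = img.astype(float)
    for _ in range(levels):
        ll, lh, hl, hh = haar_level(ll)
        # one mean-magnitude feature per high-frequency sub-band
        feats.append([np.abs(s).mean() for s in (lh, hl, hh)])
    # low-frequency mean and variance appended on the coarsest level
    feats[-1].extend([ll.mean(), ll.var()])
    return feats
```

In the full method these three wavelet features per level would be concatenated with the six DTCWT and four Contourlet features to form the 15-dimensional vector.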
Step 2: On each level j, use the immune clonal algorithm to select from the D extracted features a feature combination feature{j} of length u that maximizes the affinity, i.e. the mean distance between the samples of the C classes computed from that antibody. In the immune clonal feature selection algorithm, the population size is 10, the individual code length is the feature dimension, u = 15, the mutation probability is p_m = 1/u, the clone scale is 5 times the population size, and the termination condition is a maximum evolutionary generation of 100. The feature combination is selected as follows:
(2a) Generate the initial population: randomly produce N_p feature combinations as the initial antibody population A(0); each antibody represents one feature combination. Binary coding is adopted, with gene string length equal to the feature-vector length u. A gene position of 1 indicates that the corresponding feature component is selected; 0 indicates that it is not.
(2b) Calculate affinity: decode each antibody into its corresponding feature combination, obtain the new training sample set, and compute the corresponding affinity {J(A(0))} with the affinity formula.
(2c) Judge whether the iteration stopping criterion is satisfied: the termination condition is an iteration count of 100, i.e. the maximum evolutionary generation.
(2d) Clone: clone the current k-th generation parent population A(k) to obtain A′(k). The clone scale of each antibody may be allocated in proportion to the antibody-antigen affinity, or set to a fixed integer; here it is chosen as 5 times the population size.
(2e) Clonal mutation: apply the mutation operation to A′(k) with the mutation probability p_m = 1/u, obtaining A″(k).
(2f) Calculate affinity: decode each individual of the current population A″(k) into its corresponding feature combination, thereby obtaining new training samples, and compute each individual's affinity {J(A″(k))} with the affinity formula.
(2g) Immune clonal selection: within each sub-population, if there exists a mutated antibody b = max{J(a_ij) | j = 2, 3, …, q_i − 1} such that J(a_i) < J(b), a_i ∈ A(k), select individual b to enter the new parent population; that is, individuals with larger affinity are selected in a certain proportion to form the next-generation population A(k+1).
(2h) Calculate affinity: from the coding of each individual in the population, obtain the new feature subset and compute the affinity {J(A(k+1))} of population A(k+1) with the affinity formula.
(2i) Set k = k + 1 and return to (2c).
The affinity J measures the mean distance between the C classes; it is computed from μ_i and μ_j, the feature mean vectors of classes i and j, and Σ_i and Σ_j, the corresponding class covariance matrices.
The required features are selected on each level through steps (2a)-(2i); the feature selection of each level is carried out independently.
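The clonal selection loop of steps (2a)-(2i) might be sketched as follows. Since the exact affinity formula is not reproduced above, the sketch assumes a simplified affinity that rewards large mean distance between the class means of the selected features (the patent's affinity also involves the class covariance matrices); the mutation is likewise simplified to a single bit swap per clone, and all function names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def affinity(mask, X, y):
    """Simplified affinity of a binary feature mask: the mean pairwise
    distance between class mean vectors over the selected features."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask.astype(bool)]
    mus = [Xs[y == c].mean(axis=0) for c in np.unique(y)]
    dists = [np.linalg.norm(mus[i] - mus[j])
             for i in range(len(mus)) for j in range(i + 1, len(mus))]
    return float(np.mean(dists))

def clonal_select(X, y, u, pop=10, clone_scale=5, gens=100):
    """Immune clonal selection of a u-feature subset (sketch)."""
    D = X.shape[1]
    def random_mask():
        m = np.zeros(D, dtype=int)
        m[rng.choice(D, size=u, replace=False)] = 1
        return m
    A = [random_mask() for _ in range(pop)]        # initial antibodies
    for _ in range(gens):
        newA = []
        for a in A:
            clones = [a.copy() for _ in range(clone_scale)]
            for c in clones:
                # simplified mutation: swap one selected bit with one
                # unselected bit (keeps exactly u features selected)
                on, off = np.flatnonzero(c == 1), np.flatnonzero(c == 0)
                if on.size and off.size:
                    c[rng.choice(on)] = 0
                    c[rng.choice(off)] = 1
            # clonal selection: keep the fittest of parent and clones
            newA.append(max(clones + [a], key=lambda m: affinity(m, X, y)))
        A = newA
    return max(A, key=lambda m: affinity(m, X, y))
```

On data where one feature clearly separates the classes, the loop should converge to a mask that keeps it.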
Step 3: Perform unsupervised learning of the Gaussian mixture model for each training image on every level, adaptively obtaining the corresponding component number k, 1≤k≤10, and thereby the parameter estimates of the Gaussian mixture model.
The independent Gaussian mixture model assumes that the feature coefficients are mutually independent across all scales and obey a Gaussian mixture model on each scale.
The concrete implementation steps are as follows:
(3a) For the training texture image data y = [y_1, …, y_d], where d is the data dimension, the probability density function of a k-component Gaussian mixture model is set as

p(y|θ) = Σ_{m=1}^{k} α_m p(y|θ_m)

where α_1, …, α_k are the mixing probabilities, θ ≡ {θ_1, …, θ_k, α_1, …, α_k} is the entire parameter set required to determine the mixture model, and each θ_m in the parameter set consists of a group of parameters that jointly determine the m-th component of θ;
(3b) According to the Gaussian probability density function, the d-dimensional probability density of the m-th component determined by θ_m is

p(y|θ_m) = N(μ_m, C_m)

where θ_m = (μ_m, C_m), μ_m and C_m are respectively the mean vector and covariance matrix of the training texture image data, and N(μ_m, C_m) is the Gaussian distribution function with mean vector μ_m and covariance matrix C_m;
(3c) According to the minimum-description-length criterion, the parameters θ of the Gaussian mixture model are estimated by optimizing Length(θ, Y) = Length(θ) + Length(Y|θ), where Length denotes a code length. Abbreviating Length(θ, Y) as L(θ, Y), the estimation model for parameter θ is

L(θ, Y) = (N/2) Σ_{m: α_m > 0} log(n α_m / 12) + (k_nz / 2) log(n / 12) + k_nz (N + 1) / 2 − log p(Y|θ)

In this formula, α_m is the mixing probability; k_nz is the number of components for which α_m is non-zero; n is the data length of each dimension of the texture image data; N is the dimension of θ_m; −log p(Y|θ) is the code length of the training texture image data; n α_m is the number of training data points generated by the m-th component θ_m of the mixture model; (N/2) log(n α_m / 12) is the optimum coding length of each θ_m; and the sum over m is the code length of all θ_m;
(3d) Given the non-zero component number k_nz, the expectation and maximization steps of the EM algorithm are used to minimize L(θ, Y). The iterative formulas for the parameters of the Gaussian mixture model are, for m = 1, 2, …, k, 1≤k≤10:

α̂_m(t+1) = max{0, Σ_i w_m^(i) − N/2} / Σ_{j=1}^{k} max{0, Σ_i w_j^(i) − N/2}

μ̂_m(t+1) = Σ_i w_m^(i) y^(i) / Σ_i w_m^(i)

Ĉ_m(t+1) = Σ_i w_m^(i) (y^(i) − μ̂_m(t+1)) (y^(i) − μ̂_m(t+1))^T / Σ_i w_m^(i)

where α̂_m(t+1), μ̂_m(t+1), and Ĉ_m(t+1) are respectively the estimates of the mixing probability, mean vector, and covariance matrix at iteration step t + 1. According to the Bayesian criterion,

w_m^(i) = α̂_m p(y^(i)|θ̂_m) / Σ_{j=1}^{k} α̂_j p(y^(i)|θ̂_j)

The EM algorithm assumes that the texture image data y are incomplete; the missing part is z, and y and z together constitute the complete image data. w_m^(i) is the posterior probability, under the current estimate of parameter θ, that data point y^(i) was produced by the m-th component, and z_m^(i) is the missing indicator that data point y^(i) was produced by the m-th component of θ.
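A minimal numpy sketch of the EM iteration in step (3d), using the plain responsibility and update formulas (the max{0, ·} component-annihilation term of the minimum-description-length criterion is omitted for brevity, so this is ordinary EM, not the full adaptive-k procedure):

```python
import numpy as np

def gaussian_pdf(Y, mu, C):
    """Multivariate normal density N(mu, C) evaluated at the rows of Y."""
    d = Y.shape[1]
    diff = Y - mu
    Cinv = np.linalg.inv(C)
    expo = -0.5 * np.einsum('ij,jk,ik->i', diff, Cinv, diff)
    norm = np.sqrt(((2 * np.pi) ** d) * np.linalg.det(C))
    return np.exp(expo) / norm

def em_gmm(Y, k, iters=50, seed=0):
    """Plain EM for a k-component Gaussian mixture: E-step
    responsibilities w, M-step alpha/mu/C updates."""
    rng = np.random.default_rng(seed)
    n, d = Y.shape
    alpha = np.full(k, 1.0 / k)
    mu = Y[rng.choice(n, k, replace=False)]           # init at data points
    C = np.stack([np.cov(Y.T) + 1e-6 * np.eye(d)] * k)
    for _ in range(iters):
        # E-step: w[i, m] = P(component m | y_i), by Bayes' rule
        dens = np.column_stack([alpha[m] * gaussian_pdf(Y, mu[m], C[m])
                                for m in range(k)])
        w = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates
        nm = w.sum(axis=0)
        alpha = nm / n
        mu = (w.T @ Y) / nm[:, None]
        for m in range(k):
            diff = Y - mu[m]
            C[m] = (w[:, m, None] * diff).T @ diff / nm[m] + 1e-6 * np.eye(d)
    return alpha, mu, C
```

The 1e-6 diagonal loading is a standard regularization to keep the covariance estimates invertible; it is not part of the patent's formulas.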
Step 4: Simultaneously apply the wavelet transform, dual-tree complex wavelet transform, and Contourlet transform to the test texture image, each with 3 decomposition levels. Let the transform coefficients of the test texture image be y = [y^(1), …, y^(n)], where n is the data length. Under the independent Gaussian mixture model, the transform coefficients are independent and identically distributed, so the log-likelihood function of the k-component independent Gaussian mixture model is

log p(y|θ̂) = Σ_{i=1}^{n} log Σ_{m=1}^{k} α̂_m p(y^(i)|θ̂_m)

where y is the transform coefficient vector and θ̂ is the model parameter obtained in step 3. The likelihood value of each level is computed from this formula.
Step 5: By the maximum a posteriori criterion, compute the initial segmentation result on each scale from the multiscale likelihood values.
The likelihood values of each scale can be written as a set {Lhood_c^{j,l}}: on scale j, the likelihood that the coefficient at position l is labeled c, where c ∈ {1, 2, …, N_c} and N_c is the total number of textures. According to the maximum a posteriori criterion, the class with the maximum likelihood for each coefficient on the scale is taken as that coefficient's class label MLseg_c^{j,l}, i.e.

MLseg^{j,l} = arg max_{c ∈ {1, …, N_c}} Lhood_c^{j,l}

The set of all class labels on a scale constitutes the initial segmentation result on that scale. Computing the coefficient class labels on all scales and writing the per-scale initial segmentation results as the parameter set {MLseg^1, MLseg^2, MLseg^3} yields the initial segmentation result on each scale.
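The maximum-likelihood labeling of step 5 can be sketched for one scale with 1-D, diagonal Gaussian mixtures per class (the array shapes and the (alphas, mus, sigmas) model layout are illustrative assumptions, not the patent's data format):

```python
import numpy as np

def class_loglik(y, alphas, mus, sigmas):
    """Pointwise log-likelihood of 1-D coefficients y under a 1-D
    Gaussian mixture with weights alphas, means mus, std devs sigmas."""
    y = np.asarray(y, dtype=float)[..., None]
    comp = alphas / (np.sqrt(2 * np.pi) * sigmas) \
        * np.exp(-0.5 * ((y - mus) / sigmas) ** 2)
    return np.log(comp.sum(axis=-1))

def map_segment(coeffs, models):
    """Initial segmentation on one scale: label each coefficient with
    the class whose mixture model gives the largest likelihood."""
    lhood = np.stack([class_loglik(coeffs, *m) for m in models])
    return np.argmax(lhood, axis=0) + 1   # class labels 1..N_c
```

Stacking the per-class likelihood maps and taking the argmax is exactly the MLseg rule above, applied to every coefficient position at once.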
Step 6: Perform multiscale fusion, fusing the initial segmentation results on each scale into the final segmentation result by the Bayesian criterion.
In Bayesian-criterion segmentation, each class label is regarded as a random variable taking values 1, 2, …, N_c; the joint probability distribution of the class labels MLseg_c^{j,l} of level j is completely determined by the class labels MLseg_c^{j−1,l} of the previous level j − 1. According to the hidden Markov model, each class label MLseg_c^{j,l} is assigned a context vector V, determined by the information in MLseg_c^{j−1,l}. The specific implementation steps are as follows:
(6a) Determine the context vector V: assign to V the majority class value in the eight-neighborhood of the father node, on scale j, corresponding to the node on scale j − 1;
(6b) From the multiscale likelihood values {Lhood^1, Lhood^2, Lhood^3} obtained in step 5 and the context vector V, obtain the conditional posterior probability on scale j:

p(c_i = c | d_i^j, v_i^j) = f(d_i^j | c) p_j(c | v_i^j) / Σ_{c′=1}^{N_c} f(d_i^j | c′) p_j(c′ | v_i^j)

where c_i is the class label of pixel i, d_i^j is the coefficient at pixel i on scale j, v_i^j is the context vector V at pixel i on scale j, e_{j,c} denotes the probability on scale j that the class label takes value c, and p_j(c | v_i) denotes the probability on scale j that the class label takes value c when the context vector is v_i;
(6c) Update e_{j,c} and p_j(c | v_i);
(6d) Repeat steps (6b)-(6c) until the iteration stopping condition of the conditional posterior probability is reached, i.e. until the change falls within the permissible error;
(6e) Repeat steps (6a)-(6d) down to scale j = 0, obtaining the final segmentation result.
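A much-simplified sketch of the context-fusion idea in step 6: the context for each child node is the majority label in the 3×3 neighborhood of its parent, and the context class's likelihood is boosted before re-labeling. The iterative posterior update of steps (6b)-(6d) is replaced here by a single fixed weight, an assumption made purely for illustration:

```python
import numpy as np

def context_vector(parent_labels):
    """Context V for each child node: the majority class label in the
    3x3 neighborhood of its parent on the coarser scale (a simplified
    reading of step (6a))."""
    H, W = parent_labels.shape
    padded = np.pad(parent_labels, 1, mode='edge')
    ctx = np.empty((H, W), dtype=int)
    for i in range(H):
        for j in range(W):
            win = padded[i:i + 3, j:j + 3].ravel()
            vals, counts = np.unique(win, return_counts=True)
            ctx[i, j] = vals[np.argmax(counts)]
    # each parent node has a 2x2 block of children on the finer scale
    return np.kron(ctx, np.ones((2, 2), dtype=int))

def fuse(fine_lhood, parent_labels, context_weight=2.0):
    """Fuse fine-scale likelihoods (shape (N_c, H, W)) with the
    coarser-scale labels by boosting the context class's likelihood,
    then re-labeling by argmax."""
    ctx = context_vector(parent_labels)
    lhood = fine_lhood.copy()
    for c in range(lhood.shape[0]):
        lhood[c][ctx == c + 1] *= context_weight
    return np.argmax(lhood, axis=0) + 1
```

With a strong enough context weight, an ambiguous fine-scale likelihood map inherits the region structure of the coarser segmentation, which is the intended effect of the fusion.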
The effect of the invention can be further illustrated by the following simulation:
1. Simulation content
The method of the invention is applied to the 256 × 256 synthetic texture images shown in Fig. 2; the training images are 64 × 64-pixel patches cut from the images.
2. Simulation results
A. Fig. 3 shows the segmentation results on scale 3 for the test images of Fig. 2, where (3a) is the segmentation result of the existing wavelet-domain three-sub-band-feature training method; (3b) is the result of the proposed multi-feature-selection + EM training algorithm; (3c) is the result of the proposed multi-feature + independent Gaussian mixture model training algorithm.
B. Fig. 4 shows the segmentation results on scale 2 for the test images of Fig. 2, where (4a) is the segmentation result of the existing wavelet-domain three-sub-band-feature training method; (4b) is the result of the proposed multi-feature-selection + EM training algorithm; (4c) is the result of the proposed multi-feature + independent Gaussian mixture model training algorithm.
C. Fig. 5 shows the segmentation results on scale 1 for the test images of Fig. 2, where (5a) is the segmentation result of the existing wavelet-domain three-sub-band-feature training method; (5b) is the result of the proposed multi-feature-selection + EM training algorithm; (5c) is the result of the proposed multi-feature + independent Gaussian mixture model training algorithm.
D. Fig. 6 shows the final segmentation results for the test images of Fig. 2, combining pixel-level segmentation with multiscale fusion, where (6a) is the segmentation result of the existing wavelet-domain three-sub-band-feature training method; (6b) is the result of the proposed multi-feature-selection + EM training algorithm; (6c) is the result of the proposed multi-feature + independent Gaussian mixture model training algorithm.
As can be seen from Fig. 3, Fig. 4, Fig. 5, and Fig. 6, the invention achieves better segmentation results than the wavelet-domain three-sub-band-feature training method and the multi-feature-selection + EM training method in terms of regional consistency and edge accuracy.
Claims (4)
1. A texture image segmentation method based on an independent Gaussian mixture model, comprising the steps of:
(1) simultaneously applying the wavelet transform, dual-tree complex wavelet transform, and Contourlet transform to the training texture images, and extracting the corresponding training texture image features on each level;
(2) on each level, selecting the extracted features with an immune clonal algorithm;
(3) performing unsupervised learning of a finite Gaussian mixture model for each training texture image on each level, adaptively obtaining the corresponding component number k, 1≤k≤10, and thereby the parameters of the Gaussian mixture model;
(4) applying the wavelet, dual-tree complex wavelet, and Contourlet transforms to the test texture images, and computing the final likelihood value of each level from the transform coefficients and the component number k;
(5) by the maximum a posteriori criterion, obtaining the initial segmentation result by comparing the likelihood values of the textures;
(6) according to the Bayesian criterion, obtaining the final segmentation result from the initial segmentation result through multiscale fusion.
2. The image segmentation method according to claim 1, wherein the features extracted on each level in step (1) are, on every level, the 3 high-frequency sub-band features of the wavelet transform, the 6 directional modulus features of the dual-tree complex wavelet transform, the 4 high-frequency sub-band features of the Contourlet transform, and the mean and variance features in 3 × 3 windows of the wavelet low-frequency sub-band, 15 features in total.
3. The image segmentation method according to claim 1, wherein the wavelet transform adopts the Haar wavelet, the dual-tree complex wavelet transform adopts the near_sym_b and qshift_b bases, the Contourlet transform uses the '9-7' pyramid decomposition and the directional filter bank, and the decomposition level number of all three transforms is 3.
4. The image segmentation method according to claim 1, wherein step (3) is carried out as follows:
(4a) for the training texture image data y = [y_1, …, y_d], where d is the data dimension, setting the probability density function of a k-component Gaussian mixture model as

p(y|θ) = Σ_{m=1}^{k} α_m p(y|θ_m)

where α_1, …, α_k are the mixing probabilities, θ ≡ {θ_1, …, θ_k, α_1, …, α_k} is the entire parameter set required to determine the mixture model, and each θ_m in the parameter set consists of a group of parameters that jointly determine the m-th component of θ;
(4b) according to the Gaussian probability density function, the d-dimensional probability density of the m-th component determined by θ_m is

p(y|θ_m) = N(μ_m, C_m)

where θ_m = (μ_m, C_m), μ_m and C_m are respectively the mean vector and covariance matrix of the training texture image data, and N(μ_m, C_m) is the Gaussian distribution function with mean vector μ_m and covariance matrix C_m;
(4c) according to the minimum-description-length criterion, estimating the parameters θ of the Gaussian mixture model by optimizing Length(θ, Y) = Length(θ) + Length(Y|θ), where Length denotes a code length; abbreviating Length(θ, Y) as L(θ, Y), the estimation model for parameter θ is

L(θ, Y) = (N/2) Σ_{m: α_m > 0} log(n α_m / 12) + (k_nz / 2) log(n / 12) + k_nz (N + 1) / 2 − log p(Y|θ)

in which α_m is the mixing probability, k_nz is the number of components for which α_m is non-zero, n is the data length of each dimension of the texture image data, N is the dimension of θ_m, −log p(Y|θ) is the code length of the training texture image data, n α_m is the number of training data points generated by the m-th component θ_m of the mixture model, (N/2) log(n α_m / 12) is the optimum coding length of each θ_m, and the sum over m is the code length of all θ_m;
(4d) given the non-zero component number k_nz, using the expectation and maximization steps of the EM algorithm to minimize L(θ, Y); the iterative formula for the mixing probability of the Gaussian mixture model is, for m = 1, 2, …, k, 1≤k≤10:

α̂_m(t+1) = max{0, Σ_i w_m^(i) − N/2} / Σ_{j=1}^{k} max{0, Σ_i w_j^(i) − N/2}

where α̂_m(t+1) is the estimate of the mixing probability at iteration step t + 1, and, by the Bayesian criterion,

w_m^(i) = α̂_m p(y^(i)|θ̂_m) / Σ_{j=1}^{k} α̂_j p(y^(i)|θ̂_j)

in which z_m^(i) indicates that data point y^(i) is produced by the m-th component of θ, p(y^(i)|θ̂_m) is the conditional probability of data point y^(i) under the iterate θ̂_m obtained at step t, and w_m^(i) is the posterior probability, under the current estimate of parameter θ, that data point y^(i) was produced by the m-th component.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN200910022288A (CN101540047A) | 2009-04-30 | 2009-04-30 | Texture image segmentation method based on independent Gaussian hybrid model |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN200910022288A (CN101540047A) | 2009-04-30 | 2009-04-30 | Texture image segmentation method based on independent Gaussian hybrid model |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN101540047A (en) | 2009-09-23 |
Family
ID=41123223
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN200910022288A (Pending) | CN101540047A (en) | 2009-04-30 | 2009-04-30 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN101540047A (en) |
Cited By (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
CN101866490A (en) * | 2010-06-30 | 2010-10-20 | 西安电子科技大学 | Image segmentation method based on differential immune clone clustering |
CN101937566A (en) * | 2010-09-20 | 2011-01-05 | 西安电子科技大学 | SAR image segmentation method combining background information and maximum posterior marginal probability standard |
CN101964108A (en) * | 2010-09-10 | 2011-02-02 | 中国农业大学 | Real-time on-line system-based field leaf image edge extraction method and system |
CN101980286A (en) * | 2010-11-12 | 2011-02-23 | 西安电子科技大学 | Method for reducing speckles of synthetic aperture radar (SAR) image by combining dual-tree complex wavelet transform with bivariate model |
CN102063627A (en) * | 2010-12-31 | 2011-05-18 | 宁波大学 | Method for recognizing natural images and computer generated images based on multi-wavelet transform |
CN102096819A (en) * | 2011-03-11 | 2011-06-15 | 西安电子科技大学 | Method for segmenting images by utilizing sparse representation and dictionary learning |
CN102236898A (en) * | 2011-08-11 | 2011-11-09 | 魏昕 | Image segmentation method based on t mixed model with infinite component number |
CN102637298A (en) * | 2011-12-31 | 2012-08-15 | 辽宁师范大学 | Color image segmentation method based on Gaussian mixture model and support vector machine |
CN104392458A (en) * | 2014-12-12 | 2015-03-04 | 哈尔滨理工大学 | Image segmentation method based on space limitation neighborhood hybrid model |
CN104463222A (en) * | 2014-12-20 | 2015-03-25 | 西安电子科技大学 | Polarimetric SAR image classification method based on feature vector distribution characteristic |
CN105765562A (en) * | 2013-12-03 | 2016-07-13 | 罗伯特·博世有限公司 | Method and device for determining a data-based functional model |
CN107067359A (en) * | 2016-06-08 | 2017-08-18 | 电子科技大学 | Contourlet area image sharing methods based on Brownian movement and DNA encoding |
CN105869178B (en) * | 2016-04-26 | 2018-10-23 | 昆明理工大学 | A kind of complex target dynamic scene non-formaldehyde finishing method based on the convex optimization of Multiscale combination feature |
CN109886944A (en) * | 2019-02-02 | 2019-06-14 | 浙江大学 | A kind of white matter high signal intensity detection and localization method based on multichannel chromatogram |
CN110298855A (en) * | 2019-06-17 | 2019-10-01 | 上海大学 | A kind of sea horizon detection method based on gauss hybrid models and texture analysis |
CN113241177A (en) * | 2021-05-19 | 2021-08-10 | 上海宝藤生物医药科技股份有限公司 | Method, device and equipment for evaluating immunity level and storage medium |
- 2009-04-30: CN CN200910022288A patent/CN101540047A/en active Pending
Cited By (24)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
CN101866490A (en) * | 2010-06-30 | 2010-10-20 | 西安电子科技大学 | Image segmentation method based on differential immune clone clustering |
CN101866490B (en) * | 2010-06-30 | 2012-02-08 | 西安电子科技大学 | Image segmentation method based on differential immune clone clustering |
CN101964108B (en) * | 2010-09-10 | 2013-01-23 | 中国农业大学 | Real-time on-line system-based field leaf image edge extraction method and system |
CN101964108A (en) * | 2010-09-10 | 2011-02-02 | 中国农业大学 | Real-time on-line system-based field leaf image edge extraction method and system |
CN101937566A (en) * | 2010-09-20 | 2011-01-05 | 西安电子科技大学 | SAR image segmentation method combining background information and maximum posterior marginal probability standard |
CN101980286A (en) * | 2010-11-12 | 2011-02-23 | 西安电子科技大学 | Method for reducing speckles of synthetic aperture radar (SAR) image by combining dual-tree complex wavelet transform with bivariate model |
CN101980286B (en) * | 2010-11-12 | 2012-02-08 | 西安电子科技大学 | Method for reducing speckles of synthetic aperture radar (SAR) image by combining dual-tree complex wavelet transform with bivariate model |
CN102063627A (en) * | 2010-12-31 | 2011-05-18 | 宁波大学 | Method for recognizing natural images and computer generated images based on multi-wavelet transform |
CN102063627B (en) * | 2010-12-31 | 2012-10-24 | 宁波大学 | Method for recognizing natural images and computer generated images based on multi-wavelet transform |
CN102096819A (en) * | 2011-03-11 | 2011-06-15 | 西安电子科技大学 | Method for segmenting images by utilizing sparse representation and dictionary learning |
CN102096819B (en) * | 2011-03-11 | 2013-03-20 | 西安电子科技大学 | Method for segmenting images by utilizing sparse representation and dictionary learning |
CN102236898A (en) * | 2011-08-11 | 2011-11-09 | 魏昕 | Image segmentation method based on t mixed model with infinite component number |
CN102637298A (en) * | 2011-12-31 | 2012-08-15 | 辽宁师范大学 | Color image segmentation method based on Gaussian mixture model and support vector machine |
CN105765562A (en) * | 2013-12-03 | 2016-07-13 | 罗伯特·博世有限公司 | Method and device for determining a data-based functional model |
CN105765562B (en) * | 2013-12-03 | 2022-01-11 | 罗伯特·博世有限公司 | Method and device for obtaining a data-based function model |
CN104392458A (en) * | 2014-12-12 | 2015-03-04 | 哈尔滨理工大学 | Image segmentation method based on space limitation neighborhood hybrid model |
CN104392458B (en) * | 2014-12-12 | 2017-02-22 | 哈尔滨理工大学 | Image segmentation method based on space limitation neighborhood hybrid model |
CN104463222A (en) * | 2014-12-20 | 2015-03-25 | 西安电子科技大学 | Polarimetric SAR image classification method based on feature vector distribution characteristic |
CN105869178B (en) * | 2016-04-26 | 2018-10-23 | 昆明理工大学 | A kind of complex target dynamic scene non-formaldehyde finishing method based on the convex optimization of Multiscale combination feature |
CN107067359A (en) * | 2016-06-08 | 2017-08-18 | 电子科技大学 | Contourlet area image sharing methods based on Brownian movement and DNA encoding |
CN109886944A (en) * | 2019-02-02 | 2019-06-14 | 浙江大学 | White matter hyperintensity detection and localization method based on multichannel maps |
CN110298855A (en) * | 2019-06-17 | 2019-10-01 | 上海大学 | Sea-sky-line detection method based on Gaussian mixture model and texture analysis |
CN110298855B (en) * | 2019-06-17 | 2023-05-16 | 上海大学 | Sea-sky-line detection method based on Gaussian mixture model and texture analysis |
CN113241177A (en) * | 2021-05-19 | 2021-08-10 | 上海宝藤生物医药科技股份有限公司 | Method, device and equipment for evaluating immunity level and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101540047A (en) | Texture image segmentation method based on independent Gaussian hybrid model | |
CN110443143B (en) | Multi-branch convolutional neural network fused remote sensing image scene classification method | |
CN101329736B (en) | Image segmentation method based on feature selection and hidden Markov model | |
CN105389550B (en) | Remote sensing target detection method based on sparse guidance and saliency driving | |
CN108876796A (en) | Lane segmentation system and method based on fully convolutional neural network and conditional random field | |
CN108875816A (en) | Active learning sample selection strategy fusing reliability and diversity criteria | |
CN102169584A (en) | Remote sensing image change detection method based on watershed and treelet algorithms | |
CN104915676A (en) | Deep-level feature learning and watershed-based synthetic aperture radar (SAR) image classification method | |
CN101493935B (en) | Synthetic aperture radar image segmentation method based on shear wave hidden Markov model | |
CN106611423B (en) | SAR image segmentation method based on ridgelet filter and deconvolution structural model | |
Zhang et al. | A GANs-based deep learning framework for automatic subsurface object recognition from ground penetrating radar data | |
CN101447080A (en) | HMT image segmentation method based on nonsubsampled Contourlet transform | |
CN102945553B (en) | Remote sensing image segmentation method based on automatic differential clustering algorithm | |
CN107527023A (en) | Polarimetric SAR image classification method based on superpixel and topic model | |
CN111079847B (en) | Remote sensing image automatic labeling method based on deep learning | |
CN102122353A (en) | Method for segmenting images by using incremental dictionary learning and sparse representation | |
CN104252625A (en) | Sample-adaptive multi-feature weighted remote sensing image classification method | |
CN108647682A (en) | Brand logo detection and recognition method based on region convolutional neural network model | |
CN106203373B (en) | Face liveness detection method based on deep visual bag of words | |
CN106651884A (en) | Sketch structure-based mean field variational Bayes synthetic aperture radar (SAR) image segmentation method | |
CN105631469A (en) | Bird image recognition method using multilayer sparse coding features | |
CN103456017B (en) | Image segmentation method based on subset-based semi-supervised weighted kernel fuzzy clustering | |
CN105956610B (en) | Remote sensing image terrain classification method based on multi-layer coding structure | |
Wang et al. | A novel sparse boosting method for crater detection in the high resolution planetary image | |
CN102074013A (en) | Wavelet multi-scale Markov network model-based image segmentation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Open date: 20090923 |