CN101320467B - Multi-scale texture image segmentation method based on adaptive window fixing and propagation - Google Patents

Multi-scale texture image segmentation method based on adaptive window fixing and propagation

Info

Publication number
CN101320467B
CN101320467B · CN2008100182381A · CN200810018238A
Authority
CN
China
Prior art keywords
scale
class
label
image
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2008100182381A
Other languages
Chinese (zh)
Other versions
CN101320467A (en)
Inventor
Hou Biao (侯彪)
Liu Feng (刘凤)
Wang Shuang (王爽)
Jiao Licheng (焦李成)
Zhang Xiangrong (张向荣)
Ma Wenping (马文萍)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Discovery Turing Technology Xi'an Co ltd
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN2008100182381A priority Critical patent/CN101320467B/en
Publication of CN101320467A publication Critical patent/CN101320467A/en
Application granted granted Critical
Publication of CN101320467B publication Critical patent/CN101320467B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The present invention discloses a multi-scale texture image segmentation method based on adaptive window fixing and propagation. The process comprises the following steps: an image block n corresponding to each texture of the image to be segmented is extracted and wavelet-transformed, and its corresponding HMT model parameters θ_n are determined; the likelihood values of the corresponding data blocks at each wavelet decomposition scale of the image to be segmented and the likelihood values of its pixels are determined and combined to obtain the likelihood values likelihood_n^k required for the final fusion; the likelihood value at the coarsest fusion scale (k = 4) is found, and the corresponding segmentation label map at that scale is determined; the mark field at fusion scale k and the physical cluster center of each texture are determined; the multi-scale segmentation with adaptive window fixing and propagation is used to find the segmentation label map at the next fusion scale k − 1; and the final segmentation result is confirmed by judging whether the fusion scale of the current label map is zero. The method has the advantages of region consistency and good edge localization and can be used for the segmentation of images containing texture information.

Description

Multi-scale texture image segmentation method based on adaptive window fixing and propagation
Technical field
The invention belongs to the technical field of image processing, and in particular relates to multi-scale texture image segmentation. The method can be applied to the segmentation of texture images and of images containing texture information, such as synthetic aperture radar (SAR) images, remote sensing images, or medical images.
Background technology
A large number of things with distinct texture features exist in nature, such as large cornfields, forests, built-up urban districts, and so on. In practice, dividing and labeling these regions is very important for urban layout and planning, and texture image segmentation is precisely the process of partitioning an image into regions of the same texture. All segmentation methods strive to handle homogeneity and discontinuity well in this process, that is, the contradiction between region homogeneity and the determination of boundaries between different texture regions. Methods based on multi-scale ideas approximate the human visual system and can characterize and analyze an image at different scales, opening a new line of thought for resolving this contradiction in image segmentation. The multi-scale hidden Markov model, as a novel method, can effectively describe the statistical correlations of transform-domain coefficients across scales, within a scale, and across directions, and has therefore received wide attention from researchers in image analysis.
In 1996, Crouse et al. at Rice University (USA) connected the wavelet transform with the hidden Markov model and proposed the wavelet-domain hidden Markov model. In 1994, C. A. Bouman et al. had proposed a Bayesian segmentation method based on multi-scale random fields via Bayesian reconstruction of images; since then, researchers applying multi-scale hidden Markov models to image segmentation have all adopted Bayesian methods as the inter-scale fusion method for the initial segmentation. In 2001, Choi et al. proposed the wavelet-domain hidden Markov tree segmentation framework HMTseg; see H. Choi, R. G. Baraniuk. Multiscale Image Segmentation Using Wavelet-Domain Hidden Markov Models. IEEE Transactions on Image Processing, 2001, 10(9): 1309-1321. This framework constructs a context from the influence of class labels at the coarser scale on those at the finer scale, and adopts the sequential maximum a posteriori (SMAP) method to complete the fusion segmentation of the image; however, the context and fusion strategy it adopts preserve the region consistency of coarse-scale segmentation without corresponding edge accuracy. Fan Guoliang et al. summarized Bayesian fusion segmentation using the SMAP method under different context settings; see Fan G. L., Xia X. G. A joint multi-context and multi-scale approach to Bayesian image segmentation. IEEE Transactions on Geoscience and Remote Sensing, 2001, 39(12): 2680-2688. Although the fusion contexts adopted there take into account the high region consistency of coarse-scale segmentation and the good edge accuracy at fine scales during multi-scale fusion, they use the SMAP method to obtain the influence of the context on the segmentation at the next scale, and thus fail to achieve both good region consistency and accurate edges. Since then, researchers in this field have generally adopted improved context-weighting strategies for Bayesian fusion segmentation of images.
Texture images have the property of aggregation: textures belonging to the same class form a connected region, which has a geometric center that the present invention calls the physical cluster center. Katkovnik et al. proposed an algorithm based on local polynomial approximation and the intersection of confidence intervals, LPA-ICI; see Katkovnik V., Egiazarian K., Astola J. Adaptive window size image de-noising based on intersection of confidence intervals (ICI) rule. Journal of Mathematical Imaging and Vision, 2002, 16(3): 223-235. This algorithm is often applied in image denoising: by finding several adaptive windows for each noisy pixel of the image and approximating the pixel within each window, it achieves denoising.
These traditional multi-scale Bayesian fusion segmentation methods all exploit the relationships among class labels within a scale and between adjacent scales to construct a context, treating the relationships among all class labels equally, that is, adopting the SMAP method to obtain the weight with which the coarser scale influences the finer scale. They cannot make full use of the high region consistency of coarse-scale segmentation and the good edge accuracy of fine-scale segmentation in multi-scale segmentation, and so fail to resolve the contradiction between region consistency and edge accuracy. Moreover, when constructing the context, these methods only consider the influence of a class label's neighborhood information on the segmentation result, and do not exploit the aggregation property of texture images, namely that textures of the same class form a connected region with a physical cluster center.
Summary of the invention
The objective of the invention is to overcome the deficiencies of the above prior art by proposing a multi-scale texture image segmentation method based on adaptive window fixing and propagation, which makes full use of the high region consistency at coarse scales and the good edge localization at fine scales in multi-scale segmentation, achieving region consistency and edge accuracy simultaneously.
The technical scheme realizing the objective of the invention is as follows: combining the adaptive multi-scale window fixing-and-propagation method with the aggregation property of texture image regions, the initial segmentation of the texture image is obtained with the multi-scale wavelet-domain hidden Markov tree (HMT) method; the coarsest scale of the initial segmentation, that is, the smallest initial label map, is taken as the propagation baseline; and, using its corresponding mark field and the physical cluster centers of all texture classes, information is transmitted by the adaptive window fixing-and-propagation method, delivering the information of the coarser scale to the next scale and thereby influencing the segmentation at that scale. The specific implementation process is as follows:
(1) Input the image to be segmented; extract from it the training image block n corresponding to each texture, perform a wavelet transform to obtain the wavelet coefficients, and apply the expectation-maximization (EM) algorithm to obtain the HMT model parameters θ_n corresponding to these wavelet coefficients, where n ∈ N_n and N_n denotes the number of texture classes in the image to be segmented;
(2) According to the model parameters θ_n, perform a wavelet transform on the image to be segmented and obtain the likelihood value likelihood_n^j of each corresponding data block d at each wavelet decomposition scale, where j = {1, 2, 3, 4} indexes the wavelet decomposition scales from fine to coarse; perform Gaussian modeling on the training image block n extracted in step (1) to obtain the likelihood value likelihood_n^0 of each pixel of the image to be segmented, where 0 denotes the pixel-level scale;
(3) Combine the likelihood values likelihood_n^j of the data blocks d of the image to be segmented at each wavelet decomposition scale with the pixel likelihood values likelihood_n^0 to obtain the likelihood values likelihood_n^k, k = {0, 1, 2, 3, 4}, required for the final fusion, where k = 0 denotes the finest fusion scale, k = 1 the next coarser fusion scale, and so on up to k = 4, the coarsest fusion scale;
(4) Take the likelihood value likelihood_n^4 at the coarsest fusion scale k = 4 and apply the maximum-likelihood rule

$$c_s = \arg\max_{c_s \in \{1, 2, \dots, C\}} (\mathrm{likelihood}_n^4)_s$$

to obtain the segmentation label map {c_s | c_s ∈ {1, 2, …, C}} at the coarsest fusion scale, where s denotes the physical image coordinate and C = N_n;
(5) According to the segmentation label map, determine the mark field {a_s}, a_s ∈ A, A = {0, 1}, at fusion scale k measuring the reliability of its class labels, and apply the LPA-ICI algorithm to determine the physical cluster centers {(i_1, j_1), (i_2, j_2), …, (i_C, j_C)} of the texture classes at fusion scale k;
(6) According to the mark field and the physical cluster centers of the texture classes, apply the adaptive window fixing-and-propagation multi-scale segmentation method to obtain the segmentation label map {c_s | c_s ∈ {1, 2, …, C}} at the next fusion scale k − 1;
(7) Judge whether the obtained segmentation label map is the result at fusion scale k = 0; if it is, the texture image segmentation ends; otherwise, repeat steps (6)-(7) until the segmentation result at k = 0 is obtained.
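The control flow of steps (4)-(7) can be sketched as a coarse-to-fine driver loop. This is a structural illustration only: the helper `refine` stands in for steps (5)-(6), and all function names are this sketch's, not the patent's.

```python
def argmax_labels(likelihood):
    """likelihood: dict mapping class label -> 2-D list of scores.
    Returns the label map picking the maximum-likelihood class per site."""
    classes = sorted(likelihood)
    h, w = len(likelihood[classes[0]]), len(likelihood[classes[0]][0])
    return [[max(classes, key=lambda c: likelihood[c][i][j])
             for j in range(w)] for i in range(h)]

def multiscale_segment(likelihoods, refine, coarsest=4):
    """Coarse-to-fine driver for steps (4)-(7): start from the label map
    at the coarsest fusion scale, then repeatedly hand the current map
    and the next-finer fused likelihoods to `refine` (standing in for
    steps (5)-(6)) until the k = 0 result is reached."""
    labels = argmax_labels(likelihoods[coarsest])          # step (4)
    for k in range(coarsest, 0, -1):                       # steps (5)-(7)
        labels = refine(labels, k, likelihoods[k - 1])
    return labels                                          # k = 0 result
```

A concrete `refine` would build the mark field and cluster centers of step (5)-(6) before relabeling; here any callable with that signature can be plugged in.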
Compared with the prior art, the present invention has the following advantages:
1. Because the invention incorporates the physical cluster centers that characterize the aggregation property of textures, the reliability of the context weight vectors and the precision of the segmentation result are improved;
2. Because the invention marks the homogeneous regions and edge regions of the image through a mark field measuring class-label reliability, during the coarse-to-fine multi-scale fusion segmentation it both guarantees that the homogeneity of homogeneous regions at coarse scales propagates to the fine scales, and allows the edge regions to preserve the accuracy of fine-scale edges through adjustment by the context weight vectors;
3. Simulation results show that, by treating homogeneous regions and edge regions differently, the method resolves the contradiction between region consistency and edge accuracy in image segmentation, giving segmentation results with both good regions and more accurate edges.
Description of the drawings
Fig. 1 is a flow diagram of the present invention;
Fig. 2 is a schematic diagram of the hidden Markov tree model of one wavelet-domain subband used by the present invention;
Fig. 3 shows the fusion segmentation results at each scale, with the corresponding mark fields, obtained with the method of the invention;
Fig. 4 shows the simulation results of the present invention on an image synthesized from two texture classes;
Fig. 5 shows the simulation results of the present invention on an image synthesized from three texture classes;
Fig. 6 shows the simulation results of the present invention on an image synthesized from four texture classes;
Fig. 7 shows the simulation results of the present invention on an image synthesized from five texture classes.
Embodiments
With reference to Fig. 1, the specific implementation process of the invention is as follows:
One. Obtain the wavelet-domain HMT model parameters θ_n, n ∈ N_n, corresponding to each of the N_n texture classes.
Fig. 2 is a schematic diagram of the hidden Markov tree model of one wavelet-domain subband, where solid dots represent wavelet coefficients and hollow dots represent the hidden states of the wavelet coefficients. The HMT model converts the problem of determining the distribution of the unknown wavelet coefficients into the problem of determining their hidden states; once the hidden states are determined, the distribution of each wavelet coefficient is determined accordingly. Suppose each level of wavelet coefficients w follows a Gaussian mixture model (GMM); if each wavelet coefficient is assigned a hidden state ss, then by finding the probability mass function p_ss(m) of this hidden state, the Gaussian probability function g(w; μ_m, σ_m²), and the state-transition matrix ε_{i,ρ(i)}^{l,m} capturing the correlation of states between scales, the distribution of the hidden state ss can be determined, and thereby the distribution of the wavelet coefficients. The HMT model parameters corresponding to a wavelet-domain subband B are expressed as:

$$\theta_B = \{\, p_{ss}^B(m),\ (\epsilon_{i,\rho(i)}^{l,m})^B,\ \mu_{i,m}^B,\ (\sigma_{i,m}^2)^B \mid i = 1, \dots, N;\ m, l = 0, 1;\ B = \mathrm{LH}, \mathrm{HL}, \mathrm{HH} \,\} \qquad (1)$$

In formula (1), N is the number of wavelet coefficients; m, l = 0, 1 are the values of the hidden state ss; and B = LH, HL, HH denotes the three subbands of the wavelet transform.
The wavelet-domain HMT model parameters θ_n are obtained as follows:
(1) Extract from the image to be segmented the training image block of each texture, of size 64 × 64 pixels;
(2) Apply a 4-level two-dimensional orthogonal wavelet transform with the Haar wavelet basis to each training image block, obtaining the wavelet coefficients w on the three subbands B = LH, HL, HH over the four scales;
(3) For the wavelet coefficients on each subband, use the expectation-maximization (EM) algorithm to obtain the corresponding HMT model parameters {θ_j^B, j = {1, 2, 3, 4}}, where j = {1, 2, 3, 4} indexes the wavelet decomposition scales;
(4) For brevity, θ_n is used to denote the wavelet-domain HMT model parameters {θ_j^B, j = {1, 2, 3, 4}} of each texture class n over all subbands and all scales.
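The parameter set assembled by this training procedure has, per subband, the shape of equation (1). As an illustration only (uniform placeholder values and hypothetical field names; the actual values come from the EM fit described above), one plausible container:

```python
def hmt_parameters(n_levels, subbands=("LH", "HL", "HH"), n_states=2):
    """Shape of the per-texture parameter set of equation (1): for each
    subband, a hidden-state PMF p_ss(m), per-level 2x2 state-transition
    matrices (the epsilon terms linking parent and child states across
    scales), and per-level, per-state Gaussian parameters (mu, sigma^2).
    The values below are uniform placeholders; EM estimates the real
    ones from the training block's wavelet coefficients."""
    return {b: {"p_ss": [1.0 / n_states] * n_states,
                "transition": [[[1.0 / n_states] * n_states
                                for _ in range(n_states)]
                               for _ in range(n_levels)],
                "mu": [[0.0] * n_states for _ in range(n_levels)],
                "sigma2": [[1.0] * n_states for _ in range(n_levels)]}
            for b in subbands}
```

With the 4-level Haar decomposition above, `hmt_parameters(4)` gives one such structure per texture class.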
Two. Obtain the likelihood value likelihood_n^j of each data block d of texture class n at wavelet decomposition scale j, and the pixel likelihood value likelihood_n^0.
Since the two-dimensional orthogonal wavelet transform yields three subbands, each subband B has a corresponding HMT model {θ_j^B, j = {1, 2, 3, 4}}. To simplify the model, the three subbands are assumed independent; the likelihood value (likelihood_n^j)^B of each wavelet coefficient w in subband B is then obtained under the model parameters of subband B, and these per-subband likelihood values are multiplied to give the final likelihood of each data block d under the characterization of the model parameters θ_n, that is:

$$\mathrm{likelihood}_n^j = (\mathrm{likelihood}_n^j)^{LH} \cdot (\mathrm{likelihood}_n^j)^{HL} \cdot (\mathrm{likelihood}_n^j)^{HH} \qquad (2)$$

where j = {1, 2, 3, 4} indexes the wavelet decomposition scales from fine to coarse. Finally, Gaussian modeling is applied to the training image block of each texture class n, giving the pixel likelihood value likelihood_n^0 of the image to be segmented, where 0 denotes the pixel-level scale.
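Under the independence assumption, equation (2) is an element-wise product over the block likelihood maps. A minimal sketch (pure Python; the function name is this sketch's):

```python
def fuse_subband_likelihoods(lh, hl, hh):
    """Equation (2): under the independence assumption across the LH, HL
    and HH subbands, the likelihood of each data block d is the
    element-wise product of its per-subband likelihood maps."""
    return [[lh[i][j] * hl[i][j] * hh[i][j]
             for j in range(len(lh[0]))] for i in range(len(lh))]
```

In practice this would be computed in log-space to avoid underflow, but the product form mirrors equation (2) directly.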
Three. Obtain the likelihood values likelihood_n^k, k = {0, 1, 2, 3, 4}, required for the final fusion.
The likelihood values likelihood_n^j of the data blocks d of the image to be segmented at each wavelet decomposition scale, obtained in step two, are combined with the pixel likelihood values likelihood_n^0 to give the likelihood values likelihood_n^k, k = {0, 1, 2, 3, 4}, required for the final fusion, where k = 0 denotes the finest fusion scale, k = 1 the next coarser fusion scale, and so on up to k = 4, the coarsest fusion scale.
Four. Obtain the segmentation label map {c_s}, c_s ∈ C, at the coarsest fusion scale.
Take the likelihood value likelihood_n^4 at the coarsest fusion scale k = 4 and apply the maximum-likelihood rule

$$c_s = \arg\max_{c_s \in \{1, 2, \dots, C\}} (\mathrm{likelihood}_n^4)_s$$

to obtain the segmentation label map {c_s}, c_s ∈ C, at the coarsest fusion scale, where s denotes the physical image coordinate and C denotes the set of class labels.
Five. Determine the mark field {a_s}, a_s ∈ A, A = {0, 1}, corresponding to the segmentation label map at fusion scale k.
The fusion segmentation of a multi-scale texture image is a top-down process that propagates and fuses layer by layer, from the coarse fusion scales to the fine ones. The adaptive window fixing-and-propagation method makes full use of the strong region consistency at the coarse scales and the accurate edges at the fine scales of multi-scale segmentation. During the layer-by-layer propagation, consistent regions are fixed according to a given principle, and the class label of each of their nodes is passed directly to its four child nodes at the next fusion scale, which guarantees the homogeneity of the consistent regions; edge regions are likewise marked out by this principle, their influence on the next fusion scale is recorded to form a context, and this context is passed to the next fusion scale, which improves the accuracy of the edge regions. The principle of the invention is to determine a mark field {a_s}, a_s ∈ A, A = {0, 1}, measuring the reliability of the segmentation result, and to decide from the value of the mark field whether the class label of a node at the current scale is reliable: a reliable node class label is passed down to the node's four child nodes at the next fusion scale, while for an unreliable node class label only its influence on the four child nodes at the next fusion scale is recorded. The detailed process of determining this mark field is as follows:
(1) Taking each segmented pixel in the segmentation label map as the center, determine a window, count the number of class labels of each texture class within the window, and find the maximum class-label count over the classes. When this maximum exceeds the window size Vs × Vs minus the threshold Vs, assign the class label of that maximum to the current center point and set a_s = 1; otherwise set a_s = 0. The marking equation of the point is:

$$a_s = \begin{cases} 1 & \text{if } labeln_{\max} > Vs \times Vs - Vs \\ 0 & \text{otherwise} \end{cases}$$

where Vs is the side length of the window and labeln_max is the maximum of the class-label counts labeln in the window; from the coarsest to the finest fusion scale, that is, k = {4, 3, 2, 1, 0}, the corresponding Vs = {3, 3, 5, 5, 7};
(2) Determine the window of every segmented pixel in the segmentation label map, count the class labels of each texture class within these windows, and find the maximum class-label count of each, thereby obtaining the mark field {a_s}, a_s ∈ A, A = {0, 1}, of all pixels of the segmentation label map at fusion scale k.
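The windowed majority vote of steps (1)-(2) can be sketched as follows. This is a pure-Python illustration; clipping border windows and shrinking the threshold proportionally is one plausible reading, since border handling is not spelled out above.

```python
from collections import Counter

def mark_field(labels, vs):
    """Mark field of step five: a position is reliable (a_s = 1) when the
    dominant class inside its vs-by-vs window is counted more than
    vs*vs - vs times, in which case the center also takes that dominant
    label; otherwise a_s = 0.  Windows at the border are clipped and the
    threshold shrunk proportionally -- an assumption of this sketch."""
    h, w = len(labels), len(labels[0])
    r = vs // 2
    field = [[0] * w for _ in range(h)]
    relabeled = [row[:] for row in labels]
    for i in range(h):
        for j in range(w):
            window = [labels[y][x]
                      for y in range(max(0, i - r), min(h, i + r + 1))
                      for x in range(max(0, j - r), min(w, j + r + 1))]
            cls, count = Counter(window).most_common(1)[0]
            if count > len(window) - vs:   # Vs*Vs - Vs for interior windows
                field[i][j] = 1
                relabeled[i][j] = cls
    return field, relabeled
```

With the schedule Vs = {3, 3, 5, 5, 7}, a call such as `mark_field(labels, 3)` yields the mark field for the coarsest fusion scale.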
Six. Determine the physical cluster centers of the texture classes at fusion scale k with the LPA-ICI algorithm.
In order to fully exploit the aggregation property of texture images and improve the precision of the segmentation result, the invention finds the physical cluster center of each texture class on the segmentation label map, and records the influence of a node on its four child nodes at the next fusion scale through the distance between the node's physical position and the physical cluster centers of the classes. The process is as follows:
(1) Define the direction set of a pixel as {θ_r | θ_r = 2rπ/D, r = 0, …, D−1}, where r = 0, …, D−1 indexes the search directions and D is the total number of search directions;
(2) On the segmentation label map, let c_s be the class label of the current segmented pixel. For each c_s, first use the LPA-ICI algorithm to find, along direction θ_r, the number of pixels with the same class label as the current one, that is, the optimal length h_r* in that direction; then find the optimal lengths in all directions, obtaining the set {h_r*, r = 0, …, D−1}; the total number of search directions here is D = 8.
The LPA-ICI algorithm is frequently applied in image denoising: it finds several adaptive windows for each noisy pixel of the image and approximates the pixel within each window to achieve denoising. The invention exploits its ability to find, along a given direction, the number of pixels closest to the center pixel, that is, the length h* of the smooth region in that direction, to obtain the physical centers of the texture clusters.
(3) Adding up the optimal lengths h_r* of the current point c_s over the D directions gives the area of c_s:

$$\mathrm{area}_s = \sum_{r=0}^{D-1} h_r^*$$

(4) Determine the areas of all segmented class-label pixels; among the points belonging to the same class label, take the coordinates of the point with the maximum area as the physical cluster center of the corresponding texture, giving the physical cluster centers of all texture classes: {(i_1, j_1), (i_2, j_2), …, (i_C, j_C)}.
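The directional search of steps (1)-(4) can be approximated, for illustration, by same-label run lengths in D = 8 directions in place of the full LPA-ICI confidence-interval rule; the function name and this simplification are the sketch's, not the patent's.

```python
import math

def cluster_centers(labels, n_dirs=8):
    """Simplified stand-in for steps (1)-(4): the run length of
    same-label pixels along each of D = n_dirs search directions
    approximates the optimal length h_r*; their sum is the pixel's
    area, and each class's physical cluster center is its maximum-area
    pixel.  Exact label runs replace the LPA-ICI rule purely for
    illustration; n_dirs = 8 gives the eight unit grid directions."""
    h, w = len(labels), len(labels[0])
    dirs = [(round(math.cos(2 * math.pi * r / n_dirs)),
             round(math.sin(2 * math.pi * r / n_dirs)))
            for r in range(n_dirs)]
    best = {}  # class label -> (area, (i, j))
    for i in range(h):
        for j in range(w):
            area = 0
            for di, dj in dirs:
                y, x = i + di, j + dj
                while 0 <= y < h and 0 <= x < w and labels[y][x] == labels[i][j]:
                    area += 1
                    y, x = y + di, x + dj
            c = labels[i][j]
            if c not in best or area > best[c][0]:
                best[c] = (area, (i, j))
    return {c: pos for c, (_, pos) in best.items()}
```

On a homogeneous patch this picks the most interior pixel of each class, matching the intuition of a geometric cluster center.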
Seven. Apply the adaptive window fixing-and-propagation multi-scale segmentation method to obtain the segmentation result at the next fusion scale.
The adaptive window fixing-and-propagation multi-scale segmentation method of the invention measures the reliability of the initial segmentation class labels according to the values of the mark field at the current fusion scale, and treats reliable and unreliable class labels differently. The process is as follows:
(1) Scan the mark field {a_s}, a_s ∈ A, A = {0, 1}, at fusion scale k. When the mark value of a point is a_s = 1, its class label is reliable and the point belongs to a homogeneous region of the image; the class label of the point is therefore passed directly to its four child nodes at the next fusion scale, guaranteeing the homogeneity of the homogeneous regions at that scale. The class labels of these four child nodes are

$$c_{s'} = c_{\rho_s}, \quad s' \in \mathrm{children}(\rho_s),$$

where ρ_s is the physical position of a point with mark value a_s = 1 at fusion scale k and children(ρ_s) denotes its four child positions at fusion scale k − 1;
(2) When the mark value of a point is a_s = 0, its class label is unreliable; for the window area centered on the point, the context weight vector Ws of the point is determined according to the mark values within the window area. The window sizes chosen at fusion scales k = {4, 3, 2, 1, 0} are 7 × 7, 11 × 11, 11 × 11, 15 × 15, and 15 × 15 respectively. The context weight vector Ws of the point is obtained as follows:
(a) When the mark values a_s within the window area are all 0, compute the Euclidean distances dist_{s→(i_n, j_n)} from the physical position s of the window center to the physical cluster centers of the N_n texture classes, and obtain the context weight vector Ws = [Ws_p], p = 1, 2, …, C, of the point, where

$$Ws_p = \begin{cases} 1 & \text{if } dist_{s \to (i_p, j_p)} \le dist\_th \\ 0 & \text{otherwise;} \end{cases}$$
(b) When the mark values a_s in the area are not all 0, the center of the window area is classified as either an in-region uncertain point or an edge uncertain point, and its context weight vector Ws is determined accordingly:
First case: if all points in the window area with mark value a_s = 1 share the same class label c'_s, and the Euclidean distance dist_{s→(i_{c'_s}, j_{c'_s})} from the physical position s of the window center to the physical cluster center of the class of this center point is within the distance threshold, the point is taken as an in-region uncertain point. In this case, the class label c_s of the uncertain center point is first replaced by c'_s; since the point lies inside a homogeneous region, its four child nodes at the next fusion scale should also lie inside that region, so to guarantee that the class label of the four child nodes is c'_s, the position corresponding to c'_s in the context weight vector is set to 1 and all others to 0. That is, the context weight vector through which this window center influences its four child nodes at the next fusion scale is Ws = [δ(c'_s, p)], p = 1, 2, …, C, where

$$\delta(c'_s, p) = \begin{cases} 1 & \text{if } c'_s = p \\ 0 & \text{otherwise.} \end{cases}$$
Second case: uncertain pixels other than in-region uncertain points are taken as edge uncertain points. In this case, the context weight vector through which the edge uncertain point influences its four child nodes at the next fusion scale is determined from both the class labels of the points with mark value a_s = 1 in the window area and the Euclidean distances from the physical position s of the window center to the N_n physical cluster centers. The detailed process is:
First, obtain the part W_1 of the context weight vector determined by the class labels of the points with a_s = 1 in the window area: denote these class labels c_s^q, q = 1, 2, …, Q, where Q is the number of points with mark value a_s = 1; set the positions corresponding to the c_s^q in the context weight vector to 1 and all others to 0, giving this part of the context weight vector

$$W_1 = [\,\delta(c_s^q, p)\,], \quad q = 1, 2, \dots, Q;\ p = 1, 2, \dots, C.$$

Then, obtain the part W_2 determined by the Euclidean distances from the physical position s of the window center to the N_n physical cluster centers: compute these distances, set the positions whose distance is less than or equal to the distance threshold dist_th to 1 in the context weight vector and all others to 0, giving W_2 = [Ws_p], p = 1, 2, …, C, where

$$Ws_p = \begin{cases} 1 & \text{if } dist_{s \to (i_p, j_p)} \le dist\_th \\ 0 & \text{otherwise.} \end{cases}$$

Finally, multiply the two parts element-wise to obtain the context weight vector: Ws = W_1 · W_2.
(c) According to the context weight vector Ws, combined with the likelihood value likelihood_n^{k−1} required for the final fusion at the next fusion scale, the class labels of the corresponding four child nodes at the next fusion scale are obtained by maximizing the weighted likelihood:

$$c_s = \arg\max_{c_s \in \{1, 2, \dots, C\}} \big( (\mathrm{likelihood}_n^{k-1})_s \cdot Ws \big);$$
(3) From the class labels of the child nodes on the next fusion scale determined by all points with a_s = 1 during the scan, together with those determined by all points with a_s = 0, obtain the class labels c_s of all points on the next fusion scale, i.e. the segmentation class map {c_s | c_s ∈ {1, 2, …, C}} on the next fusion scale.
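In code, the weighted-likelihood selection that assigns a class label to a child node could be sketched as below (a hypothetical helper; 1-based class labels and a length-C likelihood vector per node are assumptions):

```python
import numpy as np

def propagate_class(likelihood_s, Ws):
    """Class label c_s of a child node on fusion scale k-1 (sketch).

    likelihood_s -- the length-C vector (likelihood_n^{k-1})_s at this node
    Ws           -- the length-C context weight vector of the parent point
    Returns the 1-based class label maximising the Ws-weighted likelihood.
    """
    weighted = np.asarray(likelihood_s) * np.asarray(Ws)
    return int(np.argmax(weighted)) + 1   # argmax is 0-based, labels are 1-based
```

Classes zeroed out by Ws can never be selected, so the context vector acts as a hard mask on the per-node likelihoods.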
Eight, judge whether the segmentation class map just obtained is the result on fusion scale k = 0. If it is, the texture image segmentation finishes; otherwise, repeat steps six and seven until the segmentation result on k = 0 is obtained.
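The overall coarse-to-fine iteration of steps six to eight amounts to a short driver loop. In the sketch below every callable is a hypothetical stand-in for an operation described above, so only the control flow is meant literally:

```python
def multiscale_segment(likelihoods, segment_coarsest, propagate):
    """Coarse-to-fine fusion loop over scales k = 4 down to k = 0 (sketch).

    likelihoods      -- dict mapping fusion scale k to its likelihood data
    segment_coarsest -- callable producing the class map on k = 4
    propagate        -- callable (class_map_k, likelihood_{k-1}) -> class_map_{k-1}
    """
    class_map = segment_coarsest(likelihoods[4])   # initial map on coarsest scale
    for k in range(4, 0, -1):                      # propagate until k = 0 is reached
        class_map = propagate(class_map, likelihoods[k - 1])
    return class_map                               # the k = 0 segmentation result
```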
The effect of the present invention can be further illustrated by the following simulation results:
Simulation result 1: a texture image synthesised from two texture classes is segmented with the above process of the present invention; the mark field and segmentation result on each fusion scale are shown in Fig. 3. Fig. 3(a) is the original two-class synthetic texture image used in this example; Fig. 3(b) is the initial segmentation result on the coarsest fusion scale k = 4, and its corresponding mark field is Fig. 3(g); Fig. 3(c) is the fused segmentation result on fusion scale k = 3, with mark field Fig. 3(h); Fig. 3(d) is the fused segmentation result on k = 2, with mark field Fig. 3(i); Fig. 3(e) is the fused segmentation result on k = 1, with mark field Fig. 3(j); Fig. 3(f) is the fused segmentation result on k = 0, i.e. the final fused segmentation. From the mark fields Figs. 3(g)-3(j) for fusion scales k = 4 to 1 it can be seen that, as the fusion scale moves from coarse to fine, the regions of uncertain points concentrate more and more around the edge points. From the fused segmentation results Figs. 3(b)-3(f) for fusion scales k = 4 to 0 it can be seen that, as the segmentation propagates from coarse to fine scales, the consistency of homogeneous regions is preserved while the edges approach the true edges ever more closely, demonstrating that the method of the invention strikes a good compromise between the conflicting goals of region consistency and edge accuracy.
Simulation result 2: a synthetic texture image composed of 2 texture classes is segmented with different methods; the comparison of their effects is shown in Fig. 4. Fig. 4(a) is the original 2-class synthetic texture image; Fig. 4(b) is the result of segmenting Fig. 4(a) with the traditional WHMTseg method, a WHMT plus Bayesian-fusion segmentation method proposed by the Korean author Choi et al.; Fig. 4(c) is the result of segmenting Fig. 4(a) with the traditional WHMT+JMCMS method, a WHMT plus Bayesian-fusion segmentation method proposed by the Chinese author Fan Guoliang et al.; Fig. 4(d) is the result of segmenting Fig. 4(a) with the method of the invention. Fig. 4(e) is the ground-truth segmentation corresponding to Fig. 4(a); Fig. 4(f) is the difference map obtained by subtracting the ground truth Fig. 4(e) from the segmentation result Fig. 4(b), where white points mark pixels at which the segmentation disagrees with the ground truth and black points mark pixels at which it agrees; Figs. 4(g) and 4(h) are the corresponding difference maps for Figs. 4(c) and 4(d). Comparing the three segmentation results Figs. 4(b)-4(d) and their difference maps Figs. 4(f)-4(h), all three methods mismatch the ground truth only at the edges, but the edges produced by the method of the invention lie closer to the true edges.
Simulation result 3: a synthetic texture image composed of 3 texture classes is segmented with different methods; the comparison of their effects is shown in Fig. 5. Fig. 5(a) is the original 3-class synthetic texture image; Fig. 5(b) is the result of segmenting Fig. 5(a) with the traditional WHMTseg method, a WHMT plus Bayesian-fusion segmentation method proposed by the Korean author Choi et al.; Fig. 5(c) is the result of segmenting Fig. 5(a) with the traditional WHMT+JMCMS method, a WHMT plus Bayesian-fusion segmentation method proposed by the Chinese author Fan Guoliang et al.; Fig. 5(d) is the result of segmenting Fig. 5(a) with the method of the invention. Fig. 5(e) is the ground-truth segmentation corresponding to Fig. 5(a); Fig. 5(f) is the difference map obtained by subtracting the ground truth Fig. 5(e) from the segmentation result Fig. 5(b), where white points mark pixels at which the segmentation disagrees with the ground truth and black points mark pixels at which it agrees; Figs. 5(g) and 5(h) are the corresponding difference maps for Figs. 5(c) and 5(d). Comparing the three segmentation results Figs. 5(b)-5(d) and their difference maps Figs. 5(f)-5(h), the mismatched points of the two traditional methods are distributed both inside homogeneous regions and at edge junctions, whereas the mismatched points of the method of the invention occur only at edge junctions; its region consistency is better than that of the traditional methods, and its segmented edges lie closer to the true edges.
Simulation result 4: a synthetic texture image composed of 4 texture classes is segmented with different methods; the comparison of their effects is shown in Fig. 6. Fig. 6(a) is the original 4-class synthetic texture image; Fig. 6(b) is the result of segmenting Fig. 6(a) with the traditional WHMTseg method, a WHMT plus Bayesian-fusion segmentation method proposed by the Korean author Choi et al.; Fig. 6(c) is the result of segmenting Fig. 6(a) with the traditional WHMT+JMCMS method, a WHMT plus Bayesian-fusion segmentation method proposed by the Chinese author Fan Guoliang et al.; Fig. 6(d) is the result of segmenting Fig. 6(a) with the method of the invention. Fig. 6(e) is the ground-truth segmentation corresponding to Fig. 6(a); Fig. 6(f) is the difference map obtained by subtracting the ground truth Fig. 6(e) from the segmentation result Fig. 6(b), where white points mark pixels at which the segmentation disagrees with the ground truth and black points mark pixels at which it agrees; Figs. 6(g) and 6(h) are the corresponding difference maps for Figs. 6(c) and 6(d). Comparing the three segmentation results Figs. 6(b)-6(d) and their difference maps Figs. 6(f)-6(h), the mismatched points of the two traditional methods are distributed both inside homogeneous regions and at edge junctions, whereas the mismatched points of the method of the invention occur only at edge junctions; its region consistency is better than that of the traditional methods, and its segmented edges lie closer to the true edges.
Simulation result 5: a synthetic texture image composed of 5 texture classes is segmented with different methods; the comparison of their effects is shown in Fig. 7. Fig. 7(a) is the original 5-class synthetic texture image; Fig. 7(b) is the result of segmenting Fig. 7(a) with the traditional WHMTseg method, a WHMT plus Bayesian-fusion segmentation method proposed by the Korean author Choi et al.; Fig. 7(c) is the result of segmenting Fig. 7(a) with the traditional WHMT+JMCMS method, a WHMT plus Bayesian-fusion segmentation method proposed by the Chinese author Fan Guoliang et al.; Fig. 7(d) is the result of segmenting Fig. 7(a) with the method of the invention; Fig. 7(e) is the ground-truth segmentation corresponding to Fig. 7(a); Fig. 7(f) is the difference map obtained by subtracting the ground truth Fig. 7(e) from the segmentation result Fig. 7(b), where white points mark pixels at which the segmentation disagrees with the ground truth and black points mark pixels at which it agrees; Figs. 7(g) and 7(h) are the corresponding difference maps for Figs. 7(c) and 7(d). Comparing the three segmentation results Figs. 7(b)-7(d) and their difference maps Figs. 7(f)-7(h), the mismatched points of the two traditional methods are distributed both inside homogeneous regions and at edge junctions, whereas the mismatched points of the method of the invention occur only at edge junctions; its region consistency is better than that of the traditional methods, and its segmented edges lie closer to the true edges.

Claims (3)

1. A multi-scale texture image segmentation method based on adaptive window fixing and propagation, comprising the following process:
(1) Input the image to be segmented; extract from it the training image block n corresponding to each texture; perform a wavelet transform on the block to obtain its wavelet coefficients; and apply the expectation-maximisation (EM) algorithm to obtain the HMT model parameters θ_n corresponding to these wavelet coefficients, where n ∈ N_n and N_n denotes the number of texture classes present in the image to be segmented;
(2) According to the obtained model parameters θ_n, perform a wavelet transform on the image to be segmented and obtain the likelihood values likelihood_n^j of the corresponding data blocks d on each wavelet-decomposition scale, where j = {1, 2, 3, 4} indexes the wavelet-decomposition scales from fine to coarse; apply Gaussian modelling to the training image block n extracted in step (1) to obtain the likelihood values likelihood_n^0 of the pixels of the image to be segmented, where 0 denotes the pixel-level scale;
(3) Combine the likelihood values likelihood_n^j of the corresponding data blocks d on each wavelet-decomposition scale with the likelihood values likelihood_n^0 of the pixels to obtain the likelihood values likelihood_n^k required for the final fusion, k = {0, 1, 2, 3, 4}, where k = 0 denotes the finest fusion scale, k = 1 the scale one step coarser than the finest, and so on, up to k = 4, the coarsest fusion scale;
(4) Take the likelihood value likelihood_n^4 on the coarsest fusion scale k = 4 and apply the likelihood-maximisation formula

c_s = argmax_{c_s ∈ {1, 2, …, C}} ((likelihood_n^4)_s)

to obtain the segmentation class map {c_s | c_s ∈ {1, 2, …, C}} on the coarsest fusion scale, where s denotes the physical image coordinate and C = N_n;
(5) According to the segmentation class map, determine the mark field {a_s}, a_s ∈ A, A = {0, 1}, which measures the reliability of the class labels on fusion scale k, and use the LPA-ICI algorithm to determine the physical cluster centres {(i_1, j_1), (i_2, j_2), …, (i_C, j_C)} of all texture classes on fusion scale k;
(6) According to the mark field and the physical cluster centres of the texture classes, obtain the corresponding segmentation class map on the next fusion scale k-1 as follows:
(6a) Scan the mark field {a_s}, a_s ∈ A, A = {0, 1}, on fusion scale k. When the mark value of a point is a_s = 1, pass the class label of that point down to its four child nodes on the next fusion scale, so that the class labels of these four child nodes equal the class label at ρ_s, where ρ_s is the physical position of the point with mark value a_s = 1 on fusion scale k. When the mark value of a point is a_s = 0, determine the context weight vector Ws of that point according to the mark values inside its window area; the window sizes chosen here on the fusion scales k = {4, 3, 2, 1, 0} are 7 × 7, 11 × 11, 11 × 11, 15 × 15 and 15 × 15 respectively. According to this weight vector, combined with the corresponding likelihood values on the next fusion scale, obtain the class labels of the four corresponding child nodes on the next fusion scale by maximising the weighted likelihood:

c_s = argmax_{c_s ∈ {1, 2, …, C}} ((likelihood_n^{k-1})_s · Ws);
The context weight vector Ws is defined as follows:
When the mark values a_s inside the window area are all 0, Ws = [Ws_p], p = 1, 2, …, C, where

Ws_p = 1 if dist(s → (i_p, j_p)) ≤ dist_th and 0 otherwise;
When the mark values a_s inside the window area are not all 0, the centre point of the window area is classified as either a region-interior uncertain point or an edge uncertain point, and the context weight vector Ws of the point is determined for these two cases:
For a window centre that is a region-interior uncertain point, its weight vector is

Ws = [δ(c'_s, p)], p = 1, 2, …, C, where δ(c'_s, p) = 1 if c'_s = p and 0 otherwise;
For a window centre that is an edge uncertain point, its weight vector is

Ws = W1 · W2,

where W1 = [δ(c_s^q, p)], q = 1, 2, …, Q, p = 1, 2, …, C, Q being the number of class labels carried by the points with mark value a_s = 1, and

W2 = [Ws_p], p = 1, 2, …, C, with Ws_p = 1 if dist(s → (i_p, j_p)) ≤ dist_th and 0 otherwise;
(6b) From the class labels of the child nodes on the next fusion scale determined by all points with a_s = 1 during the scan, together with those determined by all points with a_s = 0, obtain the class labels c_s of all points on the next fusion scale, i.e. the segmentation class map {c_s | c_s ∈ {1, 2, …, C}} on the next fusion scale;
(7) Judge whether the segmentation class map just obtained is the result on fusion scale k = 0. If it is, the texture image segmentation finishes; otherwise, repeat steps (6) to (7) until the segmentation result on k = 0 is obtained.
2. The image segmentation method according to claim 1, wherein the mark field of step (5) is determined according to the following procedure:

(5a) In the segmentation class map, centre a window on a segmented pixel; count the number of class labels belonging to each texture class inside the window, and find the maximum of these per-class counts. When this maximum exceeds the window size Vs × Vs minus the threshold Vs, assign the class label achieving the maximum to the current centre point and set a_s = 1; otherwise set a_s = 0. The marking equation of the point is:

a_s = 1 if labeln_max > Vs × Vs − Vs and 0 otherwise,

where Vs is the length or width of the window and labeln_max is the maximum of the class-label counts labeln inside the window; from the coarsest to the finest fusion scale, i.e. k = {4, 3, 2, 1, 0}, the corresponding Vs = {3, 3, 5, 5, 7};

(5b) Determine the window of every segmented pixel in the segmentation class map, count the class labels belonging to each texture class inside these windows and find the maxima of the per-class counts, obtaining the mark field {a_s}, a_s ∈ A, A = {0, 1}, of all pixels of the segmentation class map.
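The voting rule of claim 2 can be sketched as a direct (unoptimised) double loop; how border windows are handled is not specified in the claim, so the sketch simply clips the window at the image boundary:

```python
import numpy as np

def mark_field(class_map, Vs):
    """Reliability mark field {a_s} from Vs x Vs voting windows (sketch of claim 2).

    A point gets a_s = 1 when the count of its window's majority class exceeds
    Vs*Vs - Vs, i.e. when at most Vs - 1 pixels in the window disagree.
    """
    H, W = class_map.shape
    marks = np.zeros((H, W), dtype=int)
    r = Vs // 2
    for i in range(H):
        for j in range(W):
            win = class_map[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            _, counts = np.unique(win, return_counts=True)
            if counts.max() > Vs * Vs - Vs:       # labeln_max > Vs*Vs - Vs
                marks[i, j] = 1
    return marks
```

Note that with clipping, a border window holds fewer than Vs × Vs pixels and cannot reach the full-size threshold for small Vs, so border points come out uncertain; an implementation may prefer to pad the map instead.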
3. The image segmentation method according to claim 1, wherein the LPA-ICI algorithm of step (5) determines the physical cluster centres of all texture classes on fusion scale k according to the following procedure:

(5d) Define the set of search directions of a pixel as {θ_r | θ_r = 2rπ/D, r = 0, …, D−1}, where r = 0, …, D−1 indexes the search directions and D denotes the total number of search directions;

(5e) In the segmentation class map, let c_s be the class label of the current segmented pixel; for each c_s, first use the LPA-ICI algorithm to find, along direction θ_r, the number of pixels carrying the same class label as the current one, i.e. the optimal length h_r* in that direction; then find the optimal lengths in all directions, obtaining the set of optimal lengths {h_r*, r = 0, …, D−1}; here the total number of search directions is D = 8;

(5f) Add the optimal lengths h_r* of the current point c_s over the D directions to obtain the area of c_s:

Size = Σ_{r=0}^{D−1} h_r*;

(5g) Determine the areas of all segmented class-label pixels and, among the points carrying the same class label, take the coordinate of the point with the maximum area as the physical cluster centre of the corresponding texture, obtaining the physical cluster centres of all texture classes: {(i_1, j_1), (i_2, j_2), …, (i_C, j_C)}.
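For orientation only, the directional-area idea of steps (5d)-(5f) can be mimicked with a much-simplified stand-in: the sketch below merely counts the run of same-label pixels along each of D = 8 directions and sums the run lengths, whereas the genuine LPA-ICI algorithm selects each length h_r* by the intersection-of-confidence-intervals rule described in the non-patent literature cited in this document.

```python
import numpy as np

def directional_area(class_map, i, j, D=8):
    """Simplified stand-in for the LPA-ICI area Size = sum_r h_r* (sketch).

    Along each of D directions theta_r = 2*pi*r/D, count how many consecutive
    pixels share the class label at (i, j), and sum the D run lengths.
    Rounding sin/cos to unit steps is only valid for D = 8 (45-degree steps).
    """
    H, W = class_map.shape
    label = class_map[i, j]
    total = 0
    for r in range(D):
        theta = 2 * np.pi * r / D
        di = int(round(np.sin(theta)))    # unit step along the direction
        dj = int(round(np.cos(theta)))
        step = 1
        while True:
            ii, jj = i + step * di, j + step * dj
            if not (0 <= ii < H and 0 <= jj < W) or class_map[ii, jj] != label:
                break
            step += 1
        total += step - 1                 # run length h_r* in this direction
    return total
```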
CN2008100182381A 2008-05-16 2008-05-16 Multi-dimension texture image partition method based on self-adapting window fixing and propagation Expired - Fee Related CN101320467B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008100182381A CN101320467B (en) 2008-05-16 2008-05-16 Multi-dimension texture image partition method based on self-adapting window fixing and propagation


Publications (2)

Publication Number Publication Date
CN101320467A CN101320467A (en) 2008-12-10
CN101320467B true CN101320467B (en) 2010-06-02

Family

ID=40180499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008100182381A Expired - Fee Related CN101320467B (en) 2008-05-16 2008-05-16 Multi-dimension texture image partition method based on self-adapting window fixing and propagation

Country Status (1)

Country Link
CN (1) CN101320467B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510310B (en) * 2009-02-19 2010-12-29 上海交通大学 Method for segmentation of high resolution remote sensing image based on veins clustering constrain
CN101673398B (en) * 2009-10-16 2011-08-24 西安电子科技大学 Method for splitting images based on clustering of immunity sparse spectrums
CN101799916A (en) * 2010-03-16 2010-08-11 刘国传 Biologic chip image wavelet de-noising method based on Bayesian estimation
CN102222339B (en) * 2011-06-17 2013-03-20 中国科学院自动化研究所 Multi-dimension background modeling method based on combination of textures and intensity characteristics
CN102446350A (en) * 2011-09-16 2012-05-09 西安电子科技大学 Anisotropic non-local mean value-based speckle suppression method for polarized SAR (Specific Absorption Rate) data
CN102368331B (en) * 2011-10-31 2014-04-09 陈建裕 Image multi-scale segmentation method integrated with edge information
CN109993753B (en) * 2019-03-15 2021-03-23 北京大学 Method and device for segmenting urban functional area in remote sensing image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6631212B1 (en) * 1999-09-13 2003-10-07 Eastman Kodak Company Twostage scheme for texture segmentation based on clustering using a first set of features and refinement using a second set of features
CN101030297A (en) * 2007-03-29 2007-09-05 杭州电子科技大学 Method for cutting complexity measure image grain


Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
Hyeokho Choi, Richard G. Baraniuk. Multiscale image segmentation using wavelet-domain hidden Markov models. IEEE Transactions on Image Processing, 2001, 10(9), 1309-1321. *
Vladimir Katkovnik, Karen Egiazarian, Jaakko Astola. Adaptive window size image de-noising based on intersection of confidence intervals (ICI) rule. Journal of Mathematical Imaging and Vision, 2002, 16(3), 223-235. *
Peng Ling. Research on texture classification of remote sensing images based on wavelet-domain hidden Markov tree models. China Doctoral Dissertations Full-text Database (Information Science and Technology), 2005(7), I140-37. *
Wang Xili, Jiao Licheng. Texture image segmentation based on dual-tree complex wavelets and MRF models. Computer Science, 2007, 34(1), 187-190. *

Also Published As

Publication number Publication date
CN101320467A (en) 2008-12-10

Similar Documents

Publication Publication Date Title
CN101320467B (en) Multi-dimension texture image partition method based on self-adapting window fixing and propagation
CN102169584B (en) Remote sensing image change detection method based on watershed and treelet algorithms
CN101551905B (en) Method for segmenting multi-dimensional texture image on basis of fuzzy C-means clustering and spatial information
CN101493935B (en) Synthetic aperture radar image segmentation method based on shear wave hidden Markov model
CN101329736B (en) Method of image segmentation based on character selection and hidden Markov model
CN102402685B (en) Method for segmenting three Markov field SAR image based on Gabor characteristic
CN103810704B (en) Based on support vector machine and the SAR image change detection of discriminative random fields
CN102968790B (en) Remote sensing image change detection method based on image fusion
CN104915676A (en) Deep-level feature learning and watershed-based synthetic aperture radar (SAR) image classification method
CN101853509B (en) SAR (Synthetic Aperture Radar) image segmentation method based on Treelets and fuzzy C-means clustering
CN102426700B (en) Level set SAR image segmentation method based on local and global area information
CN103208115B (en) Based on the saliency method for detecting area of geodesic line distance
CN105069796B (en) SAR image segmentation method based on small echo both scatternets
CN102903102A (en) Non-local-based triple Markov random field synthetic aperture radar (SAR) image segmentation method
CN103366371B (en) Based on K distribution and the SAR image segmentation method of textural characteristics
CN106651865B (en) A kind of optimum segmentation scale automatic selecting method of new high-resolution remote sensing image
CN102663724B (en) Method for detecting remote sensing image change based on adaptive difference images
CN107527023A (en) Classification of Polarimetric SAR Image method based on super-pixel and topic model
CN109242889A (en) SAR image change detection based on context conspicuousness detection and SAE
CN101685158B (en) Hidden Markov tree model based method for de-noising SAR image
CN104732552B (en) SAR image segmentation method based on nonstationary condition
CN102074013B (en) Wavelet multi-scale Markov network model-based image segmentation method
CN102867187A (en) NSST (NonsubsampledShearlet Transform) domain MRF (Markov Random Field) and adaptive threshold fused remote sensing image change detection method
CN102496142B (en) SAR (synthetic aperture radar) image segmentation method based on fuzzy triple markov fields
CN104346814B (en) Based on the SAR image segmentation method that level vision is semantic

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210406

Address after: 710076 room 104, block B2, software new town phase II, tianguba Road, Yuhua Street office, high tech Zone, Xi'an City, Shaanxi Province

Patentee after: Discovery Turing Technology (Xi'an) Co.,Ltd.

Address before: 710071 No. 2 Taibai Road, Shaanxi, Xi'an

Patentee before: XIDIAN University

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100602