CN107464247A - SAR image segmentation method based on stochastic gradient variational Bayes with the G0 distribution - Google Patents
- Publication number
- CN107464247A CN107464247A CN201710702367.1A CN201710702367A CN107464247A CN 107464247 A CN107464247 A CN 107464247A CN 201710702367 A CN201710702367 A CN 201710702367A CN 107464247 A CN107464247 A CN 107464247A
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/29—Graphical models, e.g. Bayesian networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10044—Radar image
Abstract
The invention discloses a SAR image segmentation method based on stochastic gradient variational Bayes with the G0 distribution. A sketch map of the SAR image is extracted according to an initial sketch model; according to the region map, the SAR image is divided into a mixed pixel subspace, a homogeneous pixel subspace and a structural pixel subspace; for each extremely heterogeneous region in the mixed pixel subspace, its G0 distribution parameters are estimated, and a stochastic gradient variational Bayesian model based on the G0 distribution learns its structural features, thereby achieving unsupervised segmentation of the mixed pixel subspace; the homogeneous pixel subspace and the structural pixel subspace are segmented with their corresponding methods, and the segmentation results of the three subspaces are merged to obtain the final SAR image segmentation result. Because both the prior distribution and the approximate posterior distribution of the hidden variables in the model are assumed to follow the G0 distribution satisfied by extremely heterogeneous regions, and the corresponding analytic expressions are derived for learning, the invention improves the accuracy of clustering extremely heterogeneous regions in the mixed pixel subspace.
Description
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a SAR image segmentation method based on stochastic gradient variational Bayes with the G0 distribution, which can be applied to accurately segmenting the different regions of synthetic aperture radar (SAR) images and to object detection and recognition in SAR images.
Background art
Synthetic aperture radar (SAR) is a major advance in the field of remote sensing, used to obtain high-resolution images of the Earth's surface. Compared with other kinds of imaging techniques, SAR has the important advantage that it is unaffected by atmospheric conditions such as cloud, rainfall or dense fog and by illumination intensity, and can acquire high-resolution remote sensing data around the clock and in all weather. SAR technology is of great significance to many fields such as the military, agriculture and geography.
Image segmentation is the process of dividing an image into several mutually disjoint regions according to features such as color, gray level and texture. Current conventional image segmentation methods include: methods based on edge detection, methods based on thresholds, region-growing and watershed methods, and methods based on clustering. Owing to the unique imaging mechanism of SAR, SAR images contain much coherent speckle noise, so many conventional methods for optical images cannot be applied directly to SAR image segmentation. Traditional SAR segmentation methods include clustering-based methods such as K-means and FCM, as well as supervised and semi-supervised methods. They generally require manual, experience-based feature extraction, yet the quality of the extracted features is important to the SAR segmentation result. Supervised and semi-supervised methods require labeled data, but labeled SAR data are scarce and the cost of obtaining them is very high. Bayesian networks have unique advantages in representing and reasoning about uncertain knowledge; a variational Bayesian inference network can be trained unsupervised without labeled data and can effectively learn the implicit structural features of each pixel subspace, which is of great significance for effective SAR image segmentation.
The paper "An effective MSTAR SAR image segmentation method" published by Wuhan University (Journal of Wuhan University: Information Science Edition, October 2015, pp. 1377-1380) proposes an MSTAR SAR image segmentation method. This method first performs an over-segmentation operation on the image to be processed to obtain over-segmented image regions. It then extracts region-level and pixel-level features from the over-segmented image to obtain feature vectors representing the image, and for MSTAR SAR images builds its model with a spatial latent Dirichlet allocation model (sLDA) and a Markov random field (MRF), yielding an energy functional. Finally the energy functional is optimized with the Graph-Cut and Branch-and-Bound algorithms to obtain the final segmentation result. The shortcoming of this method is that, when deriving the feature vectors of the SAR image, it uses pixel-level features without automatically learning the structural features that arise from the correlation between pixels, so the structural features that truly represent the ground objects of the SAR image are insufficiently used and the segmentation result is not accurate enough.
Summary of the invention
In view of the above deficiencies in the prior art, the technical problem to be solved by the present invention is to provide a SAR image segmentation method based on stochastic gradient variational Bayes with the G0 distribution, so as to improve the accuracy of SAR image segmentation.
The present invention adopts the following technical scheme:
A SAR image segmentation method based on stochastic gradient variational Bayes with the G0 distribution: a sketch map of the SAR image is extracted according to an initial sketch model; the SAR image is then divided, according to the region map, into a mixed pixel subspace, a homogeneous pixel subspace and a structural pixel subspace; for each extremely heterogeneous region in the mixed pixel subspace, its G0 distribution parameters are estimated, and a stochastic gradient variational Bayesian model based on the G0 distribution learns its structural features, realizing unsupervised segmentation of the mixed pixel subspace; the homogeneous pixel subspace and the structural pixel subspace are segmented with their corresponding methods, and the segmentation results of the three subspaces are merged to obtain the final SAR image segmentation result.
Preferably, the specific steps are as follows:
S1, input a synthetic aperture radar SAR image and establish the sketch model of the SAR image;
S2, using the sketch-line field method, perform regionalization processing on the sketch map of the SAR image to obtain a region map of the SAR image comprising aggregated regions, regions without sketch lines and structural regions;
S3, map the region map comprising aggregated regions, regions without sketch lines and structural regions onto the SAR image to obtain the mixed pixel subspace, homogeneous pixel subspace and structural pixel subspace of the SAR image;
S4, for the extremely heterogeneous regions in the mixed pixel subspace, using the intensity values of all pixels in each region, apply the Mellin-transform parameter estimation method to obtain estimates of the three parameters α, γ, n required by the G0 distribution for each region;
S5, for the mixed pixel subspace, build a stochastic gradient variational Bayesian network model;
S6, perform feature learning on the mixed pixel subspace;
S7, segment the mixed pixel subspace of the SAR image;
S8, extract line targets with visual semantic rules, then segment the structural pixel subspace with the structural-region segmentation method of the multinomial latent model based on geometric windows, obtaining the segmentation result of the structural pixel subspace;
S9, segment the homogeneous pixel subspace with the homogeneous-region segmentation method of the multinomial latent model based on adaptively selected windows, obtaining the segmentation result of the homogeneous pixel subspace;
S10, merge the segmentation results of the mixed pixel subspace, homogeneous pixel subspace and structural pixel subspace to obtain the final segmentation result of the SAR image.
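The Mellin-transform estimation of step S4 is commonly carried out with the method of log-cumulants, where the first and second log-cumulants of the G0 intensity model satisfy c1 = ψ(n) − ln n + ln γ − ψ(−α) and c2 = ψ′(n) + ψ′(−α). The following is a minimal sketch under that assumption; the finite-difference digamma/trigamma helpers and all function names are illustrative, not taken from the patent:

```python
import math

def digamma(x, h=1e-5):
    # finite-difference derivative of log-gamma; adequate accuracy for this sketch
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

def trigamma(x, h=1e-4):
    return (math.lgamma(x + h) - 2.0 * math.lgamma(x) + math.lgamma(x - h)) / (h * h)

def solve_roughness(c2, n):
    """Solve trigamma(a) = c2 - trigamma(n) for a = -alpha by bisection
    (trigamma is strictly decreasing on (0, inf))."""
    target = c2 - trigamma(n)
    if target <= 0.0:
        raise ValueError("second log-cumulant too small for a G0 fit")
    lo, hi = 1e-3, 1e3
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if trigamma(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def cumulants_to_params(c1, c2, n):
    a = solve_roughness(c2, n)
    g = math.exp(c1 - digamma(n) + math.log(n) + digamma(a))
    return a, g  # a = -alpha, g = gamma

def estimate_g0(intensities, n):
    """Log-cumulant estimates of (-alpha, gamma) with n known."""
    logs = [math.log(v) for v in intensities if v > 0.0]
    c1 = sum(logs) / len(logs)
    c2 = sum((v - c1) ** 2 for v in logs) / len(logs)
    return cumulants_to_params(c1, c2, n)
```

With n fixed, c2 alone determines −α through the strictly decreasing trigamma function, and c1 then determines γ, which is why a simple bisection suffices.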
Preferably, step S1 specifically includes the following steps:
S101, arbitrarily choose a number in the range [100, 150] as the total number of templates;
S102, construct a template of edges and lines composed of pixels with different directions and scales; using the direction and scale information of the template, construct an anisotropic Gaussian function, and with this Gaussian function compute the weighting coefficient of each pixel in the template and collect the weighting coefficients of all pixels in the template, where the number of scales takes a value of 3 to 5 and the number of directions takes a value of 18;
S103, calculate the mean of the pixels in the SAR image region corresponding to the template coordinates:
μ = Σ_{g∈Ω} w_g A_g / Σ_{g∈Ω} w_g
where μ represents the mean of all pixels in the SAR image region corresponding to the template coordinates, Σ represents summation, g represents the coordinate of any pixel in the region Ω of the template, ∈ denotes set membership, w_g represents the weighting coefficient of the pixel at coordinate g in the region Ω of the template, with range w_g ∈ [0, 1], and A_g represents the value of the pixel in the SAR image corresponding to coordinate g in the region Ω of the template;
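Steps S103 and S104 compute weight-normalized statistics over the template region Ω. A minimal sketch follows; normalizing by Σ w_g is an assumption of this illustration, consistent with w_g ∈ [0, 1]:

```python
import numpy as np

def region_stats(patch, weights):
    """Weighted mean and variance of the pixel values inside one template
    region; `weights` would come from the anisotropic Gaussian of S102."""
    w = weights / weights.sum()          # normalisation assumption
    mu = float((w * patch).sum())
    var = float((w * (patch - mu) ** 2).sum())
    return mu, var
```

With uniform weights this reduces to the ordinary mean and (population) variance of the region.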
S104, calculate the variance of the pixels in the SAR image region corresponding to the template coordinates:
ν = Σ_{g∈Ω} w_g (A_g − μ)² / Σ_{g∈Ω} w_g
where ν represents the variance of all pixels in the SAR image region corresponding to the template coordinates;
S105, calculate the response of each pixel in the SAR image to the ratio operator, where R represents the response of each pixel in the SAR image to the ratio operator, min{·} represents the minimum-value operation, a and b respectively represent two different regions in the template, μ_a represents the mean of all pixels in template region a, and μ_b represents the mean of all pixels in template region b;
S106, calculate the response of each pixel in the SAR image to the correlation operator, where C represents the response of each pixel in the SAR image to the correlation operator, √ represents the square-root function, a and b respectively represent two different regions in the template, ν_a and ν_b represent the variances of all pixels in template regions a and b, and μ_a and μ_b represent the means of all pixels in template regions a and b;
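The images carrying the exact ratio and correlation formulas are not reproduced in this text. The sketch below therefore uses the classical SAR ratio edge detector and one common cross-correlation form from the SAR edge-detection literature; the particular correlation expression, the region pixel counts n_a and n_b, and the fused response are all assumptions of this illustration:

```python
import math

def ratio_response(mu_a, mu_b):
    """Classical SAR ratio edge detector: R = 1 - min(mu_a/mu_b, mu_b/mu_a)."""
    return 1.0 - min(mu_a / mu_b, mu_b / mu_a)

def correlation_response(mu_a, mu_b, var_a, var_b, n_a, n_b):
    """One common cross-correlation operator between two regions of n_a and
    n_b pixels; this particular expression is an assumption of the sketch."""
    den = n_a * n_b * (mu_a - mu_b) ** 2
    if den == 0.0:
        return 0.0
    num = (n_a + n_b) * (n_a * var_a + n_b * var_b)
    return 1.0 / math.sqrt(1.0 + num / den)

def template_response(mu_a, mu_b, var_a, var_b, n_a, n_b):
    # simple root-sum-square fusion of R and C (an assumption of this sketch)
    r = ratio_response(mu_a, mu_b)
    c = correlation_response(mu_a, mu_b, var_a, var_b, n_a, n_b)
    return math.sqrt(r * r + c * c)
```

Both R and C lie in [0, 1), growing as the two regions become more distinguishable, which matches their use in S107 as per-template edge evidence.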
S107, calculate the response of each pixel in the SAR image to each template, where F represents the response of each pixel in the SAR image to each template, √ represents the square-root function, and R and C respectively represent the responses of the pixel in the SAR image to the ratio operator and to the correlation operator;
S108, judge whether the number of constructed templates equals the chosen total number of templates; if not, execute step S102; otherwise, execute step S109;
S109, select the template with the maximum response among all templates as the template of the SAR image, take the maximum response of that template as the strength of the corresponding pixel in the SAR image and the direction of that template as the direction of the pixel, and obtain the edge response map and gradient map of the SAR image;
S110, calculate the strength values of the SAR image strength map to obtain the strength map, where I represents the strength value of the SAR image strength map, r represents the value in the edge response map of the SAR image, and t represents the value in the gradient map of the SAR image;
S111, detect the strength map with the non-maximum suppression method to obtain suggested sketches;
S112, choose the pixel with the maximum strength in the suggested sketches and connect the pixels linked to it to form suggested line segments, obtaining a suggested sketch map;
S113, calculate the code length gain of each sketch line in the suggested sketch map, where CLG represents the code length gain of a sketch line in the suggested sketch map, Σ represents summation, J represents the number of pixels in the neighborhood of the current sketch line, A_j represents the observed value of the j-th pixel in the neighborhood of the current sketch line, A_{j,0} represents the estimate of the j-th pixel in the neighborhood in the case where the current sketch line cannot represent structural information, ln(·) represents the natural logarithm, and A_{j,1} represents the estimate of the j-th pixel in the neighborhood in the case where the current sketch line can represent structural information;
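The code length gain of S113 compares the coding cost of the neighborhood under the two hypotheses A_{j,0} (the line does not represent structure) and A_{j,1} (it does). The sketch below realizes CLG as a log-likelihood difference under a unit-variance Gaussian coding model; that Gaussian model is this illustration's assumption, not the patent's exact formula:

```python
def code_length_gain(A, A0, A1, sigma2=1.0):
    """CLG as the coding-cost saving of hypothesis A1 over A0 for the observed
    neighbourhood values A, under a Gaussian coding model with variance
    sigma2 (an assumption of this sketch)."""
    return sum(((a - a0) ** 2 - (a - a1) ** 2) / (2.0 * sigma2)
               for a, a0, a1 in zip(A, A0, A1))
```

A positive CLG means the structural hypothesis encodes the neighborhood more cheaply, which is exactly the acceptance criterion CLG > T used in S115.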
S114, arbitrarily choose a number in the range [5, 50] as the threshold T;
S115, select all suggested sketch lines with CLG > T and combine them into the sketch map of the SAR image, thereby extracting the sketch map of the SAR image from the sketch model.
Preferably, step S2 is specifically:
S201, according to the aggregation degree of the sketch line segments in the sketch map of the SAR image, divide the sketch lines into aggregated sketch lines representing aggregated ground objects, and boundary sketch lines, line-target sketch lines and isolated-target sketch lines representing boundaries, line targets and isolated targets;
S202, according to the histogram statistics of the aggregation degree of the sketch line segments, choose the sketch line segments whose aggregation degree equals the optimal aggregation degree as the seed line-segment set {E_k, k = 1, 2, ..., m}, where E_k represents any sketch line segment in the seed line-segment set, k represents the label of any sketch line segment in the seed line-segment set, m represents the total number of seed line segments, and {·} represents a set;
S203, taking a line segment not yet added to the seed line-segment set as a base point, recursively solve the line-segment set from this base point;
S204, construct a circular primitive whose radius is the upper bound of the optimal aggregation-degree interval, dilate the line segments in the line-segment set with this circular primitive, and erode the dilated line-segment set from the outside inward, obtaining the aggregated regions of the sketch map in units of sketch points;
S205, for the sketch lines representing boundaries, line targets and isolated targets, construct a geometric window of size 5 × 5 centered on each sketch point of each sketch line, obtaining the structural regions;
S206, take the part of the sketch map excluding the aggregated regions and structural regions as the unsketchable region;
S207, merge the aggregated regions, unsketchable regions and structural regions of the sketch map to obtain the region map of the SAR image comprising aggregated regions, regions without sketch lines and structural regions.
Preferably, step S5 is specifically as follows:
Intermediate variable from the input layer of the stochastic gradient variational Bayesian network model to the hidden layer: where h_φ represents the intermediate variable from the input layer to the hidden layer, the corresponding weight matrix represents the connection weight from the input layer to the intermediate variable h_φ together with its bias vector, m represents the number of hidden-layer neurons, m = 50, and n represents the number of input-layer neurons, n = 441;
The approximate posterior probability of the stochastic gradient variational Bayesian network model:
q_φ(z | x) ~ G0(α_φ, γ_φ)
where q_φ(z | x) represents the approximate posterior probability of the model and G0(α_φ, γ_φ) represents a G0 distribution with roughness α_φ and scale γ_φ. The probability density of the G0 distribution is
P(I(x, y)) = n^n Γ(n − α) I(x, y)^{n−1} / ( γ^α Γ(n) Γ(−α) (γ + n I(x, y))^{n−α} ),  −α, γ, n, I(x, y) > 0
where I(x, y) is the image pixel intensity value, n is the equivalent number of looks, γ is the scale parameter, α is the roughness parameter, and Γ(x) is the Gamma function defined on the real domain.
When the equivalent number of looks n = 1, the distribution becomes the Beta-Prime distribution, whose expression is
P(I(x, y)) = −α γ^{−α} (γ + I(x, y))^{α−1},  −α, γ, I(x, y) > 0
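The G0 density above can be evaluated directly. The sketch below works in log-space for numerical stability and, for convenience, parameterizes the roughness by a = −α > 0 (the patent writes α as negative); the function name is illustrative:

```python
import math

def g0_pdf(i, a, g, n):
    """G0 intensity density with a = -alpha > 0, scale g = gamma and
    equivalent number of looks n, evaluated via log-gamma for stability."""
    if i <= 0.0:
        return 0.0
    log_p = (n * math.log(n) + math.lgamma(n + a) + (n - 1.0) * math.log(i)
             + a * math.log(g) - math.lgamma(n) - math.lgamma(a)
             - (n + a) * math.log(g + n * i))
    return math.exp(log_p)
```

For n = 1 this reduces to the Beta-Prime case a·g^a·(g + I)^{-(a+1)}, matching the expression in the text.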
where the corresponding weight matrices represent the connection weights from the intermediate variable h_φ to −α_φ and to γ_φ, with their corresponding bias vectors;
Intermediate variable from the hidden layer of the stochastic gradient variational Bayesian network model to the reconstruction layer: where h_θ represents the intermediate variable from the hidden layer to the reconstruction layer, with the connection weight from the hidden layer to the intermediate variable h_θ and its corresponding bias vector;
The conditional probability of the stochastic gradient variational Bayesian network model: where p_θ(x | z) represents the conditional probability of the model, G0(α_θ, γ_θ) represents a G0 distribution with roughness α_θ and scale γ_θ, and the corresponding weight matrices represent the connection weights from the intermediate variable h_θ to −α_θ and to γ_θ, with their corresponding bias vectors;
The variational lower bound of the stochastic gradient variational Bayesian network model:
L(θ, φ) = −D_KL(q_φ(z | x) || p_θ(z)) + (1/L) Σ_{l=1}^{L} log p_θ(x | z_l)
where L(θ, φ) represents the variational lower bound of the model, φ represents the variational parameters of the model, θ represents the generative parameters of the model, D_KL(q_φ(z|x) || p_θ(z)) represents the relative entropy between q_φ(z|x) and p_θ(z), z represents the hidden-layer variable of the model, p_θ(z) represents the prior probability of the hidden-layer variable z, Σ(·) represents summation, L represents the number of Gaussian samplings of the hidden-layer variable z, log(·) represents the logarithm, and z_l represents the l-th Gaussian sampling result of z, whose value is obtained from the formula z_l = μ + σ ⊙ ε_l, where ⊙ represents element-wise multiplication, ε_l represents the Gaussian sampling auxiliary variable, and ε_l ~ N(0, I) indicates that the auxiliary variable follows the standard normal distribution.
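The Gaussian sampling step above is the usual reparameterization trick, which keeps the L-sample Monte Carlo term of the lower bound differentiable with respect to (μ, σ). A minimal sketch (the fixed seed and function name are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparam_sample(mu, sigma, L):
    """Draw L samples z_l = mu + sigma ⊙ eps_l with eps_l ~ N(0, I), so the
    randomness lives in eps_l and gradients flow through mu and sigma."""
    eps = rng.standard_normal((L,) + np.shape(mu))
    return np.asarray(mu) + np.asarray(sigma) * eps
```

Averaging log p_θ(x | z_l) over these samples gives the second term of L(θ, φ).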
Preferably, the two cases of equivalent number of looks n = 1 and n ≠ 1 are considered separately:
When the equivalent number of looks n = 1, the calculation is as follows:
−D_KL(q_φ(z|x) || p_θ(z)) = ∫ P_{β'}(z; −α, γ) log P_{β'}(z; c, γ₁) − P_{β'}(z; −α, γ) log P_{β'}(z; −α, γ) dz
where the prior p_θ(z) follows a Beta-Prime distribution satisfying −α = c, γ = γ₁, with c and γ₁ positive numbers that are known quantities derived from the image, and the approximate posterior q_φ(z|x) follows a Beta-Prime distribution, where z ∈ [a, b], 0 < a < b ≤ 1, represents the normalized image pixel intensity value;
When the equivalent number of looks n ≠ 1, the calculation is analogous, where the prior p_θ(z) follows a G0 distribution satisfying −α = c, γ = γ₁, with c and γ₁ positive numbers that are known quantities derived from the image, and the approximate posterior q_φ(z|x) follows a G0 distribution, where z ∈ [a, b], 0 < a < b ≤ 1, represents the normalized image pixel intensity value.
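The n = 1 integral above can be checked numerically. The sketch below evaluates a −D_KL-style integral between two Beta-Prime densities, renormalizing both on the truncated support [a, b] of the normalized intensities; the grid size and truncation bounds are assumptions of this illustration:

```python
import numpy as np

def beta_prime_pdf(z, a, g):
    # n = 1 case of the G0 density: p(z) = a * g**a * (g + z)**(-(a + 1)),
    # with a = -alpha > 0 and scale g
    return a * g ** a * (g + z) ** (-(a + 1.0))

def neg_kl(aq, gq, ap, gp, lo=1e-3, hi=1.0, m=20000):
    """Numerical -D_KL(q || p) between two Beta-Prime densities truncated and
    renormalised on [lo, hi] (the support z in [a, b] of the text)."""
    z = np.linspace(lo, hi, m)
    dz = z[1] - z[0]
    q = beta_prime_pdf(z, aq, gq)
    p = beta_prime_pdf(z, ap, gp)
    q /= q.sum() * dz
    p /= p.sum() * dz
    return -float((q * np.log(q / p)).sum() * dz)
```

By Gibbs' inequality the result is zero when q and p coincide and strictly negative otherwise, which is the sanity check one expects of a −D_KL term.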
Preferably, step S6 comprises the following steps:
S601, divide the mixed pixel subspace of the SAR image into spatially connected regions; if there is only one mutually disconnected region, execute step S7;
S602, for each mutually disconnected region, sample with a 21 × 21 window at every other pixel to obtain the multiple image-block samples corresponding to each region;
S603, for each mutually disconnected region, generate a group of random numbers corresponding to each region that follow the G0 distribution of heterogeneous ground objects;
S604, for each mutually disconnected region, initialize the connection weights of the stochastic gradient variational Bayesian network with the corresponding group of random numbers, obtaining the initialized stochastic gradient variational Bayesian network;
S605, for the initialized stochastic gradient variational Bayesian network of each mutually disconnected region, take the image-block samples as the input layer of the network and, according to the following steps, train the initialized network with the method of stochastic gradient variational Bayesian inference, obtaining the trained stochastic gradient variational Bayesian network;
S606, for each mutually disconnected region, take the weights of its trained stochastic gradient variational Bayesian network as the feature set of that region.
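The windowed sampling of S602 can be sketched as follows; reading "at every other pixel" as a stride of 2 is an assumption of this illustration, as is the function name:

```python
import numpy as np

def extract_patches(region, size=21, stride=2):
    """Sample size x size patches on a regular grid and flatten each to
    length size*size (441 here, matching the network's input layer n = 441)."""
    h, w = region.shape
    out = []
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            out.append(region[r:r + size, c:c + size].ravel())
    return np.array(out)
```

Each row of the result is one image-block sample that can be fed to the input layer of the network in S605.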
Preferably, step S603 specifically includes:
First step: calculate the probability density of the heterogeneous ground-object G0 distribution of the SAR image, where P(I(x, y)) represents the probability density of the heterogeneous ground-object distribution of the SAR image, I(x, y) denotes the intensity value of the pixel with coordinates (x, y), n represents the equivalent number of looks of the SAR image, α represents the roughness parameter of the SAR image, γ represents the scale parameter of the SAR image, and Γ(·) represents the Gamma function, whose value is obtained from
Γ(u) = ∫₀^∞ t^{u−1} e^{−t} dt
where u represents the independent variable, ∫ represents integration, and t represents the integration variable; randomly select 50 image-block samples of the mixed-pixel-subspace region R_i to compose the 441 × 50 matrix A;
Second step: using matrix A, generate a 441 × 50 matrix B from the probability density function of the heterogeneous ground-object G0 distribution of the SAR image, the data in matrix B following the heterogeneous ground-object G0 distribution of the SAR image.
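The patent does not spell out how the G0-distributed matrix B is generated. One standard construction, used here only as a sketch, draws intensities from the multiplicative model: unit-mean Gamma speckle with n looks multiplied by inverse-Gamma texture with shape −α and scale γ:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_g0(a, g, n, size):
    """Draw G0-distributed intensities from the multiplicative model:
    unit-mean Gamma speckle (n looks) times inverse-Gamma texture with
    shape a = -alpha and scale g. This standard construction stands in
    for the patent's unspecified random-number generator."""
    speckle = rng.gamma(shape=n, scale=1.0 / n, size=size)
    texture = 1.0 / rng.gamma(shape=a, scale=1.0 / g, size=size)
    return speckle * texture

# a 441 x 50 matrix B as in S603, for one region's estimated (alpha, gamma, n)
B = sample_g0(a=3.0, g=2.0, n=1.0, size=(441, 50))
```

The product of these two factors has exactly the G0 intensity density, so the entries of B follow the heterogeneous ground-object distribution required by S604.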
Preferably, step S604 specifically includes:
First step: take matrix B as the connection weight from the input layer x of the stochastic gradient variational Bayesian network model to the intermediate variable h_φ;
Second step: randomly select 50 rows from matrix B to compose the 50 × 50 matrix C; take matrix C as the connection weight from the intermediate variable h_φ of the model to −α_φ, as the connection weight from the intermediate variable h_φ to γ_φ, and as the connection weight from the hidden layer z of the model to the intermediate variable h_θ;
Third step: take the transpose of matrix B as the connection weight from the intermediate variable h_θ of the model to −α_θ, and take the transpose of matrix B as the connection weight from the intermediate variable h_θ to γ_θ.
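The weight initialization of S604 can be sketched directly with NumPy; the stand-in matrix B and all variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
# stand-in for the 441 x 50 G0-distributed matrix B of step S603
B = np.abs(rng.standard_normal((441, 50))) + 1e-6

W_x_hphi = B                                  # first step: input x -> h_phi
rows = rng.choice(441, size=50, replace=False)
C = B[rows, :]                                # second step: 50 x 50 matrix C
W_hphi_neg_alpha = C                          # h_phi -> -alpha_phi
W_hphi_gamma = C                              # h_phi -> gamma_phi
W_z_htheta = C                                # hidden z -> h_theta
W_htheta_neg_alpha = B.T                      # third step: h_theta -> -alpha_theta
W_htheta_gamma = B.T                          # h_theta -> gamma_theta
```

The shapes line up with the layer sizes of S5 (n = 441 input neurons, m = 50 hidden neurons), so the initialized network accepts the 441-dimensional image-block samples directly.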
Preferably, step S605 specifically includes:
First step: initialize the prior probability of the hidden layer of the stochastic gradient variational Bayesian network model to the G0 distribution probability, and also initialize the approximate posterior probability of the model to the G0 distribution probability, obtaining the analytic expression of the variational lower bound of the model as follows:
(a) when the equivalent number of looks n = 1,
(b) when the equivalent number of looks n ≠ 1,
where
F′_m = (1/(α − (m − 1))) ((nb)^{m−1} log(nb + γ₁)(nb + γ₁)^{m−1−α} − (na)^{m−1} log(na + γ₁)(na + γ₁)^{m−1−α} − (m − 1)F′_{m−1} − G′_{m−1})
G′_m = (1/(α − m)) ((nb)^m (nb + γ)^{α−m} − (na)^m (na + γ)^{α−m} − m G′_{m−1});
Second step: update the generative parameters of the stochastic gradient variational Bayesian network model, where θ_{t+1} represents the generative parameters of the model after the (t+1)-th iteration, θ_t represents the generative parameters after the t-th iteration, and ∂L(θ, φ)/∂θ represents taking the partial derivative of L(θ, φ) with respect to the parameter θ;
Third step: update the variational parameters of the stochastic gradient variational Bayesian network model, where φ_{t+1} represents the variational parameters of the model after the (t+1)-th iteration, φ_t represents the variational parameters after the t-th iteration, and ∂L(θ, φ)/∂φ represents taking the partial derivative of L(θ, φ) with respect to the parameter φ;
Fourth step: judge whether the number of iterations for which the variational lower bound has remained constant reaches the threshold 100; if so, execute the fifth step; otherwise, execute the second step;
Fifth step: complete the training of the stochastic gradient variational Bayesian network.
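The alternating updates and the "lower bound constant for 100 iterations" stopping rule of S605 can be sketched on a scalar toy lower bound; the learning rate, the finite-difference gradients and the toy objective are assumptions of this illustration:

```python
def train(lower_bound, theta0, phi0, rho=0.1, patience=100,
          max_iter=10000, tol=1e-9, h=1e-6):
    """Alternate gradient-ascent updates on theta (generative parameters)
    and phi (variational parameters); stop once the lower bound has stayed
    numerically constant for `patience` consecutive iterations (the
    threshold 100 of the fourth step). Finite-difference gradients stand
    in for the analytic partial derivatives."""
    theta, phi = theta0, phi0
    prev = lower_bound(theta, phi)
    flat = 0
    for _ in range(max_iter):
        d_theta = (lower_bound(theta + h, phi) - lower_bound(theta - h, phi)) / (2 * h)
        theta += rho * d_theta                 # second step: update theta
        d_phi = (lower_bound(theta, phi + h) - lower_bound(theta, phi - h)) / (2 * h)
        phi += rho * d_phi                     # third step: update phi
        cur = lower_bound(theta, phi)
        flat = flat + 1 if abs(cur - prev) < tol else 0
        prev = cur
        if flat >= patience:                   # fourth step: threshold 100
            break
    return theta, phi

# toy concave "lower bound" with its maximum at theta = 1, phi = -2
toy = lambda t, p: -(t - 1.0) ** 2 - (p + 2.0) ** 2
theta_hat, phi_hat = train(toy, 0.0, 0.0)
```

In the real model, θ and φ are the weight matrices of S604 and the analytic lower bound of the first step replaces the toy objective; the control flow is the same.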
Compared with the prior art, the present invention has at least the following beneficial effects:
The present invention is a SAR image segmentation method based on stochastic gradient variational Bayes with the G0 distribution: the sketch map of the SAR image is extracted according to an initial sketch model; the SAR image is then divided, according to the region map, into a mixed pixel subspace, a homogeneous pixel subspace and a structural pixel subspace; for each extremely heterogeneous region in the mixed pixel subspace, its G0 distribution parameters are estimated, and a stochastic gradient variational Bayesian model based on the G0 distribution learns its structural features, realizing unsupervised segmentation of the mixed pixel subspace; the homogeneous pixel subspace and the structural pixel subspace are segmented with their corresponding methods, and the segmentation results of the three subspaces are merged to obtain the final SAR image segmentation result. Because the present invention provides a stochastic gradient variational Bayesian network that is trained unsupervised on each mutually disconnected region of the mixed pixel subspace, taking the trained network weights as the structural features of each mutually disconnected region, it overcomes the shortcoming of the prior art that derives the feature vectors of the SAR image from pixel-level features without learning the structural features peculiar to SAR images that arise from the correlation between pixels, so that the present invention can automatically extract the structural features of SAR images.
Further, in the sketch map of a SAR image the black lines are the sketch lines. It can be seen that aggregated regions have many aggregated sketch lines, whereas the sketch lines representing edges and line targets are not aggregated. This means that the sketch lines of different regions have different characteristics; according to these different characteristics, the sketch lines are divided by aggregation degree into aggregated sketch lines and non-aggregated sketch lines.
Further, the aggregated regions are extracted on the aggregated sketch lines, the structural regions are extracted on the non-aggregated sketch lines, and the remaining region is the region without sketch lines. Thus, based on the sketch map, the SAR image can be divided into aggregated regions, regions without sketch lines and structural regions; the resulting region map is a sparse representation of the SAR image that reduces the SAR image to structurally simpler subspaces, used to guide SAR image segmentation.
Further, because the present invention, when performing feature learning on each mutually disconnected region of the mixed pixel subspace, estimates the parameters of the G0 distribution from the pixel intensity values of each region and then initializes the network weights with random data generated from the probability density function of the G0 distribution, it overcomes the shortcoming of prior-art deep autoencoder networks for automatic feature extraction, which initialize the network with random distributions that do not capture the essential characteristics of SAR images; the present invention can therefore effectively learn the essential characteristics of the ground objects in SAR images and improves the accuracy of SAR image segmentation.
Further, because the present invention uses the G0 distribution as the distribution of both the hidden-variable prior and the conditional probability, and the G0 distribution portrays the statistical characteristics of SAR images better than a Gaussian assumption, the present invention can learn features that characterize the ground objects in SAR images, further improving the accuracy of SAR image segmentation.
In summary, the present invention assumes that both the hidden-variable prior distribution and the approximate posterior distribution in the model follow the G0 distribution satisfied by extremely heterogeneous regions, and derives the corresponding analytic expressions for learning, improving the accuracy of clustering extremely heterogeneous regions in the mixed pixel subspace.
The technical scheme of the present invention is described in further detail below through the drawings and embodiments.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is Bayesian inference illustraton of model of the present invention;
Fig. 3 is a simulation diagram of the present invention;
Fig. 4 is a schematic diagram of the simulation results of the present invention.
Embodiment
The invention provides a stochastic gradient variational Bayesian SAR image segmentation method based on the G0 distribution. A sketch map of the SAR image is extracted according to an initial sketch model; the SAR image is divided into a mixed pixel subspace, a homogeneous pixel subspace and a structural pixel subspace according to the region map; for each extremely inhomogeneous region in the mixed pixel subspace, the G0 distribution parameters are estimated and the structural features are learned with a stochastic gradient variational Bayesian model based on the G0 distribution, thereby achieving unsupervised segmentation of the mixed pixel subspace; the homogeneous pixel subspace and the structural pixel subspace are segmented with their corresponding methods, and the segmentation results of the three subspaces are merged, finally giving the SAR image segmentation result.
Referring to Fig. 1, the specific steps of the stochastic gradient variational Bayesian SAR image segmentation method based on the G0 distribution are as follows:
S1, SAR image sketch.
Input synthetic aperture radar SAR image.
The sketch model of the synthetic aperture radar SAR image is established according to the following steps:
S101, an arbitrary number in the range [100, 150] is chosen as the total number of templates;
S102, a template consisting of edges and lines formed by pixels with different directions and scales is constructed; using the direction and scale information of the template, an anisotropic Gaussian function is constructed, the weighting coefficient of each pixel in the template is computed with this Gaussian function, and the weighting coefficients of all pixels in the template are counted, wherein the number of scales is 3 to 5 and the number of directions is 18;
S103, the mean of the pixels in the synthetic aperture radar SAR image region corresponding to the template is computed according to the following formula:

μ = Σ_{g∈Ω} w_g A_g / Σ_{g∈Ω} w_g

where μ denotes the mean of all pixels in the synthetic aperture radar SAR image region corresponding to the template, Σ denotes summation, g denotes the coordinate of any pixel in the region Ω of the template, ∈ denotes set membership, w_g denotes the weighting coefficient of the pixel at coordinate g in region Ω of the template, with w_g ∈ [0, 1], and A_g denotes the value of the pixel at coordinate g of the synthetic aperture radar SAR image corresponding to region Ω of the template;
S104, the variance of the pixels in the synthetic aperture radar SAR image region corresponding to the template is computed according to the following formula:

ν = Σ_{g∈Ω} w_g (A_g - μ)² / Σ_{g∈Ω} w_g

where ν denotes the variance of all pixels in the synthetic aperture radar SAR image region corresponding to the template;
S105, the response of each pixel in the synthetic aperture radar SAR image to the ratio operator is computed according to the following formula:

R = 1 - min{ μ_a/μ_b, μ_b/μ_a }

where R denotes the response of each pixel in the synthetic aperture radar SAR image to the ratio operator, min{ } denotes the minimum operation, a and b denote two different regions of the template, μ_a denotes the mean of all pixels in template region a, and μ_b denotes the mean of all pixels in template region b;
S106, the response of each pixel in the synthetic aperture radar SAR image to the correlation operator is computed according to the following formula:

C = √( 1 / (1 + 2·(ν_a² + ν_b²)/(μ_a + μ_b)²) )

where C denotes the response of each pixel in the synthetic aperture radar SAR image to the correlation operator, √ denotes the square-root operation, a and b denote two different regions of the template, ν_a denotes the variance of all pixels in template region a, ν_b denotes the variance of all pixels in template region b, μ_a denotes the mean of all pixels in template region a, and μ_b denotes the mean of all pixels in template region b;
S107, the response of each pixel in the synthetic aperture radar SAR image to each template is computed according to the following formula, where F denotes the response of each pixel in the synthetic aperture radar SAR image to each template, √ denotes the square-root operation, and R and C denote the responses of the pixel in the synthetic aperture radar SAR image to the ratio operator and to the correlation operator, respectively;
S108, judge whether the number of constructed templates equals the selected total number of templates; if not, perform S102; otherwise, perform S109;
S109, select the template with the maximum response from all templates as the template of the synthetic aperture radar SAR image, take the maximum response of this template as the intensity of the pixel in the synthetic aperture radar SAR image and the direction of this template as the direction of the pixel, and obtain the edge response map and gradient map of the synthetic aperture radar SAR image;
S110, the intensity values of the synthetic aperture radar SAR image intensity map are computed according to the following formula, giving the intensity map: where I denotes the intensity value of the synthetic aperture radar SAR image intensity map, r denotes the value in the edge response map of the synthetic aperture radar SAR image, and t denotes the value in the gradient map of the synthetic aperture radar SAR image;
S111, the intensity map is detected with the non-maxima suppression method, giving a suggested sketch;
S112, the pixel with the maximum intensity in the suggested sketch is chosen, and the pixels connected to this maximum-intensity pixel are joined to form a suggested line segment, giving a suggested sketch map;
S113, the code length gain of each suggested sketch line in the sketch map is computed according to the following formula: where CLG denotes the code length gain of a suggested sketch line, Σ denotes summation, J denotes the number of pixels in the neighborhood of the current sketch line, A_j denotes the observed value of the j-th pixel in the neighborhood of the current sketch line, A_{j,0} denotes the estimate of the j-th pixel in the neighborhood when the current sketch line cannot represent structural information, ln(·) denotes the logarithm with base e, and A_{j,1} denotes the estimate of the j-th pixel in the neighborhood when the current sketch line can represent structural information;
S114, an arbitrary number in the range [5, 50] is chosen as the threshold T;
S115, all suggested sketch lines with CLG > T are selected and combined into the sketch map of the synthetic aperture radar SAR image.
The sketch map of the synthetic aperture radar SAR image is thus extracted from the sketch model.
The synthetic aperture radar SAR image sketch model used by the present invention is the model proposed by Jie Wu et al. in the article "Local maximal homogenous region search for SAR speckle reduction with sketch-based geometrical kernel function", published in IEEE Transactions on Geoscience and Remote Sensing in 2014.
S2, obtaining the region map from the sketch map.
Using the sketch line field method, the sketch map of the synthetic aperture radar SAR image is regionalized, giving the region map of the synthetic aperture radar SAR image, consisting of the aggregated region, the no-sketch-line region and the structural region.
S201, according to the aggregation degree of the sketch line segments in the sketch map of the synthetic aperture radar SAR image, the sketch lines are divided into aggregation sketch lines, which represent aggregated terrain, and boundary sketch lines, line-target sketch lines and isolated-target sketch lines, which represent boundaries, line targets and isolated targets.
S202, according to the histogram statistics of the sketch-line aggregation degree, the sketch line segments whose aggregation degree equals the optimal aggregation degree are chosen as the seed line-segment set {E_k, k = 1, 2, ..., m}, where E_k denotes any sketch line segment in the seed line-segment set, k denotes the label of any sketch line segment in the seed line-segment set, m denotes the total number of seed line segments, and { } denotes the set operation.
S203, taking a line segment not yet added to the seed line-segment set as a base point, the line-segment set is solved recursively from this base point.
S204, a circular primitive whose radius is the upper bound of the optimal aggregation-degree interval is constructed; the line segments in the line-segment set are dilated with this circular primitive, the dilated line-segment set is eroded from outside to inside, and the aggregated region, in units of sketch points, is obtained in the sketch map.
S205, for the sketch lines representing boundaries, line targets and isolated targets, a geometric window of size 5 × 5 is constructed centered on each sketch point of each sketch line, giving the structural region.
S206, the parts of the sketch map other than the aggregated region and the structural region are taken as the no-sketch-line region.
S207, the aggregated region, the no-sketch-line region and the structural region of the sketch map are merged, giving the region map of the synthetic aperture radar SAR image, which consists of the aggregated region, the no-sketch-line region and the structural region.
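The dilate-then-erode of S204 with a circular primitive is a morphological closing; a minimal sketch with scipy.ndimage follows, where the sketch-point mask and the radius are illustrative stand-ins.

```python
import numpy as np
from scipy import ndimage

def aggregate_region(sketch_points, radius):
    """Sketch of S204: dilate the sketch-point mask with a circular primitive
    of the given radius, then erode the dilated set from outside in."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = x**2 + y**2 <= radius**2           # the circular primitive
    dilated = ndimage.binary_dilation(sketch_points, structure=disk)
    return ndimage.binary_erosion(dilated, structure=disk)

mask = np.zeros((40, 40), bool)
mask[10, 10], mask[10, 14], mask[10, 18] = True, True, True   # a row of sketch points
region = aggregate_region(mask, radius=3)
```

Closing is extensive, so the aggregated region always contains the original sketch points; nearby points whose dilated disks overlap are fused into one region.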
S3, dividing the pixel subspaces.
The region map, consisting of the aggregated region, the no-sketch-line region and the structural region, is mapped onto the synthetic aperture radar SAR image, giving the mixed pixel subspace, homogeneous pixel subspace and structural pixel subspace of the synthetic aperture radar SAR image.
S4, for each extremely inhomogeneous region in the mixed pixel subspace, using the intensity values of all pixels of the region and the Mellin-transform parameter estimation method, the estimates of the three parameters α, γ, n required by the G0 distribution are obtained for each region.
S5, constructing the stochastic gradient variational Bayesian network model.
The intermediate variable from the input layer to the hidden layer of the stochastic gradient variational Bayesian network model is computed according to the following formula: where h_φ denotes the intermediate variable from the input layer to the hidden layer of the stochastic gradient variational Bayesian network model, the remaining symbols denote the connection weights from the input layer to the intermediate variable h_φ and the corresponding bias vector, m denotes the number of hidden-layer neurons, m = 50, and n denotes the number of input-layer neurons, n = 441.
The approximate posterior probability of the stochastic gradient variational Bayesian network model is computed according to the following formula:

q_φ(z | x) ~ G0(α_φ, γ_φ)

where q_φ(z | x) denotes the approximate posterior probability of the stochastic gradient variational Bayesian network model and G0(α_φ, γ_φ) denotes the G0 distribution with roughness (shape) parameter α_φ and scale γ_φ. The probability density of the G0 distribution is

P(I(x, y)) = n^n Γ(n-α) I(x, y)^{n-1} / (γ^α Γ(n) Γ(-α) (γ + n·I(x, y))^{n-α}),   -α, γ, n, I(x, y) > 0

where I(x, y) is the image pixel intensity value, n is the equivalent number of looks, γ is the scale parameter and α is the roughness parameter; Γ(x) is the Gamma function, defined on the real field as

Γ(x) = ∫_0^∞ t^{x-1} e^{-t} dt.
When the equivalent number of looks n = 1, the distribution reduces to the Beta-Prime distribution, whose expression is

P(I(x, y)) = -α γ^{-α} (γ + I(x, y))^{α-1},   -α, γ, I(x, y) > 0.
The parameters α_φ and γ_φ are produced from the intermediate variable h_φ of the hidden layer: one set of connection weights and its bias vector maps h_φ to -α_φ, and another set of connection weights and its bias vector maps h_φ to γ_φ.
The intermediate variable from the hidden layer to the reconstruction layer of the stochastic gradient variational Bayesian network model is computed according to the following formula: where h_θ denotes the intermediate variable from the hidden layer to the reconstruction layer of the stochastic gradient variational Bayesian network model, produced from the hidden layer by a set of connection weights and the corresponding bias vector.
The conditional probability of the stochastic gradient variational Bayesian network model is computed according to the following formula: where p_θ(x | z) denotes the conditional probability of the stochastic gradient variational Bayesian network model and G0(α_θ, γ_θ) denotes the G0 distribution with roughness α_θ and scale γ_θ; one set of connection weights and its bias vector maps the intermediate variable h_θ of the reconstruction layer to -α_θ, and another set of connection weights and its bias vector maps h_θ to γ_θ.
The variational lower bound of the stochastic gradient variational Bayesian network model is computed according to the following formula: where L(θ, φ) denotes the variational lower bound of the stochastic gradient variational Bayesian network model, φ denotes the variational parameters of the model, θ denotes the generative parameters of the model, D_KL(q_φ(z|x) || p_θ(z)) denotes the relative entropy between q_φ(z|x) and p_θ(z), z denotes the hidden-layer variable of the model, p_θ(z) denotes the prior probability of the hidden-layer variable z, Σ(·) denotes summation, L denotes the number of Gaussian samples drawn for the hidden variable z, log(·) denotes the logarithm, and z_l denotes the l-th Gaussian sampling result for z, obtained by the reparameterization formula, in which ⊙ denotes pointwise multiplication and ε_l ~ N(0, I) denotes a Gaussian sampling auxiliary variable following the standard normal distribution.
For the term D_KL(q_φ(z|x) || p_θ(z)) in the variational lower bound (1), the present invention derives the analytic expression under the assumption that q_φ(z|x) and p_θ(z) follow the G0 distribution rather than the Gaussian distribution, because the extremely inhomogeneous regions of a SAR image satisfy the G0 distribution. Moreover, the G0 distribution takes different forms when the equivalent number of looks n equals 1 and when it does not, so the present invention considers the cases n = 1 and n ≠ 1 separately:
When the equivalent number of looks n = 1, the calculation formula is as follows:

-D_KL(q_φ(z|x) || p_θ(z)) = ∫ P_β′(z; -α, γ) log P_β′(z; c, γ_1) - P_β′(z; -α, γ) log P_β′(z; -α, γ) dz   (2)

where the prior p_θ(z) satisfies the Beta-Prime distribution with -α = c and γ = γ_1, c and γ_1 being positive known quantities drawn from the image; the approximate posterior q_φ(z|x) satisfies the Beta-Prime distribution, with z ∈ [a, b], 0 < a < b ≤ 1, denoting the normalized image pixel intensity values, also known quantities. The derivation is omitted here and the final form is given directly:
When the equivalent number of looks n ≠ 1, the calculation formula is as follows: the prior p_θ(z) satisfies the G0 distribution with -α = c and γ = γ_1, c and γ_1 being positive known quantities drawn from the image; the approximate posterior q_φ(z|x) satisfies the G0 distribution, with z ∈ [a, b], 0 < a < b ≤ 1, denoting the normalized image pixel intensity values, also known quantities. The derivation is omitted here and the final form is given directly:
-D_KL(q_φ(z|x) || p_θ(z)) = log n + c log γ_1 - log Γ(n-α) + log Γ(n) + log Γ(-α) - α log γ
+ (n-1) n^n Γ(n-α)/(γ^α Γ(n) Γ(-α)) · (1/(n(α-(n-1))) · (ln b · b^{n-1}/(γ_1+nb)^{n-1-α} - ln a · a^{n-1}/(γ_1+na)^{n-1-α} - ln b · b^{n-1}/(γ+nb)^{n-1-α} + ln a · a^{n-1}/(γ+na)^{n-1-α}))
- (n+c) Γ(n-α)/(γ^α Γ(n) Γ(-α)) · (1/(α-(n-2)) · ((nb)^{n-2} log(nb+γ_1) (nb+γ_1)^{n-2-α} - (na)^{n-2} log(na+γ_1) (na+γ_1)^{n-2-α} - (n-1) F′_{n-2} - G′_{n-2}))
+ (n-α) Γ(n-α)/(γ^α Γ(n) Γ(-α)) · (1/(α-(n-2)) · ((nb)^{n-2} log(nb+γ) (nb+γ)^{n-2-α} - (na)^{n-2} log(na+γ) (na+γ)^{n-2-α} - (n-1) F′_{n-2} - G′_{n-2}))
where F′_m = (1/(α-(m-1))) · ((nb)^{m-1} log(nb+γ_1) (nb+γ_1)^{m-1-α} - (na)^{m-1} log(na+γ_1) (na+γ_1)^{m-1-α} - (m-1) F′_{m-1} - G′_{m-1}),
G′_m = 1/(α-m) · ((nb)^m (nb+γ)^{α-m} - (na)^m (na+γ)^{α-m} - m G′_{m-1}),   m ≥ 1.
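The closed forms above replace the relative-entropy integral with analytic expressions. As a numerical sanity check, D_KL(q || p) for the n = 1 (Beta-Prime) case can be estimated both by Monte Carlo sampling from q and by numerical integration; the parameter values below are illustrative, and scipy's `betaprime(1, -α, scale=γ)` is used as the Beta-Prime density.

```python
import numpy as np
from scipy.stats import betaprime
from scipy.integrate import quad

# q = approximate posterior, p = prior; both Beta-Prime (n = 1 case of G0)
q = betaprime(1, 4.0, scale=2.0)   # plays the role of (-alpha, gamma) = (4, 2)
p = betaprime(1, 2.0, scale=3.0)   # plays the role of (c, gamma_1) = (2, 3)

# Monte Carlo estimate of D_KL(q||p) = E_q[log q(z) - log p(z)]
z = q.rvs(100000, random_state=42)
kl_mc = np.mean(q.logpdf(z) - p.logpdf(z))

# reference value by numerical integration over the support
kl_int, _ = quad(lambda t: q.pdf(t) * (q.logpdf(t) - p.logpdf(t)), 0, np.inf)
```

The two estimates agree to a few hundredths, which is the kind of check one can run against any derived closed form.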
S6, feature learning in the mixed pixel subspace.
S601, the mixed pixel subspace of the synthetic aperture radar SAR image is divided into regions according to spatial connectivity; if there is only one mutually disconnected region, step S7 is performed.
S602, for each mutually disconnected region, image blocks are sampled with a 21 × 21 window at every other pixel, giving multiple image-block samples for each region.
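The sampling of S602 can be sketched as follows; reading "every a sampling" as a stride of 2 is an assumption, and each 21 × 21 block flattens to the 441-dimensional vector that matches the network input size n = 441.

```python
import numpy as np

def sample_patches(img, size=21, step=2):
    """21x21 image-block samples taken at every other pixel (assumed stride 2),
    each flattened to a 441-dim vector matching the input layer (n = 441)."""
    H, W = img.shape
    patches = [img[r:r + size, c:c + size].ravel()
               for r in range(0, H - size + 1, step)
               for c in range(0, W - size + 1, step)]
    return np.array(patches)

img = np.arange(25 * 25, dtype=float).reshape(25, 25)  # illustrative region
P = sample_patches(img)
```

A 25 × 25 region yields 3 × 3 window positions, i.e. nine 441-dimensional samples.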
S603, for each mutually disconnected region, a group of random numbers following the inhomogeneous-terrain G0 distribution is generated for the region:
First step: compute the probability density of the inhomogeneous-terrain G0 distribution of the synthetic aperture radar SAR image according to the following formula: where P(I(x, y)) denotes the probability density of the inhomogeneous-terrain distribution of the synthetic aperture radar SAR image, I(x, y) denotes the intensity value of the pixel at coordinate (x, y), n denotes the equivalent number of looks of the synthetic aperture radar SAR image, α denotes the shape parameter of the synthetic aperture radar SAR image, γ denotes the scale parameter of the synthetic aperture radar SAR image, and Γ(·) denotes the Gamma function, whose value is obtained by the following formula:

Γ(u) = ∫_0^∞ t^{u-1} e^{-t} dt

where u denotes the independent variable, ∫ denotes integration and t denotes the integration variable.
Fifty image-block samples are randomly selected from the mixed-pixel-subspace region R_i to form a 441 × 50 matrix A.
Second step: using matrix A, a 441 × 50 matrix B is generated from the probability density function of the inhomogeneous-terrain G0 distribution of the synthetic aperture radar SAR image; the data in matrix B follow the inhomogeneous-terrain G0 distribution of the synthetic aperture radar SAR image.
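The text does not give the sampler for the G0-distributed random numbers; this sketch assumes the standard product construction of the G0 model (Gamma speckle times inverse-Gamma texture), with illustrative parameter values in place of the per-region estimates from S4.

```python
import numpy as np

def sample_g0(alpha, gam, n, size, rng):
    """Draw G0-distributed intensities (assumed product construction, not
    stated in the text): Gamma(n, 1/n) speckle times inv-Gamma(-alpha, gam)."""
    speckle = rng.gamma(n, 1.0 / n, size)
    texture = gam / rng.gamma(-alpha, 1.0, size)
    return speckle * texture

rng = np.random.default_rng(0)
# the 441 x 50 matrix B of the second step, for illustrative (alpha, gam, n)
B = sample_g0(alpha=-4.0, gam=3.0, n=1, size=(441, 50), rng=rng)
```

For these parameters the theoretical mean is γ/(-α-1) = 1, which the sample mean of B approaches.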
S604, for each mutually disconnected region, the corresponding group of random numbers is used to initialize the connection weights of the stochastic gradient variational Bayesian network, giving the initialized stochastic gradient variational Bayesian network:
First step: matrix B is taken as the connection weights from the input layer x to the intermediate variable h_φ of the stochastic gradient variational Bayesian network model.
Second step: 50 rows are randomly selected from matrix B to form a 50 × 50 matrix C; matrix C is taken as the connection weights from the intermediate variable h_φ of the stochastic gradient variational Bayesian network model to -α_φ, as the connection weights from h_φ to γ_φ, and as the connection weights from the hidden layer z to the intermediate variable h_θ of the model.
Third step: the transpose of matrix B is taken as the connection weights from the intermediate variable h_θ of the stochastic gradient variational Bayesian network model to -α_θ and as the connection weights from h_θ to γ_θ.
S605, for the initialized stochastic gradient variational Bayesian network of each mutually disconnected region, the image-block samples are taken as the input layer of the network, and the initialized network is trained by the stochastic gradient variational Bayesian inference method according to the following steps, giving the trained stochastic gradient variational Bayesian network:
Step 1: the prior probability of the hidden layer of the stochastic gradient variational Bayesian network model is initialized as the G0 distribution probability, and the approximate posterior probability of the model is likewise initialized as the G0 distribution probability, giving the analytic expression of the variational lower bound of the stochastic gradient variational Bayesian network model as follows:
(a) when the equivalent number of looks n = 1,
(b) when the equivalent number of looks n ≠ 1,
where F′_m = (1/(α-(m-1))) · ((nb)^{m-1} log(nb+γ_1) (nb+γ_1)^{m-1-α} - (na)^{m-1} log(na+γ_1) (na+γ_1)^{m-1-α} - (m-1) F′_{m-1} - G′_{m-1}),
G′_m = 1/(α-m) · ((nb)^m (nb+γ)^{α-m} - (na)^m (na+γ)^{α-m} - m G′_{m-1}).
Step 2: update the generative parameters of the stochastic gradient variational Bayesian network model according to the following formula: where θ_{t+1} denotes the generative parameters of the model after the (t+1)-th iteration, θ_t denotes the generative parameters after the t-th iteration, and the remaining term denotes the partial derivative of L(θ, φ) with respect to the parameter θ.
Step 3: update the variational parameters of the stochastic gradient variational Bayesian network model according to the following formula: where φ_{t+1} denotes the variational parameters of the model after the (t+1)-th iteration, φ_t denotes the variational parameters after the t-th iteration, and the remaining term denotes the partial derivative of L(θ, φ) with respect to the parameter φ.
Step 4: judge whether the number of iterations for which the variational lower bound has remained unchanged reaches the threshold 100; if so, perform Step 5; otherwise, perform Step 2.
Step 5: the training of the stochastic gradient variational Bayesian network is complete.
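Steps 2–4 amount to gradient ascent on the lower bound with a "bound unchanged for 100 iterations" stopping rule. A toy sketch follows; the learning rate and the quadratic stand-in objective are illustrative assumptions, not values from the text.

```python
import numpy as np

def train(L, grad, theta0, rho=0.1, patience=100, max_iter=10000):
    """theta_{t+1} = theta_t + rho * dL/dtheta (ascent on the lower bound L),
    stopping once L has not improved for `patience` consecutive iterations."""
    theta, best, stall = theta0, -np.inf, 0
    for _ in range(max_iter):
        theta = theta + rho * grad(theta)              # the update of Steps 2/3
        stall = stall + 1 if L(theta) <= best + 1e-12 else 0
        best = max(best, L(theta))
        if stall >= patience:                          # the test of Step 4
            break
    return theta

# toy bound L(t) = -(t - 2)^2, maximised at t = 2
theta_star = train(L=lambda t: -(t - 2.0) ** 2,
                   grad=lambda t: -2.0 * (t - 2.0), theta0=10.0)
```

In the actual model the gradients of L(θ, φ) with respect to θ and φ replace the toy gradient, and both parameter sets are updated in alternation.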
S606, for each mutually disconnected region, the weights of its trained stochastic gradient variational Bayesian network are taken as the feature set of the region.
S7, segmenting the mixed pixel subspace of the SAR image.
The feature sets of all mutually disconnected regions are spliced together, and the spliced feature set is used as a codebook.
For all features of each mutually disconnected region, the inner product with each feature in the codebook is computed, giving the projection vectors of all features of each region on the codebook.
Max pooling is applied to all projection vectors of each mutually disconnected region, giving one structural feature vector for each region.
Using a hierarchical clustering algorithm, the structural feature vectors of all mutually disconnected regions are clustered, giving the segmentation result of the mixed pixel subspace.
S8, segmenting the structural pixel subspace.
Line targets are extracted with visual semantic rules, and the structural pixel subspace is then segmented with the structural-region segmentation method of the multinomial latent model based on the geometric window, giving the segmentation result of the structural pixel subspace.
The structural-region segmentation method used by the present invention is the model proposed by Fang Liu et al. in the article "SAR Image Segmentation Based on Hierarchical Visual Semantic and Adaptive Neighborhood Multinomial Latent Model", published in IEEE Transactions on Geoscience and Remote Sensing in 2016.
S9, segmenting the homogeneous pixel subspace.
The homogeneous pixel subspace is segmented with the homogeneous-region segmentation method of the multinomial latent model based on adaptive window selection, giving the segmentation result of the homogeneous pixel subspace.
The homogeneous-region segmentation method of the multinomial latent model based on adaptive window selection used by the present invention is the model proposed by Fang Liu et al. in the article "SAR Image Segmentation Based on Hierarchical Visual Semantic and Adaptive Neighborhood Multinomial Latent Model", published in IEEE Transactions on Geoscience and Remote Sensing in 2016.
S10, the segmentation results of the mixed pixel subspace, the homogeneous pixel subspace and the structural pixel subspace are merged, giving the final segmentation result of the synthetic aperture radar SAR image.
The effect of the present invention is further described below with reference to the simulation figures.
1. Simulation conditions:
The hardware used in the simulation is a graphics workstation of the Intelligent Perception and Image Understanding Laboratory; the synthetic aperture radar SAR image used in the simulation is the Pyramid image, an X-band image with a resolution of 1 meter.
2. Simulation content:
The simulation experiment of the present invention segments the Pyramid SAR image, shown in Fig. 3(a). The image is taken from an X-band synthetic aperture radar SAR image with a resolution of 1 meter.
Using the SAR image sketching step of the present invention, the Pyramid image shown in Fig. 3(a) is sketched, giving the sketch map shown in Fig. 3(b).
Using the pixel-subspace division step of the present invention, the sketch map shown in Fig. 3(b) is regionalized, giving the region map shown in Fig. 3(c). The white areas in Fig. 3(c) represent the aggregated region; the others are the no-sketch-line region and the structural region. From the region map in Fig. 3(c), the Pyramid mixed-pixel-subspace map shown in Fig. 3(d) is obtained.
Using the stochastic gradient variational Bayesian network based on the G0 distribution of the present invention, the Pyramid mixed-pixel-subspace map shown in Fig. 3(d) is segmented, giving the clustering result of the mixed pixel subspace shown in Fig. 4(a), in which the gray areas represent untreated terrain spaces, regions of the same color represent the same terrain space, and regions of different colors represent different terrain spaces.
Using the result-merging step of the present invention, the mixed-pixel-subspace segmentation result shown in Fig. 4(a), the homogeneous-pixel-subspace segmentation result and the structural-pixel-subspace segmentation result are merged, giving Fig. 4(c), which is the final segmentation result of the Pyramid image of Fig. 3(a).
3. Analysis of the simulation results:
Fig. 4(a) is the mixed-pixel-subspace segmentation result of the Pyramid image obtained by the method of the present invention, and Fig. 4(b) is the mixed-pixel-subspace segmentation result obtained by a stochastic gradient variational Bayesian network based on the Gaussian distribution. Fig. 4(c) is the final segmentation result of the Pyramid image obtained by the method of the present invention, and Fig. 4(d) is the final segmentation result obtained under the Gaussian assumption. The comparison shows that the segmentation of the mixed pixel subspace by the method of the present invention is more reasonable, and that segmenting synthetic aperture radar SAR images with the method of the present invention improves the accuracy of SAR image segmentation.
The above content merely illustrates the technical idea of the present invention and cannot limit its protection scope; any change made on the basis of the technical scheme in accordance with the technical idea proposed by the present invention falls within the protection scope of the claims of the present invention.
Claims (10)
1. A stochastic gradient variational Bayesian SAR image segmentation method based on the G0 distribution, characterized in that a sketch map of the SAR image is extracted according to an initial sketch model; the SAR image is then divided into a mixed pixel subspace, a homogeneous pixel subspace and a structural pixel subspace according to the region map; for each extremely inhomogeneous region in the mixed pixel subspace, the G0 distribution parameters are estimated and the structural features are learned with a stochastic gradient variational Bayesian model based on the G0 distribution, realizing unsupervised segmentation of the mixed pixel subspace; the homogeneous pixel subspace and the structural pixel subspace are segmented with their corresponding methods, and the segmentation results of the three subspaces are fused, finally giving the SAR image segmentation result.
2. The stochastic gradient variational Bayesian SAR image segmentation method based on the G0 distribution according to claim 1, characterized in that the specific steps are as follows:
S1, inputting a synthetic aperture radar SAR image and establishing the sketch model of the synthetic aperture radar SAR image;
S2, using the sketch line field method, regionalizing the sketch map of the synthetic aperture radar SAR image to obtain the region map of the synthetic aperture radar SAR image, consisting of the aggregated region, the no-sketch-line region and the structural region;
S3, mapping the region map, consisting of the aggregated region, the no-sketch-line region and the structural region, onto the synthetic aperture radar SAR image to obtain the mixed pixel subspace, homogeneous pixel subspace and structural pixel subspace of the synthetic aperture radar SAR image;
S4, for each extremely inhomogeneous region in the mixed pixel subspace, using the intensity values of all pixels of the region and the Mellin-transform parameter estimation method to obtain the estimates of the three parameters α, γ, n required by the G0 distribution for each region;
S5, for the mixed pixel subspace, constructing a stochastic gradient variational Bayesian network model;
S6, performing feature learning on the mixed pixel subspace;
S7, segmenting the mixed pixel subspace of the SAR image;
S8, extracting line targets with visual semantic rules, then segmenting the structural pixel subspace with the structural-region segmentation method of the multinomial latent model based on the geometric window, obtaining the segmentation result of the structural pixel subspace;
S9, segmenting the homogeneous pixel subspace with the homogeneous-region segmentation method of the multinomial latent model based on adaptive window selection, obtaining the segmentation result of the homogeneous pixel subspace;
S10, merging the segmentation results of the mixed pixel subspace, the homogeneous pixel subspace and the structural pixel subspace to obtain the final segmentation result of the synthetic aperture radar SAR image.
3. The stochastic gradient variational Bayesian SAR image segmentation method based on the G0 distribution according to claim 2, characterized in that step S1 specifically includes the following steps:
S101, choosing an arbitrary number in the range [100, 150] as the total number of templates;
S102, constructing a template consisting of edges and lines formed by pixels with different directions and scales; using the direction and scale information of the template to construct an anisotropic Gaussian function, computing the weighting coefficient of each pixel in the template with this Gaussian function and counting the weighting coefficients of all pixels in the template, wherein the number of scales is 3 to 5 and the number of directions is 18;
S103, computing the mean of the pixels in the synthetic aperture radar SAR image region corresponding to the template:
μ = Σ_{g∈Ω} w_g A_g / Σ_{g∈Ω} w_g
where μ denotes the mean of all pixels in the synthetic aperture radar SAR image region corresponding to the template, Σ denotes summation, g denotes the coordinate of any pixel in the region Ω of the template, ∈ denotes set membership, w_g denotes the weighting coefficient of the pixel at coordinate g in region Ω of the template, with w_g ∈ [0, 1], and A_g denotes the value of the pixel at coordinate g of the synthetic aperture radar SAR image corresponding to region Ω of the template;
S104, computing the variance of the pixels in the synthetic aperture radar SAR image region corresponding to the template:
ν = Σ_{g∈Ω} w_g·(A_g - μ)² / Σ_{g∈Ω} w_g
where ν represents the variance of all pixels in the synthetic aperture radar SAR image corresponding to the template region coordinates;
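As a concrete illustration of S103 and S104, the weighted mean and variance can be computed as follows; this is a minimal sketch in which the example weights stand in for the anisotropic Gaussian coefficients w_g of S102:

```python
import numpy as np

def weighted_mean_var(A, w):
    """Weighted mean and variance of pixel values A with weights w:
    mu = sum(w_g * A_g) / sum(w_g),  v = sum(w_g * (A_g - mu)^2) / sum(w_g)."""
    A = np.asarray(A, dtype=float)
    w = np.asarray(w, dtype=float)
    mu = np.sum(w * A) / np.sum(w)
    v = np.sum(w * (A - mu) ** 2) / np.sum(w)
    return mu, v

# With uniform weights this reduces to the ordinary mean and variance.
mu, v = weighted_mean_var([1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 1.0, 1.0])
```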
S105, calculate the response of each pixel in the synthetic aperture radar SAR image to the ratio operator:
R = 1 - min{ μ_a/μ_b, μ_b/μ_a }
where R represents the response of each pixel in the synthetic aperture radar SAR image to the ratio operator, min{·} represents the minimum operation, a and b represent two different regions in the template, μ_a represents the mean of all pixels in template region a, and μ_b represents the mean of all pixels in template region b;
S106, calculate the response of each pixel in the synthetic aperture radar SAR image to the correlation operator:
C = sqrt( 1 / (1 + 2·(ν_a² + ν_b²)/(μ_a + μ_b)²) )
where C represents the response of each pixel in the synthetic aperture radar SAR image to the correlation operator, sqrt(·) represents the square-root operation, a and b represent two different regions in the template, ν_a represents the variance of all pixels in template region a, ν_b represents the variance of all pixels in template region b, μ_a represents the mean of all pixels in template region a, and μ_b represents the mean of all pixels in template region b;
S107, calculate the response of each pixel in the synthetic aperture radar SAR image to each template:
F = sqrt( (R² + C²) / 2 )
where F represents the response of each pixel in the synthetic aperture radar SAR image to each template, sqrt(·) represents the square-root operation, and R and C represent, respectively, the response of a pixel in the synthetic aperture radar SAR image to the ratio operator and the response of a pixel in the synthetic aperture radar SAR image to the correlation operator;
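The three responses of S105 through S107 can be sketched together as below; the sanity check uses two identical homogeneous regions, for which the ratio response vanishes and the correlation response is 1 (the region statistics are illustrative values, not taken from the claim):

```python
import numpy as np

def edge_response(mu_a, mu_b, v_a, v_b):
    """Ratio operator R, correlation operator C, and fused response F,
    following the formulas of steps S105-S107."""
    R = 1.0 - min(mu_a / mu_b, mu_b / mu_a)
    C = np.sqrt(1.0 / (1.0 + 2.0 * (v_a**2 + v_b**2) / (mu_a + mu_b) ** 2))
    F = np.sqrt((R**2 + C**2) / 2.0)
    return R, C, F

# Two identical, noise-free regions: R = 0 and C = 1 by construction.
R, C, F = edge_response(mu_a=2.0, mu_b=2.0, v_a=0.0, v_b=0.0)
```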
S108, judge whether the number of constructed templates equals the selected total number of templates; if not, return to step S102, otherwise perform step S109;
S109, select the template with the maximum response from all templates as the template of the synthetic aperture radar SAR image, take the maximum response of that template as the intensity of the pixel in the synthetic aperture radar SAR image, and take the direction of that template as the direction of the pixel in the synthetic aperture radar SAR image, obtaining the edge-response map and gradient map of the synthetic aperture radar SAR image;
S110, calculate the intensity value of the synthetic aperture radar SAR image intensity map, obtaining the intensity map:
I = r·t / (1 - r - t + 2·r·t)
where I represents the intensity value of the synthetic aperture radar SAR image intensity map, r represents the value in the edge-response map of the synthetic aperture radar SAR image, and t represents the value in the gradient map of the synthetic aperture radar SAR image;
S111, detect the intensity map using the non-maximum suppression method, obtaining a suggested sketch;
S112, choose the pixel with the maximum intensity in the suggested sketch, and connect the pixels in the suggested sketch that are connected to this maximum-intensity pixel to form suggested line segments, obtaining a suggested sketch map;
S113, calculate the code-length gain of the sketch lines in the suggested sketch map:
CLG = Σ_{j=1}^{J} [ A_j²/A_{j,0}² + ln(A_{j,0}²) - A_j²/A_{j,1}² - ln(A_{j,1}²) ]
where CLG represents the code-length gain of a sketch line in the suggested sketch map, Σ represents the summation operation, J represents the number of pixels in the neighborhood of the current sketch line, A_j represents the observed value of the j-th pixel in the neighborhood of the current sketch line, A_{j,0} represents the estimate of the j-th pixel in the sketch-line neighborhood under the hypothesis that the current sketch line cannot represent structural information, ln(·) represents the logarithm to base e, and A_{j,1} represents the estimate of the j-th pixel in the sketch-line neighborhood under the hypothesis that the current sketch line can represent structural information;
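The code-length gain of S113 can be sketched directly from the formula; the check below confirms that when both hypotheses produce identical estimates the gain is zero, so neither hypothesis is preferred (the sample values are illustrative):

```python
import numpy as np

def code_length_gain(A, A0, A1):
    """Code-length gain of a sketch line (step S113):
    CLG = sum_j [ A_j^2/A_{j,0}^2 + ln(A_{j,0}^2)
                - A_j^2/A_{j,1}^2 - ln(A_{j,1}^2) ],
    where A holds observed values and A0/A1 hold the estimates under the
    no-structure and structure hypotheses, respectively."""
    A, A0, A1 = (np.asarray(x, dtype=float) for x in (A, A0, A1))
    return float(np.sum(A**2 / A0**2 + np.log(A0**2)
                        - A**2 / A1**2 - np.log(A1**2)))

# Identical estimates under both hypotheses give zero gain.
clg = code_length_gain([1.0, 2.0], [1.5, 1.5], [1.5, 1.5])
```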
S114, arbitrarily choose a number in the range [5, 50] as the threshold T;
S115, select from all suggested sketch lines those with CLG > T and combine them into the sketch map of the synthetic aperture radar SAR image, completing the extraction of the sketch map of the synthetic aperture radar SAR image from the sketch model.
4. The stochastic gradient variational Bayes SAR image segmentation method based on the G0 distribution according to claim 2, characterized in that step S2 is specifically:
S201, according to the aggregation degree of the sketch line segments in the sketch map of the synthetic aperture radar SAR image, divide the sketch lines into aggregated sketch lines representing aggregated ground objects, and boundary sketch lines, line-target sketch lines, and isolated-target sketch lines representing boundaries, line targets, and isolated targets;
S202, according to the histogram statistics of the aggregation degree of the sketch line segments, choose the sketch line segments whose aggregation degree equals the optimal aggregation degree as the seed line-segment set {E_k, k = 1, 2, ..., m}, where E_k represents any sketch line segment in the seed line-segment set, k represents the label of any sketch line segment in the seed line-segment set, m represents the total number of seed line segments, and {·} represents the set operation;
S203, taking a line segment not yet added to the seed line-segment set as the base point, recursively solve the line-segment set from this base point;
S204, construct a circular primitive whose radius is the upper bound of the optimal aggregation-degree interval, dilate the line segments in the line-segment set with this circular primitive, and erode the dilated line-segment set from outside to inside, obtaining the aggregation region of the sketch map with the sketch point as the unit;
S205, for the sketch lines representing boundaries, line targets, and isolated targets, construct a geometric window of size 5 × 5 centered on each sketch point of each sketch line, obtaining the structural region;
S206, take the part of the sketch map excluding the aggregation region and the structural region as the region without sketch lines;
S207, merge the aggregation region, the region without sketch lines, and the structural region of the sketch map, obtaining the region map of the synthetic aperture radar SAR image comprising the aggregation region, the region without sketch lines, and the structural region.
5. The stochastic gradient variational Bayes SAR image segmentation method based on the G0 distribution according to claim 2, characterized in that step S5 is specifically as follows:
Intermediate variable from the input layer of the stochastic gradient variational Bayes network model to the hidden layer:
h_φ = W^1_{m×n}·x + b^1_{m×1}
where h_φ represents the intermediate variable from the input layer of the stochastic gradient variational Bayes network model to the hidden layer, W^1_{m×n} represents the connection weights from the input layer of the stochastic gradient variational Bayes network model to the intermediate variable h_φ, m represents the number of hidden-layer neurons, with m = 50, n represents the number of input-layer neurons, with n = 441, and b^1_{m×1} represents the bias vector corresponding to W^1_{m×n};
The approximate posterior probability of the stochastic gradient variational Bayes network model:

q_φ(z|x) ~ G0(α_φ, γ_φ)
-α_φ = W^2_{m×m}·h_φ + b^2_{m×1},  γ_φ = W^3_{m×m}·h_φ + b^3_{m×1}
where q_φ(z|x) represents the approximate posterior probability of the stochastic gradient variational Bayes network model, and G0(α_φ, γ_φ) represents the G0 distribution with uniformity α_φ and scale γ_φ; the probability density formula of the G0 distribution is
P^{G0}(I(x, y); α, γ, n) = n^n·Γ(n - α)·I(x, y)^{n-1} / [ γ^α·Γ(n)·Γ(-α)·(γ + n·I(x, y))^{n-α} ],
-α, γ, n, I(x, y) > 0
where I(x, y) is the image pixel intensity value, n is the equivalent number of looks, γ is the scale parameter, α is the uniformity, and Γ(x) is the Gamma function, defined on the real field as:
Γ(x) = ∫_0^{+∞} t^{x-1}·e^{-t} dt
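The G0 density above can be evaluated numerically with the log-gamma function for stability; the parameter values in the check are arbitrary, and the check verifies that for n = 1 the density reduces to the Beta-Prime form -α·γ^{-α}/(γ + I)^{1-α}:

```python
import math

def g0_pdf(I, alpha, gamma, n):
    """G0 density P(I; alpha, gamma, n)
       = n^n * G(n-a) * I^(n-1) / (g^a * G(n) * G(-a) * (g + n*I)^(n-a)),
    evaluated in log space via math.lgamma (valid for -a, g, n, I > 0)."""
    log_num = n * math.log(n) + math.lgamma(n - alpha) + (n - 1.0) * math.log(I)
    log_den = (alpha * math.log(gamma) + math.lgamma(n) + math.lgamma(-alpha)
               + (n - alpha) * math.log(gamma + n * I))
    return math.exp(log_num - log_den)

p = g0_pdf(2.0, alpha=-3.0, gamma=4.0, n=1.0)
# Beta-Prime special case with the same parameters:
q = 3.0 * 4.0 ** 3.0 / (4.0 + 2.0) ** (1.0 - (-3.0))
```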
When the equivalent number of looks n = 1, the distribution becomes the Beta-Prime distribution, whose expression is:
P^{β'}(I(x, y); α, γ, n) = -α·γ^{-α} / (γ + I(x, y))^{1-α},
-α, γ, I(x, y) > 0
where W^2_{m×m} represents the connection weights from the intermediate variable h_φ of the input layer to the hidden layer to -α_φ, b^2_{m×1} represents the bias vector corresponding to W^2_{m×m}, W^3_{m×m} represents the connection weights from the intermediate variable h_φ to γ_φ, and b^3_{m×1} represents the bias vector corresponding to W^3_{m×m};
Intermediate variable from the hidden layer of the stochastic gradient variational Bayes network model to the reconstruction layer:
h_θ = W^4_{m×m}·z + b^4_{m×1}
where h_θ represents the intermediate variable from the hidden layer of the stochastic gradient variational Bayes network model to the reconstruction layer, W^4_{m×m} represents the connection weights from the hidden layer to the intermediate variable h_θ, and b^4_{m×1} represents the bias vector corresponding to W^4_{m×m};
The conditional probability of the stochastic gradient variational Bayes network model:
p_θ(x̂|z) ~ G0(α_θ, γ_θ)
-α_θ = W^5_{n×m}·h_θ + b^5_{n×1},  γ_θ = W^6_{n×m}·h_θ + b^6_{n×1}
where p_θ(x̂|z) represents the conditional probability of the stochastic gradient variational Bayes network model, G0(α_θ, γ_θ) represents the G0 distribution with uniformity α_θ and scale γ_θ, W^5_{n×m} represents the connection weights from the intermediate variable h_θ of the hidden layer to the reconstruction layer to -α_θ, b^5_{n×1} represents the bias vector corresponding to W^5_{n×m}, W^6_{n×m} represents the connection weights from the intermediate variable h_θ to γ_θ, and b^6_{n×1} represents the bias vector corresponding to W^6_{n×m};
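The affine maps of the encoder and decoder above can be sketched with the stated layer sizes (m = 50 hidden neurons, n = 441 input neurons); the small random initialization and the use of h_φ itself as a stand-in latent sample are assumptions of this sketch, since the claim specifies the G0 sampling and weight initialization elsewhere:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 441  # hidden-layer and input-layer sizes given in the claim

# Encoder weights: x -> h_phi -> (-alpha_phi, gamma_phi)
W1, b1 = rng.standard_normal((m, n)) * 0.01, np.zeros((m, 1))
W2, b2 = rng.standard_normal((m, m)) * 0.01, np.zeros((m, 1))
W3, b3 = rng.standard_normal((m, m)) * 0.01, np.zeros((m, 1))
# Decoder weights: z -> h_theta -> (-alpha_theta, gamma_theta)
W4, b4 = rng.standard_normal((m, m)) * 0.01, np.zeros((m, 1))
W5, b5 = rng.standard_normal((n, m)) * 0.01, np.zeros((n, 1))
W6, b6 = rng.standard_normal((n, m)) * 0.01, np.zeros((n, 1))

x = rng.standard_normal((n, 1))
h_phi = W1 @ x + b1              # input layer -> hidden intermediate
neg_alpha_phi = W2 @ h_phi + b2  # -alpha_phi
gamma_phi = W3 @ h_phi + b3      # gamma_phi
z = h_phi                        # placeholder latent (G0 sampling omitted)
h_theta = W4 @ z + b4            # hidden -> reconstruction intermediate
neg_alpha_theta = W5 @ h_theta + b5
gamma_theta = W6 @ h_theta + b6
```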
The variational lower bound of the stochastic gradient variational Bayes network model:
L(θ, φ) = -D_KL(q_φ(z|x) || p_θ(z)) + (1/L)·Σ_{l=1}^{L} log p_θ(x̂|z^l)
where L(θ, φ) represents the variational lower bound of the stochastic gradient variational Bayes network model, φ represents the variational parameters of the stochastic gradient variational Bayes network model, θ represents the generation parameters of the stochastic gradient variational Bayes network model, D_KL(q_φ(z|x) || p_θ(z)) represents the relative entropy between q_φ(z|x) and p_θ(z), z represents the hidden-layer variable of the stochastic gradient variational Bayes network model, p_θ(z) represents the prior probability of the hidden-layer variable z, Σ(·) represents the summation operation, L represents the number of Gaussian samples drawn for the hidden-layer variable z, log(·) represents the logarithm operation, and z^l represents the l-th Gaussian sampling result of z, where ⊙ represents the element-wise product and ε^l represents the Gaussian sampling auxiliary variable, with ε^l ~ N(0, I), i.e., the Gaussian sampling auxiliary variable follows the standard normal distribution.
6. The stochastic gradient variational Bayes SAR image segmentation method based on the G0 distribution according to claim 5, characterized in that the two cases of equivalent number of looks n = 1 and n ≠ 1 are considered separately:
When the equivalent number of looks n = 1, it is calculated as follows:

-D_KL(q_φ(z|x) || p_θ(z)) = ∫ P^{β'}(z; -α, γ)·log P^{β'}(z; c, γ₁) - P^{β'}(z; -α, γ)·log P^{β'}(z; -α, γ) dz

where the prior p_θ(z) follows the Beta-Prime distribution with -α = c and γ = γ₁, c and γ₁ being positive numbers, known quantities derived from the image, and the approximate posterior q_φ(z|x) follows the Beta-Prime distribution, where z ∈ [a, b], 0 < a < b ≤ 1, represents the normalized image pixel intensity value;
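The n = 1 relative-entropy term restricted to z ∈ [a, b] can be approximated numerically; the trapezoid rule below is purely an illustration (the claim gives no numerical scheme), and the check confirms the term vanishes when prior and posterior coincide:

```python
import math

def beta_prime_pdf(z, alpha, gamma):
    """Beta-Prime density -a * g^(-a) / (g + z)^(1-a), the n = 1 case of G0."""
    return -alpha * gamma ** (-alpha) / (gamma + z) ** (1.0 - alpha)

def neg_kl_trapezoid(alpha, gamma, c, gamma1, a=1e-3, b=1.0, steps=10000):
    """-D_KL(q||p) = integral over [a, b] of q(z) log p(z) - q(z) log q(z),
    approximated with the trapezoid rule.  The prior has -alpha = c."""
    h = (b - a) / steps
    def f(z):
        q = beta_prime_pdf(z, alpha, gamma)
        p = beta_prime_pdf(z, -c, gamma1)
        return q * math.log(p) - q * math.log(q)
    total = 0.0
    for i in range(steps + 1):
        wgt = 0.5 if i in (0, steps) else 1.0
        total += wgt * f(a + i * h)
    return total * h

# Prior and posterior coincide (alpha = -c, gamma = gamma1): the term is zero.
val = neg_kl_trapezoid(alpha=-3.0, gamma=2.0, c=3.0, gamma1=2.0)
```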
When the equivalent number of looks n ≠ 1, it is calculated as follows:
-D_KL(q_φ(z|x) || p_θ(z)) = ∫ P^{G0}(z; -α, γ)·log P^{G0}(z; c, γ₁) - P^{G0}(z; -α, γ)·log P^{G0}(z; -α, γ) dz
where the prior p_θ(z) follows the G0 distribution with -α = c and γ = γ₁, c and γ₁ being positive numbers, known quantities derived from the image; the approximate posterior q_φ(z|x) follows the G0 distribution, where z ∈ [a, b], 0 < a < b ≤ 1, represents the normalized image pixel intensity value.
7. The stochastic gradient variational Bayes SAR image segmentation method based on the G0 distribution according to claim 2, characterized in that step S6 comprises the following steps:
S601, divide the mixed-pixel subspace of the synthetic aperture radar SAR image into regions according to spatial connectivity; if there is only one mutually unconnected region, perform step S7;
S602, for each mutually unconnected region, sample with a 21 × 21 window at every other pixel, obtaining multiple image-block samples corresponding to each region;
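The windowed sampling of S602 can be sketched as below; taking "every other pixel" to mean a stride of 2 is an assumption of this sketch, and the 21 × 21 window gives 441-dimensional samples, matching the input-layer size n = 441 of claim 5:

```python
import numpy as np

def sample_blocks(region, win=21, stride=2):
    """Extract win x win image blocks from a 2-D region at the given stride,
    flattening each block into a win*win vector (one row per sample)."""
    H, W = region.shape
    blocks = []
    for r in range(0, H - win + 1, stride):
        for c in range(0, W - win + 1, stride):
            blocks.append(region[r:r + win, c:c + win].ravel())
    return np.array(blocks)

# A 25 x 25 region yields a 3 x 3 grid of 21 x 21 blocks at stride 2.
blocks = sample_blocks(np.zeros((25, 25)))
```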
S603, for each mutually unconnected region, generate a group of random numbers corresponding to each region that follow the G0 distribution of inhomogeneous ground objects;
S604, for each mutually unconnected region, initialize the connection weights of the stochastic gradient variational Bayes network corresponding to each region with the group of random numbers, obtaining the initialized stochastic gradient variational Bayes network;
S605, for the initialized stochastic gradient variational Bayes network of each mutually unconnected region, take the image-block samples as the input layer of the stochastic gradient variational Bayes network and, according to the following steps, train the initialized stochastic gradient variational Bayes network with the method of stochastic gradient variational Bayes inference, obtaining the trained stochastic gradient variational Bayes network;
S606, for each mutually unconnected region, take the weights of its trained stochastic gradient variational Bayes network as the feature set of that region.
8. The stochastic gradient variational Bayes SAR image segmentation method based on the G0 distribution according to claim 7, characterized in that step S603 specifically includes:
First step: calculate the probability density of the G0 distribution of inhomogeneous ground objects of the synthetic aperture radar SAR image:
P(I(x, y)) = n^n·Γ(n - α)·I(x, y)^{n-1} / [ γ^α·Γ(n)·Γ(-α)·(γ + n·I(x, y))^{n-α} ]
where P(I(x, y)) represents the probability density of the inhomogeneous ground-object distribution of the synthetic aperture radar SAR image, I(x, y) represents the intensity value of the pixel at coordinate (x, y), n represents the equivalent number of looks of the synthetic aperture radar SAR image, α represents the shape parameter of the synthetic aperture radar SAR image, γ represents the scale parameter of the synthetic aperture radar SAR image, and Γ(·) represents the Gamma function, whose value is obtained by the following formula:
Γ(u) = ∫_0^{+∞} t^{u-1}·e^{-t} dt
where u represents the independent variable, ∫ represents the integration operation, and t represents the integration variable; randomly select 50 image-block samples from the mixed-pixel subspace region R_i to form a 441 × 50 matrix A;
Second step: using matrix A, generate a 441 × 50 matrix B from the probability density function of the G0 distribution of inhomogeneous ground objects of the synthetic aperture radar SAR image, such that the data in matrix B follow the G0 distribution of inhomogeneous ground objects of the synthetic aperture radar SAR image.
9. The stochastic gradient variational Bayes SAR image segmentation method based on the G0 distribution according to claim 7, characterized in that step S604 specifically includes:
First step: take matrix B as the connection weights from the input layer x of the stochastic gradient variational Bayes network model to the intermediate variable h_φ;
Second step: randomly select 50 columns from matrix B to form a 50 × 50 matrix C; take matrix C as the connection weights from the intermediate variable h_φ of the stochastic gradient variational Bayes network model to -α_φ, take matrix C as the connection weights from the intermediate variable h_φ of the stochastic gradient variational Bayes network model to γ_φ, and take matrix C as the connection weights from the hidden layer z of the stochastic gradient variational Bayes network model to the intermediate variable h_θ;
Third step: take the transpose of matrix B as the connection weights from the intermediate variable h_θ of the stochastic gradient variational Bayes network model to -α_θ, and take the transpose of matrix B as the connection weights from the intermediate variable h_θ of the stochastic gradient variational Bayes network model to γ_θ.
10. The stochastic gradient variational Bayes SAR image segmentation method based on the G0 distribution according to claim 7, characterized in that step S605 specifically includes:
First step: initialize the prior probability of the hidden layer of the stochastic gradient variational Bayes network model to the G0 distribution probability, and also initialize the approximate posterior probability of the stochastic gradient variational Bayes network model to the G0 distribution probability, obtaining the analytic expression of the variational lower bound of the stochastic gradient variational Bayes network model as follows:
(a) when the equivalent number of looks n = 1,
L(θ, φ) = log c + c·log γ₁ - log(-α) + α·log γ
  + (1 + c)·log(b + γ₁)·(b + γ₁)^α / γ^α - (1 + c)·log(a + γ₁)·(a + γ₁)^α / γ^α
  + (1 + c)·(a + γ₁)^α / (α·γ^α) - (1 + c)·(b + γ₁)^α / (α·γ^α)
  - (1 - α)·log(b + γ)·(b + γ)^α / γ^α + (1 - α)·log(a + γ)·(a + γ)^α / γ^α
  - (1 - α)·(a + γ)^α / (α·γ^α) + (1 - α)·(b + γ)^α / (α·γ^α)
  + (1/L)·Σ_{l=1}^{L} log p_θ(x|z^(l))
(b) when the equivalent number of looks n ≠ 1,
L(θ, φ) = log n + c·log γ₁ - log Γ(n - α) + log Γ(n) + log Γ(-α) - α·log γ
  + (n - 1)·n^n·Γ(n - α) / (γ^α·Γ(n)·Γ(-α)) · (1 / (n·(α - (n - 1))))
    · ( ln b · b^{n-1} / (γ₁ + n·b)^{n-1-α} - ln a · a^{n-1} / (γ₁ + n·a)^{n-1-α}
      - ln b · b^{n-1} / (γ + n·b)^{n-1-α} + ln a · a^{n-1} / (γ + n·a)^{n-1-α} )
  - (n + c)·Γ(n - α) / (γ^α·Γ(n)·Γ(-α)) · (1 / (α - (n - 2)))
    · ( (n·b)^{n-2}·log(n·b + γ₁)·(n·b + γ₁)^{n-2-α}
      - (n·a)^{n-2}·log(n·a + γ₁)·(n·a + γ₁)^{n-2-α} - (n - 1)·F_{n-...
<mn>2</mn>
</mrow>
<mo>&prime;</mo>
</msubsup>
<mo>-</mo>
<msubsup>
<mi>G</mi>
<mrow>
<mi>n</mi>
<mo>-</mo>
<mn>2</mn>
</mrow>
<mo>&prime;</mo>
</msubsup>
<mo>)</mo>
<mo>)</mo>
</mrow>
</mtd>
</mtr>
<mtr>
<mtd>
<mrow>
<mo>+</mo>
<mrow>
<mo>(</mo>
<mi>n</mi>
<mo>-</mo>
<mi>&alpha;</mi>
<mo>)</mo>
</mrow>
<mi>&Gamma;</mi>
<mrow>
<mo>(</mo>
<mi>n</mi>
<mo>-</mo>
<mi>&alpha;</mi>
<mo>)</mo>
</mrow>
<mo>/</mo>
<mrow>
<mo>(</mo>
<msup>
<mi>&gamma;</mi>
<mi>&alpha;</mi>
</msup>
<mi>&Gamma;</mi>
<mo>(</mo>
<mi>n</mi>
<mo>)</mo>
<mi>&Gamma;</mi>
<mo>(</mo>
<mrow>
<mo>-</mo>
<mi>&alpha;</mi>
</mrow>
<mo>)</mo>
<mo>)</mo>
</mrow>
<mo>(</mo>
<mn>1</mn>
<mo>/</mo>
<mo>(</mo>
<mrow>
<mi>&alpha;</mi>
<mo>-</mo>
<mrow>
<mo>(</mo>
<mrow>
<mi>n</mi>
<mo>-</mo>
<mn>2</mn>
</mrow>
<mo>)</mo>
</mrow>
</mrow>
<mo>)</mo>
<mo>&CenterDot;</mo>
<mo>(</mo>
<msup>
<mrow>
<mo>(</mo>
<mrow>
<mi>n</mi>
<mi>b</mi>
</mrow>
<mo>)</mo>
</mrow>
<mrow>
<mi>n</mi>
<mo>-</mo>
<mn>2</mn>
</mrow>
</msup>
<mi>log</mi>
<mo>(</mo>
<mrow>
<mi>n</mi>
<mi>b</mi>
<mo>+</mo>
<mi>&gamma;</mi>
</mrow>
<mo>)</mo>
<msup>
<mrow>
<mo>(</mo>
<mrow>
<mi>n</mi>
<mi>b</mi>
<mo>+</mo>
<mi>&gamma;</mi>
</mrow>
<mo>)</mo>
</mrow>
<mrow>
<mi>n</mi>
<mo>-</mo>
<mn>2</mn>
<mo>-</mo>
<mi>&alpha;</mi>
</mrow>
</msup>
</mrow>
</mtd>
</mtr>
<mtr>
<mtd>
<mrow>
<mo>-</mo>
<msup>
<mrow>
<mo>(</mo>
<mi>n</mi>
<mi>a</mi>
<mo>)</mo>
</mrow>
<mrow>
<mi>n</mi>
<mo>-</mo>
<mn>2</mn>
</mrow>
</msup>
<mi>log</mi>
<mrow>
<mo>(</mo>
<mi>n</mi>
<mi>a</mi>
<mo>+</mo>
<mi>&gamma;</mi>
<mo>)</mo>
</mrow>
<msup>
<mrow>
<mo>(</mo>
<mi>n</mi>
<mi>a</mi>
<mo>+</mo>
<mi>&gamma;</mi>
<mo>)</mo>
</mrow>
<mrow>
<mi>n</mi>
<mo>-</mo>
<mn>2</mn>
<mo>-</mo>
<mi>&alpha;</mi>
</mrow>
</msup>
<mo>-</mo>
<mrow>
<mo>(</mo>
<mi>n</mi>
<mo>-</mo>
<mn>1</mn>
<mo>)</mo>
</mrow>
<msubsup>
<mi>F</mi>
<mrow>
<mi>n</mi>
<mo>-</mo>
<mn>2</mn>
</mrow>
<mo>&prime;</mo>
</msubsup>
<mo>-</mo>
<msubsup>
<mi>G</mi>
<mrow>
<mi>n</mi>
<mo>-</mo>
<mn>2</mn>
</mrow>
<mo>&prime;</mo>
</msubsup>
<mo>)</mo>
<mo>)</mo>
<mo>+</mo>
<mfrac>
<mn>1</mn>
<mi>L</mi>
</mfrac>
<munderover>
<mi>&Sigma;</mi>
<mrow>
<mi>l</mi>
<mo>=</mo>
<mn>1</mn>
</mrow>
<mi>L</mi>
</munderover>
<mrow>
<mo>(</mo>
<mi>log</mi>
<mi> </mi>
<msub>
<mi>p</mi>
<mi>&theta;</mi>
</msub>
<mo>(</mo>
<mrow>
<mi>x</mi>
<mo>|</mo>
<msup>
<mi>z</mi>
<mrow>
<mo>(</mo>
<mi>l</mi>
<mo>)</mo>
</mrow>
</msup>
</mrow>
<mo>)</mo>
<mo>)</mo>
</mrow>
</mrow>
</mtd>
</mtr>
</mtable>
</mfenced>
where
$$F'_{m}=\frac{1}{\alpha-(m-1)}\Big((nb)^{m-1}\log(nb+\gamma_{1})(nb+\gamma_{1})^{m-1-\alpha}-(na)^{m-1}\log(na+\gamma_{1})(na+\gamma_{1})^{m-1-\alpha}-(m-1)F'_{m-1}-G'_{m-1}\Big)$$
$$G'_{m}=\frac{1}{\alpha-m}\Big((nb)^{m}(nb+\gamma)^{\alpha-m}-(na)^{m}(na+\gamma)^{\alpha-m}-mG'_{m-1}\Big);$$
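The recurrences for F'_m and G'_m can be evaluated directly. A minimal sketch in Python, assuming the natural base cases in which the (m-1)·F'_{m-1} and m·G'_{m-1} terms vanish at the bottom of the recursion (the patent does not state initial values), and assuming α is not an integer in the visited range so no division by zero occurs:

```python
import math

def G_prime(m, n, a, b, alpha, gamma):
    """G'_m from the recurrence above.

    Assumed base case: at m = 0 the m*G'_{m-1} term vanishes,
    so no separate initial value is needed."""
    rec = m * G_prime(m - 1, n, a, b, alpha, gamma) if m > 0 else 0.0
    return ((n * b) ** m * (n * b + gamma) ** (alpha - m)
            - (n * a) ** m * (n * a + gamma) ** (alpha - m)
            - rec) / (alpha - m)

def F_prime(m, n, a, b, alpha, gamma1, gamma):
    """F'_m from the recurrence above, for m >= 1.

    Assumed base case: at m = 1 the (m-1)*F'_{m-1} term vanishes,
    so the recursion bottoms out through G'_{m-1} alone."""
    rec_f = ((m - 1) * F_prime(m - 1, n, a, b, alpha, gamma1, gamma)
             if m > 1 else 0.0)
    rec_g = G_prime(m - 1, n, a, b, alpha, gamma)
    return ((n * b) ** (m - 1) * math.log(n * b + gamma1) * (n * b + gamma1) ** (m - 1 - alpha)
            - (n * a) ** (m - 1) * math.log(n * a + gamma1) * (n * a + gamma1) ** (m - 1 - alpha)
            - rec_f - rec_g) / (alpha - (m - 1))
```

Each call to F'_m or G'_m costs O(m), so computing F'_{n-2} and G'_{n-2} for the gradient above is linear in n.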
Second step: update the generative parameters of the stochastic gradient variational Bayes network model:
$$\theta^{t+1}=\theta^{t}+\frac{\partial L(\theta,\phi)}{\partial\theta}$$
where θ^{t+1} denotes the generative parameters of the stochastic gradient variational Bayes network model after iteration t+1, θ^t denotes the generative parameters after iteration t, and ∂L(θ,φ)/∂θ denotes the operation of taking the partial derivative of L(θ,φ) with respect to the parameters θ;
Third step: update the variational parameters of the stochastic gradient variational Bayes network model:
$$\phi^{t+1}=\phi^{t}+\frac{\partial L(\theta,\phi)}{\partial\phi}$$
where φ^{t+1} denotes the variational parameters of the stochastic gradient variational Bayes network model after iteration t+1, φ^t denotes the variational parameters after iteration t, and ∂L(θ,φ)/∂φ denotes the operation of taking the partial derivative of L(θ,φ) with respect to the parameters φ;
Fourth step: judge whether the number of iterations for which the variational lower bound has remained constant reaches the threshold of 100; if so, perform the fifth step; otherwise, return to the second step;
Fifth step: the training of the stochastic gradient variational Bayes network is complete.
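The second through fifth steps amount to plain gradient ascent on the lower bound L(θ,φ) with a stagnation-based stopping rule. A minimal sketch, assuming a step size `lr` and a numerical tolerance `tol` (the patent's updates add the raw gradient and check exact constancy of the lower bound); the toy bound `L` and its gradients at the end are illustrative stand-ins, not the model's actual lower bound:

```python
def train_sgvb(grad_theta, grad_phi, lower_bound, theta, phi,
               lr=0.1, patience=100, tol=1e-8, max_iter=100000):
    """Gradient-ascent loop for steps 2-5: update theta, update phi,
    and stop once the variational lower bound has stayed (numerically)
    constant for `patience` consecutive iterations."""
    unchanged = 0
    prev = lower_bound(theta, phi)
    for _ in range(max_iter):
        theta = theta + lr * grad_theta(theta, phi)   # second step
        phi = phi + lr * grad_phi(theta, phi)         # third step
        cur = lower_bound(theta, phi)
        unchanged = unchanged + 1 if abs(cur - prev) < tol else 0
        prev = cur
        if unchanged >= patience:                     # fourth step
            break
    return theta, phi                                 # fifth step: trained

# Illustrative stand-in bound L(theta, phi) = -(theta^2 + phi^2),
# maximised at (0, 0); its gradients are -2*theta and -2*phi.
L = lambda t, p: -(t ** 2 + p ** 2)
gt = lambda t, p: -2.0 * t
gp = lambda t, p: -2.0 * p
theta, phi = train_sgvb(gt, gp, L, 3.0, -2.0)
```

With the toy bound, each update scales the parameters by 0.8, so the loop converges to the maximiser (0, 0) and the stagnation counter then triggers the stop.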
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710702367.1A CN107464247B (en) | 2017-08-16 | 2017-08-16 | Based on G0Distributed random gradient variational Bayesian SAR image segmentation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107464247A true CN107464247A (en) | 2017-12-12 |
CN107464247B CN107464247B (en) | 2021-09-21 |
Family
ID=60549887
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710702367.1A Active CN107464247B (en) | 2017-08-16 | 2017-08-16 | Based on G0Distributed random gradient variational Bayesian SAR image segmentation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107464247B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110229034A1 (en) * | 2006-07-31 | 2011-09-22 | Stc.Unm | System and method for reduction of speckle noise in an image |
CN102903102A (en) * | 2012-09-11 | 2013-01-30 | 西安电子科技大学 | Non-local-based triple Markov random field synthetic aperture radar (SAR) image segmentation method |
CN106611422A (en) * | 2016-12-30 | 2017-05-03 | 西安电子科技大学 | Stochastic gradient Bayesian SAR image segmentation method based on sketch structure |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108664706A (en) * | 2018-04-16 | 2018-10-16 | 浙江大学 | A kind of synthetic ammonia process primary reformer oxygen content On-line Estimation method based on semi-supervised Bayes's gauss hybrid models |
CN108664706B (en) * | 2018-04-16 | 2020-11-03 | 浙江大学 | Semi-supervised Bayesian Gaussian mixture model-based online estimation method for oxygen content of one-stage furnace in ammonia synthesis process |
CN108986108A (en) * | 2018-06-26 | 2018-12-11 | 西安电子科技大学 | A kind of SAR image sample block selection method based on sketch line segment aggregation properties |
CN108986108B (en) * | 2018-06-26 | 2022-04-19 | 西安电子科技大学 | SAR image sample block selection method based on sketch line segment aggregation characteristics |
CN109344837A (en) * | 2018-10-22 | 2019-02-15 | 西安电子科技大学 | A kind of SAR image semantic segmentation method based on depth convolutional network and Weakly supervised study |
CN109344837B (en) * | 2018-10-22 | 2022-03-04 | 西安电子科技大学 | SAR image semantic segmentation method based on deep convolutional network and weak supervised learning |
CN110108806A (en) * | 2019-04-04 | 2019-08-09 | 广州供电局有限公司 | Transformer oil chromatographic data presentation technique based on probabilistic information compression |
CN110108806B (en) * | 2019-04-04 | 2022-03-22 | 广东电网有限责任公司广州供电局 | Transformer oil chromatographic data representation method based on probability information compression |
Also Published As
Publication number | Publication date |
---|---|
CN107464247B (en) | 2021-09-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106611422B (en) | Stochastic gradient Bayes's SAR image segmentation method based on sketch structure | |
CN104077599B (en) | Polarization SAR image classification method based on deep neural network | |
CN106611420B (en) | The SAR image segmentation method constrained based on deconvolution network and sketch map direction | |
CN106651884B (en) | Mean field variation Bayes's SAR image segmentation method based on sketch structure | |
CN106611423B (en) | SAR image segmentation method based on ridge ripple filter and deconvolution structural model | |
CN106683102B (en) | SAR image segmentation method based on ridge ripple filter and convolutional coding structure learning model | |
CN107464247A (en) | One kind is based on G0Stochastic gradient variation Bayes's SAR image segmentation method of distribution | |
Tso et al. | A contextual classification scheme based on MRF model with improved parameter estimation and multiscale fuzzy line process | |
CN107527023B (en) | Polarized SAR image classification method based on superpixels and topic models | |
CN106846322B (en) | The SAR image segmentation method learnt based on curve wave filter and convolutional coding structure | |
DE102008060789A1 (en) | System and method for unmonitored detection and Gleason grading for a prostate cancer preparation (whole-mount) using NIR fluorescence | |
CN107403434A (en) | SAR image semantic segmentation method based on two-phase analyzing method | |
CN105913081A (en) | Improved PCAnet-based SAR image classification method | |
CN103955709B (en) | Weighted synthetic kernel and triple markov field (TMF) based polarimetric synthetic aperture radar (SAR) image classification method | |
CN105760900A (en) | Hyperspectral image classification method based on affinity propagation clustering and sparse multiple kernel learning | |
CN107239777A (en) | A kind of tableware detection and recognition methods based on various visual angles graph model | |
CN104408731B (en) | Region graph and statistic similarity coding-based SAR (synthetic aperture radar) image segmentation method | |
CN109344917A (en) | A kind of the species discrimination method and identification system of Euproctis insect | |
CN109829519A (en) | Classifying Method in Remote Sensing Image and system based on adaptive space information | |
CN105512670B (en) | Divided based on KECA Feature Dimension Reduction and the HRCT peripheral nerve of cluster | |
Zou et al. | Survey on clustering-based image segmentation techniques | |
CN104463227B (en) | Classification of Polarimetric SAR Image method based on FQPSO and goal decomposition | |
CN107292268A (en) | The SAR image semantic segmentation method of quick ridge ripple deconvolution Structure learning model | |
CN105389798B (en) | SAR image segmentation method based on deconvolution network Yu mapping inference network | |
CN104751461A (en) | White cell nucleus segmentation method based on histogram threshold and low rank representation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||