CN111210433A - Markov field remote sensing image segmentation method based on anisotropic potential function - Google Patents

Markov field remote sensing image segmentation method based on anisotropic potential function

Info

Publication number
CN111210433A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN201910302954.0A
Other languages
Chinese (zh)
Other versions
CN111210433B (en)
Inventor
郑晨
潘欣欣
陈运成
Current Assignee
Wuhan Luojia Totem Technology Co ltd
Original Assignee
Henan University
Priority date
Filing date
Publication date
Application filed by Henan University filed Critical Henan University
Priority to CN201910302954.0A priority Critical patent/CN111210433B/en
Publication of CN111210433A publication Critical patent/CN111210433A/en
Application granted granted Critical
Publication of CN111210433B publication Critical patent/CN111210433B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing

Abstract

The invention provides a Markov field remote sensing image segmentation method based on an anisotropic potential function, which comprises the following steps: performing initialization over-segmentation processing on the pixel-level image to obtain an object-level image and a region adjacency graph, and defining a neighborhood system, an observation feature field and a segmentation marker field on the region adjacency graph; performing feature modeling of a likelihood function on the observation feature field, and performing anisotropic probability modeling on the potential function of the energy function in the joint probability distribution of the segmentation marker field; obtaining the posterior distribution of the segmentation marker field under the condition of potential-function anisotropy according to the Bayes criterion, and updating the segmentation iteratively with the maximum a posteriori probability criterion to obtain the final segmentation result. Both qualitative and quantitative experimental analysis verify the effectiveness of the method for remote sensing image segmentation; the method alleviates the segmentation difficulty caused by similar spectral values in remote sensing images and, compared with manual detection, greatly improves working efficiency.

Description

Markov field remote sensing image segmentation method based on anisotropic potential function
Technical Field
The invention relates to the technical field of image segmentation, in particular to the segmentation of high-resolution remote sensing images, and specifically to a Markov field remote sensing image segmentation method based on an anisotropic potential function.
Background
Image segmentation is a technique and process for dividing an image into specific regions with unique properties and extracting an object of interest. It is a key step from image processing to image analysis.
In recent years, with the continuous development of remote sensing technology, remote sensing images acquired by different sensors have been widely applied in fields such as land cover detection, forest cover detection, wetland resource detection, urban and rural planning, and military reconnaissance. In order to make full use of the information contained in remote sensing image data, researchers have conducted a great deal of image analysis research, in which image segmentation has become one of the research hotspots. There are currently many image segmentation methods, such as thresholding and clustering, which generally assume that pixels of the same ground-object class have similar features while different classes differ. However, as the spatial resolution of remote sensing images improves, ground-object details and structural information become more prominent, and the phenomena of "same object, different spectra" and "different objects, same spectrum" are aggravated, so these methods struggle to obtain high-precision segmentation results. To describe the characteristics of high-spatial-resolution remote sensing images, methods such as deep learning and MRF have been proposed in succession; building on the original segmentation theory, they take more prior information or conditional constraints into account and improve segmentation accuracy. Among them, MRF is widely used because its Markov property allows it to describe the spatial relationships between image primitives well.
There are many common MRF-based image segmentation methods. The earliest is the classical pixel-level ICM algorithm, which later developed into image processing methods based on multi-scale MRF models; both are pixel-based segmentation methods whose spatial range of consideration is limited. Object-level methods such as OMRF were therefore proposed, which take more region information and spatial neighborhood relationships into account. Although these methods improve the segmentation effect, their potential functions are all isotropic, i.e. they only judge whether two adjacent primitives belong to the same ground-object class. The relationships between ground objects are not limited to this, however: different correlations exist between different ground-object classes.
Disclosure of Invention
Aiming at the technical problems that the potential function of the marker-field energy function in existing remote sensing image segmentation methods is isotropic, that it can only judge whether two adjacent primitives belong to the same ground-object class, and that many small, fragmented misclassifications result, the invention provides a Markov field remote sensing image segmentation method based on an anisotropic potential function.
In order to achieve this purpose, the technical scheme of the invention is realized as follows: a Markov field remote sensing image segmentation method based on an anisotropic potential function changes the potential function of the energy function in the marker field from its original isotropy to anisotropy, and comprises the following steps:
Step one: performing initialization over-segmentation processing on an input image, dividing the image into a plurality of over-segmented regions, establishing a region adjacency graph from the over-segmented regions, and defining the image neighborhood system, the object-level observation feature field and the object-level segmentation marker field according to the region adjacency graph;
step two: performing feature modeling of a likelihood function on an observation feature field by adopting mixed Gaussian distribution, and performing anisotropic probability modeling on a potential function of an energy function in joint probability distribution of the segmentation marker field;
step three: and obtaining posterior distribution of the segmentation marker field under the condition of potential function anisotropy according to Bayes criterion, and updating iterative segmentation by using maximum posterior probability criterion to obtain a final segmentation image.
The neighborhood system, observation feature field and segmentation marker field in step one are realized as follows:

1) Each band of the input image I(R, G, B) has size a × b, and the position set is

S = {(i1, j1) | 1 ≤ i1 ≤ a, 1 ≤ j1 ≤ b};

2) The image I(R, G, B) is over-segmented with the mean-shift algorithm to obtain the corresponding over-segmented regions, which are labelled from left to right and top to bottom; the set of over-segmented regions is R = {R_1, R_2, …, R_l}, where l is the number of over-segmented regions and R_s is the s-th over-segmented region, s ∈ {1, 2, …, l};

3) The corresponding region adjacency graph RAG is constructed from the over-segmented regions: G = (V, E); the vertex set is V = {V_1, V_2, …, V_l}, where V_s represents the position of the over-segmented region R_s; the boundary set is E = {e_st | 1 ≤ s, t ≤ l}, where e_st represents the spatial adjacency of the over-segmented regions R_s and R_t;

4) The object-level segmentation marker field X = {X_s | 1 ≤ s ≤ l} = {X_1, X_2, …, X_l} is defined on the region adjacency graph G, where the random variable X_s denotes the class label of the s-th region and X_s ∈ {1, 2, …, k}, with k the number of segmentation classes; the object-level observation feature field Y = {Y_s | 1 ≤ s ≤ l} = {Y_1, Y_2, …, Y_l} is defined on G, where Y_s is the image feature extracted from the over-segmented region R_s;

5) The corresponding neighborhood system N = {N_s | 1 ≤ s ≤ l} is defined from the region adjacency graph G, where N_s is the neighborhood of vertex s, i.e. N_s = {R_t | e_st > 0, 1 ≤ t ≤ l}.
The spatial adjacency e_st in the boundary set E is realized as follows: if the two over-segmented regions R_s and R_t are adjacent, its value is 1, otherwise 0, i.e.

e_st = 1 if d_st > 0, and e_st = 0 otherwise,

where d_st is the length of the shared boundary between the over-segmented regions R_s and R_t.
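The adjacency definition above can be sketched in a few lines of Python. This is an editorial illustration, not the patent's implementation: the mean-shift over-segmentation itself is not reproduced, and the helper name and 4-connectivity convention are assumptions.

```python
import numpy as np

def region_adjacency(labels):
    """Build the boundary set e_st of a region adjacency graph (RAG)
    from an over-segmentation label image.

    `labels` is a 2-D integer array whose entries are region indices
    1..l.  d_st counts the pairs of 4-adjacent pixels shared by regions
    s and t; e_st = 1 iff d_st > 0, matching the patent's definition.
    """
    l = labels.max()
    d = np.zeros((l + 1, l + 1), dtype=int)   # shared-boundary lengths d_st
    # compare each pixel with its right and bottom neighbour
    for a, b in [(labels[:, :-1], labels[:, 1:]),
                 (labels[:-1, :], labels[1:, :])]:
        mask = a != b
        for s, t in zip(a[mask], b[mask]):
            d[s, t] += 1
            d[t, s] += 1
    e = (d > 0).astype(int)                   # spatial adjacency e_st
    return e, d
```

On a toy 3 × 3 label image this yields a symmetric adjacency matrix whose entries index the regions directly (row/column 0 is unused so that region s maps to index s).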
The feature modeling in step two is realized as follows: the observation feature field Y of the image is modeled with a mixture of Gaussian distributions, assuming that homogeneous regions in the likelihood function share the same distribution, i.e. regions of the same class in the image obey the same distribution. In this case,

P(Y_s = y_s | X_s = h) = (2π)^(−p/2) |Σ_h|^(−1/2) exp( −(1/2)(y_s − μ_h)^T Σ_h^(−1) (y_s − μ_h) ).

Given a realization X = x of the marker field, the components of the observation feature field Y = y are conditionally independent of one another, and therefore

P(Y = y | X = x) = ∏_{s=1}^{l} P(Y_s = y_s | X_s = x_s),

where P(Y = y | X = x) is the probability of the observed data Y = y given the segmentation marker field X = x; x_s is the value of the class label X_s and y_s the value of the image feature Y_s; μ_h is the feature mean of class h, Σ_h the feature covariance matrix, and p the dimension of the image data.

The two parameters, the feature mean μ_h and the feature covariance matrix Σ_h, are estimated by maximum likelihood:

μ_h = (1/n_h) Σ_{s: x_s = h} y_s,  Σ_h = (1/n_h) Σ_{s: x_s = h} (y_s − μ_h)(y_s − μ_h)^T,

where n_h is the number of regions currently labelled h.
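The maximum-likelihood estimates above can be sketched as follows. This is an illustrative helper (function name and input shapes are assumptions, not from the patent):

```python
import numpy as np

def estimate_class_params(features, x):
    """Maximum-likelihood estimates of the per-class feature mean mu_h
    and covariance Sigma_h used in the Gaussian likelihood.

    `features` : (l, p) array, one p-dimensional feature y_s per region.
    `x`        : length-l array of current class labels x_s.
    Returns dicts {h: mu_h} and {h: Sigma_h}.
    """
    mu, sigma = {}, {}
    for h in np.unique(x):
        ys = features[x == h]              # regions currently labelled h
        mu[h] = ys.mean(axis=0)            # mu_h = (1/n_h) sum y_s
        diff = ys - mu[h]
        sigma[h] = diff.T @ diff / len(ys) # Sigma_h = (1/n_h) sum (y_s-mu_h)(y_s-mu_h)^T
    return mu, sigma
```

Note the normalisation by n_h (the biased ML estimator) rather than n_h − 1, matching the maximum-likelihood formulation.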
The probability modeling in step two is realized as follows:

In the MRF model, the marker field is assumed to have the Markov property; by the Hammersley–Clifford theorem, the joint probability P(x) of the segmentation marker field then follows a Gibbs distribution:

P(X = x) = (1/Z) exp(−U(x)/T), with Z = Σ_{x ∈ Ω} exp(−U(x)/T),

where Z is a normalization constant, x is a realization of the segmentation marker field X, and Ω is the set of all x; U(x) is the energy function, U(x) = Σ_{c ∈ C} V_c(x), where V_c(x) is the clique potential; T is a temperature constant. The energy function U(x) is the sum of the potentials over all cliques in the clique set C.
The potential function V_c(x) is replaced by a loss matrix L:

L = [ l(θ_m, a_n) ]_{k×k},

where θ_m indicates that the true ground-object class is the m-th class and a_n that the class obtained by segmentation is the n-th class; l(θ_m, a_n) is the penalty for misclassifying the true class θ_m as the segmented class a_n, with l(θ_m, a_m) = 0, m ∈ {1, 2, …, k}.
The energy function then becomes

U(x_s) = Σ_{t ∈ N_s} L(x_t^(i), x_s),

where x_t^(i) is the marking result of region t from the i-th iteration, playing the role of the true ground-object class θ_m; x_s is the class to which the current region s is to be updated, playing the role of a_n; the loss value L(x_t, x_s) is the labelling penalty between the current region s with class x_s and its adjacent region t.
In the loss matrix L, if the spectral values of two ground-object classes are similar, the cost l(θ_m, a_n) is large; if the spectral values of the two classes differ greatly, the cost l(θ_m, a_n) is small.
The final segmented image in step three is obtained as follows: the final result is obtained by object-by-object iterative updating, where an object represents an over-segmented region. Since X = {X_s | 1 ≤ s ≤ l},

P(X = x) = P{X_1 = x_1, X_2 = x_2, …, X_l = x_l}.

Given a realization X = x of the marker field, the components of the observation field Y = y are conditionally independent of one another, so

P(Y = y | X = x) = ∏_{s=1}^{l} P(Y_s = y_s | X_s = x_s).

Therefore, from the maximum a posteriori probability criterion and the Bayes formula,

x* = argmax_x P(X = x | Y = y) = argmax_x P(Y = y | X = x) P(X = x).

When i = 0, the initial value x_s^(0) is obtained by the k-means algorithm; for i = i + 1,

x_s^(i+1) = argmax_{x_s} P(Y_s = y_s | X_s = x_s) exp( −Σ_{t ∈ N_s} L(x_t^(i), x_s) / T ).

After several iterations, the converged x^(i) is taken as the final segmentation result.
The invention has the beneficial effects that: changing the originally isotropic potential function in the marker-field energy function to anisotropic reduces the segmentation difficulty caused by similar spectral values, i.e. the phenomena of "same object, different spectra" and "different objects, same spectrum" are handled better, appropriate penalties are applied to the various misclassification cases, and segmentation precision is improved. The problem of increased segmentation difficulty due to similar spectral values in remote sensing images is alleviated, and working efficiency is greatly improved compared with manual detection. Quantitative analysis of the experimental images after segmentation shows that the segmentation results obtain the best Kappa and OA values; both qualitative and quantitative analysis therefore verify the effectiveness of the method for remote sensing image segmentation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of an experiment according to the present invention.
FIG. 2 is a flowchart illustrating a process for region adjacency graph according to the present invention.
Fig. 3 is a diagram showing an example of the structure of a region adjacency graph according to the present invention, where (a) is an over-segmentation result graph, (b) is a partial graph of (a), and (c) is a corresponding region adjacency graph of (b).
Fig. 4 is a comparison of results on the first experimental image, in which (a1) is the grayscale version of the original color image, (a2) is the true manual segmentation, (a3) is the segmentation result of the ICM method, (a4) of the MRMRF method, (a5) of the IRGS method, (a6) of the OMRF method, and (a7) of the method of the invention.
Fig. 5 is a comparison of results on the second experimental image, in which (b1) is the grayscale version of the original color image, (b2) is the true manual segmentation, (b3) is the segmentation result of the ICM method, (b4) of the MRMRF method, (b5) of the IRGS method, (b6) of the OMRF method, and (b7) of the method of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
As shown in fig. 1, a Markov field remote sensing image segmentation method based on an anisotropic potential function comprises the following steps:
the method comprises the following steps: the method comprises the steps of carrying out initialization over-segmentation processing on an input image, dividing the image into a plurality of over-segmentation regions, establishing a region adjacency graph RAG by using the over-segmentation regions, and defining a neighborhood system N of the image, an object-level observation characteristic field Y and an object-level segmentation marker field X according to the region adjacency graph RAG.
As shown in fig. 2, the implementation method of the step one is:
1) For the input high-resolution image I(R, G, B), assuming that each band has size a × b, the position set is defined as S = {(i1, j1) | 1 ≤ i1 ≤ a, 1 ≤ j1 ≤ b}.
2) The image I(R, G, B) is over-segmented with the mean-shift algorithm to obtain the corresponding over-segmented regions, which are labelled from left to right and top to bottom. The set of over-segmented regions is R = {R_1, R_2, …, R_l}, where l is the number of over-segmented regions and R_s is the s-th over-segmented region, s ∈ {1, 2, …, l}.

3) The corresponding region adjacency graph RAG is constructed from the over-segmented regions: G = (V, E), where V is the vertex set and E the boundary set. V = {V_1, V_2, …, V_l}, where V_s represents the position of the over-segmented region R_s, s ∈ {1, 2, …, l}; E = {e_st | 1 ≤ s, t ≤ l}, where e_st represents the spatial adjacency of the over-segmented regions R_s and R_t. Specifically, if the two over-segmented regions R_s and R_t are adjacent, its value is 1, otherwise 0, i.e.

e_st = 1 if d_st > 0, and e_st = 0 otherwise,

where d_st is the length of the shared boundary between the over-segmented regions R_s and R_t.

4) The object-level segmentation marker field X = {X_s | 1 ≤ s ≤ l} = {X_1, X_2, …, X_l} is defined on the region adjacency graph G, where each X_s is a random variable representing the class label of the s-th region, X_s ∈ {1, 2, …, k}, with k the number of segmentation classes. The object-level observation feature field Y = {Y_s | 1 ≤ s ≤ l} = {Y_1, Y_2, …, Y_l} is likewise defined on G, where Y_s is the image feature extracted from the over-segmented region R_s, s ∈ {1, 2, …, l}.

5) The corresponding neighborhood system N = {N_s | 1 ≤ s ≤ l} is defined from the region adjacency graph G, where each N_s is the neighborhood of vertex s, i.e. N_s = {R_t | e_st > 0, 1 ≤ t ≤ l}. Specifically, as shown in fig. 3, (a) is the initial segmentation result obtained by the mean-shift algorithm with parameter 100. To illustrate the region adjacency graph, a part of (a) is cut out as (b); (c) is the region adjacency graph corresponding to (b), in which each region is a primitive consisting of two parts: X1–X5 are the labels indicating the class of each region, and Y1–Y5 are the region features extracted for X1–X5. Two regions are connected if they are spatially adjacent; otherwise they are not connected.
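The neighborhood system N_s can be read directly off the adjacency matrix e. A minimal sketch (the helper name and 1-based region indexing are illustrative assumptions):

```python
def neighborhood_system(e):
    """Derive the neighborhood system N = {N_s} from the adjacency
    matrix e of the region adjacency graph: N_s = {R_t | e_st > 0}.
    Regions are indexed 1..l, matching the patent's labelling, so
    row/column 0 of e is unused."""
    l = len(e) - 1
    return {s: [t for t in range(1, l + 1) if e[s][t] > 0]
            for s in range(1, l + 1)}
```

For the three-region example of fig. 3-style graphs, each N_s simply lists the regions sharing a boundary with s.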
Step two: and performing feature modeling of a likelihood function on the observation feature field Y by adopting mixed Gaussian distribution, and performing anisotropic probability modeling on a potential function of an energy function in the joint probability distribution of the segmentation mark field X.
1) Performing feature modeling of a likelihood function on the observation feature field Y:
The main function of the feature field is to extract image features from the original observation data and to use a likelihood function to reflect the feature information of each position as accurately as possible. Two distribution forms are commonly used for the feature-field likelihood function: the single Gaussian distribution and the Gaussian mixture distribution.

In consideration of computational efficiency, the method models the observation feature field Y of the image with a mixture of Gaussian distributions. It is also assumed that homogeneous regions in the likelihood function share the same distribution, i.e. regions of the same class in the image obey the same distribution. In this case,

P(Y_s = y_s | X_s = h) = (2π)^(−p/2) |Σ_h|^(−1/2) exp( −(1/2)(y_s − μ_h)^T Σ_h^(−1) (y_s − μ_h) ).

By conditional independence (each component of the observation feature field Y = y is independent of the others given a realization X = x of the segmentation marker field),

P(Y = y | X = x) = ∏_{s=1}^{l} P(Y_s = y_s | X_s = x_s),

where P(Y = y | X = x) is the probability of the observed data Y = y given the marker field X = x; x_s is the value of the class label X_s and y_s the value of the image feature Y_s; μ_h is the feature mean, Σ_h the feature covariance matrix, and p the dimension, with p = 3 for the image data here.

In the above formula, the feature mean μ_h and the feature covariance matrix Σ_h must be estimated; maximum likelihood is used. The estimates for each class are

μ_h = (1/n_h) Σ_{s: x_s = h} y_s,  Σ_h = (1/n_h) Σ_{s: x_s = h} (y_s − μ_h)(y_s − μ_h)^T,

where n_h is the number of regions currently labelled h.
2) anisotropic probability modeling is performed on the potential function of the energy function in the segmentation marker field X combined probability distribution.
In the MRF model, the marker field is assumed to have the Markov property. By the Hammersley–Clifford theorem (given a lattice site set S with neighborhood system N, if the random field X on S is an MRF, then X is also a Gibbs random field), the joint probability P(x) of the marker field obeys the Gibbs distribution:

P(X = x) = (1/Z) exp(−U(x)/T), with Z = Σ_{x ∈ Ω} exp(−U(x)/T),

where Z is a normalization constant, x is a realization of the segmentation marker field X, Ω is the set of all x, and U(x) is the energy function: U(x) = Σ_{c ∈ C} V_c(x), where V_c(x) is the clique potential. T is a temperature constant, generally set to 1 unless otherwise required; it is set to 1 here for ease of calculation. The energy function U(x) is the sum of the potentials over all cliques in the clique set C. For an isotropic Gibbs random field, if only second-order neighborhood systems are considered, U(x) can be calculated as

U(x) = Σ_c V_2(x_s, x_t),

where V_2(x_s, x_t) is the value of the potential function for regions s and t, namely

V_2(x_s, x_t) = −β if x_s = x_t, and β otherwise.
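The classical isotropic pair potential can be written in one line; the point of the patent is precisely that this function ignores *which* classes disagree. A sketch of the standard form (the default β value is illustrative):

```python
def v2_isotropic(x_s, x_t, beta=1.0):
    """Classical isotropic pairwise potential V2: the penalty depends
    only on whether the two neighbouring labels agree, not on which
    ground-object classes are involved (beta is the usual smoothing
    parameter of MRF segmentation models)."""
    return -beta if x_s == x_t else beta
```

Every pair of unequal labels incurs the same cost β, which is exactly the limitation the anisotropic loss matrix below removes.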
Now consider making the potential function V_c(x) anisotropic, i.e. segmentation errors between different ground-object classes bear different penalties. For example, if a house and farmland are both labelled as town, this generally contradicts common geographic knowledge and is given a larger penalty; if a house and grassland are both labelled as town, it will most likely be given a smaller penalty. The loss matrix L therefore replaces the original potential function V_c(x) and is defined as

L = [ l(θ_m, a_n) ]_{k×k},

where θ_m indicates that the true ground-object class is the m-th class and a_n that the class obtained by segmentation is the n-th class; l(θ_m, a_n) is the penalty for misclassifying the true class θ_m as the segmented class a_n, with l(θ_m, a_m) = 0, m ∈ {1, 2, …, k}. Compared with the clique potential V_c(x), the loss matrix L can describe the anisotropy between different classes and thus characterise the spatial interaction between classes more accurately, improving segmentation precision.
At this time, the energy function is

U(x_s) = Σ_{t ∈ N_s} L(x_t^(i), x_s),

where x_t^(i) is the marking result of region t from the last (i-th) iteration, playing the role of the true ground-object class θ_m; x_s is the ground-object class to which region s is to be updated, playing the role of a_n; the loss value L(x_t, x_s) is the labelling penalty between the current region s with class x_s and its adjacent region t.

In general, if the spectral values of two classes are similar, a large penalty is given, i.e. l(θ_m, a_n) is large; if the spectral values of the two classes differ more, a smaller penalty is given, i.e. l(θ_m, a_n) is small. The specific loss matrix between different classes is generally determined experimentally.
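The anisotropic local energy can be sketched as follows. The concrete loss values are hypothetical, hand-chosen for illustration (the patent says they are determined experimentally): classes 0 and 1 are assumed spectrally similar and get a large mutual penalty, pairs involving class 2 a small one.

```python
import numpy as np

# Illustrative 3-class loss matrix (classes indexed 0..2); L[m, n] is the
# cost of labelling a region n when a neighbour carries label m.
L = np.array([[0.0, 2.0, 0.5],
              [2.0, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

def local_energy(s, x_s, x, neighbors, L):
    """Anisotropic local energy of assigning class x_s to region s:
    the summed loss L[x_t, x_s] over the neighbours t of s, replacing
    the isotropic clique potential with the loss matrix."""
    return sum(L[x[t], x_s] for t in neighbors[s])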
Step three: and obtaining posterior distribution of the segmentation marker field X under the condition of potential function anisotropy according to Bayes criterion, updating iterative segmentation by using maximum posterior probability criterion (MAP), and obtaining a final segmentation image.
In the pixel-level MRF model, the optimal segmentation result is obtained by pixel-by-pixel updating. In the object-level MRF model, the final result is likewise obtained by iterative updating, but here the updates proceed object by object, where one object represents one over-segmented region. The specific implementation is as follows:
Since X = {X_s | 1 ≤ s ≤ l},

P(X = x) = P{X_1 = x_1, X_2 = x_2, …, X_l = x_l}.

From the conditional-independence assumption (each component of the observation field Y = y is independent of the others given a realization X = x of the marker field),

P(Y = y | X = x) = ∏_{s=1}^{l} P(Y_s = y_s | X_s = x_s).

Therefore, from the maximum a posteriori probability (MAP) criterion and the Bayes formula,

x* = argmax_x P(X = x | Y = y) = argmax_x P(Y = y | X = x) P(X = x).

When i = 0, the initial value x_s^(0) is obtained by the k-means algorithm; for i = i + 1,

x_s^(i+1) = argmax_{x_s} P(Y_s = y_s | X_s = x_s) exp( −Σ_{t ∈ N_s} L(x_t^(i), x_s) / T ).

After several iterations, the converged x^(i) is taken as the final segmentation result.
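One object-by-object sweep of this update can be sketched as below. This is an editorial sketch under stated assumptions, not the patent's code: Gaussian class likelihoods, a dense loss matrix `L`, and the data structures used in the examples above; maximising the posterior factor is implemented equivalently as minimising its negative logarithm.

```python
import numpy as np

def gauss_logpdf(y, mu, sigma):
    """Log-density of the p-dimensional Gaussian N(mu, sigma)."""
    p = len(mu)
    diff = y - mu
    return -0.5 * (p * np.log(2 * np.pi)
                   + np.log(np.linalg.det(sigma))
                   + diff @ np.linalg.inv(sigma) @ diff)

def icm_sweep(x, features, neighbors, mu, sigma, L, T=1.0):
    """One object-by-object sweep of the iterative MAP update: each
    region s takes the label minimising
        -log P(y_s | x_s) + (1/T) * sum_{t in N_s} L[x_t, x_s],
    i.e. the negative log of the posterior factor in the update rule.
    mu/sigma are the per-class maximum-likelihood estimates."""
    k = L.shape[0]
    x = x.copy()
    for s in range(len(x)):
        costs = [-gauss_logpdf(features[s], mu[h], sigma[h])
                 + sum(L[x[t], h] for t in neighbors[s]) / T
                 for h in range(k)]
        x[s] = int(np.argmin(costs))
    return x
```

Updates are applied in place during the sweep, so later regions already see their neighbours' refreshed labels, as in classical ICM.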
Experimental verification:
in view of the current state of research, quantitative indicators are the main criteria for evaluating the quality of segmentation. Among the quantitative evaluation indexes, Kappa coefficient, classification accuracy and classification error rate, boundary detection rate, and the like are more commonly used. The invention mainly adopts Kappa coefficient and Overall Accuracy (OA), which can effectively evaluate the Overall segmentation performance.
To compute the Kappa coefficient and OA, an error matrix (also called confusion matrix) is first computed; its size is M × M, where M is the number of classes. The entry N_ij of the error matrix is the number of pixels whose actual class is i but which are classified as j, and N is the total number of image pixels;

N_{+i} = Σ_{m=1}^{M} N_mi

is the number of pixels assigned to the i-th class in the classification result, and

N_{j+} = Σ_{m=1}^{M} N_jm

is the number of pixels actually belonging to the j-th class in the image.
The Kappa coefficient is calculated by the following formula:

K = ( N Σ_{i=1}^{M} N_ii − Σ_{i=1}^{M} N_{i+} N_{+i} ) / ( N² − Σ_{i=1}^{M} N_{i+} N_{+i} ).
the value of K is between 0 and 1. The larger the value of K, the better the segmentation.
The Overall Accuracy (OA) is an index for evaluating overall classification performance, calculated by the following formula:

OA = (1/N) Σ_{i=1}^{M} N_ii.
The value of OA is also between 0 and 1 and is generally slightly higher than the Kappa value K; a larger OA value indicates higher segmentation precision and a better segmentation effect.
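Both evaluation indices follow directly from the error matrix. A minimal sketch (the function name is illustrative; rows index the actual class, columns the predicted class):

```python
import numpy as np

def kappa_and_oa(conf):
    """Kappa coefficient and overall accuracy (OA) computed from an
    M x M error (confusion) matrix, where conf[i, j] counts pixels with
    actual class i classified as j."""
    conf = np.asarray(conf, dtype=float)
    N = conf.sum()                  # total number of pixels
    diag = np.trace(conf)           # correctly classified pixels
    rows = conf.sum(axis=1)         # N_{i+}: pixels actually in class i
    cols = conf.sum(axis=0)         # N_{+i}: pixels assigned to class i
    chance = (rows * cols).sum()    # sum_i N_{i+} * N_{+i}
    kappa = (N * diag - chance) / (N * N - chance)
    oa = diag / N
    return kappa, oa
```

On a perfect (diagonal) confusion matrix both indices are 1; Kappa discounts agreement attributable to chance, which is why it is generally slightly below OA.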
In order to verify the effect of the present invention, it is compared with the following four segmentation methods. The ICM (Iterated Conditional Modes) method is a pixel-level greedy algorithm, a deterministic algorithm based on local conditional probability, which completes image segmentation by updating the image labels point by point. The MRMRF (Multi-Resolution Markov Random Field) method is an algorithm combined with wavelets; it describes ground objects at different scales through the wavelet pyramid structure, expands the spatial description range, and extracts features for analysis more effectively. Specifically, the MRMRF method usually adopts a multi-resolution representation of the image, modelling it as several single-scale MRFs at different resolutions: lower-resolution images describe the global features, higher-resolution images describe the detail features, and messages are propagated between layers from top to bottom through inter-layer causal relationships. The IRGS (Iterative Region Growing using Semantics) method is an object-level MRF algorithm that introduces edge-strength information when constructing the spatial context model and adopts an iterative region-growing technique with over-segmented regions as primitives, thereby obtaining more accurate segmentation results. The OMRF (Object-based Markov Random Field) method is another classical object-level Markov field model; unlike pixel-level models, it treats a region as a primitive in order to consider more spatial neighborhood relationships and solves the model with an iterative scheme.
The four methods share a common characteristic: the potential function of the marker field is isotropic in all of them. Since the invention changes the potential function of the energy function in the marker field to anisotropic, comparison with these four methods demonstrates its advantages. To verify the effect of the invention, two aerial images of the Thai area are used as experimental data. The first experimental image is a remote sensing image of size 1024 × 1024 × 3 containing four categories, town, river, bare land and green land, as shown in fig. 4; the second is a remote sensing image of size 512 × 512 × 3 containing three categories, town, river and green land, as shown in fig. 5.
For the first remote sensing image experiment, the color image is shown in fig. 4(a1) and the true manual segmentation in fig. 4(a2). To ensure the fairness of the experiment, the parameters of each method were selected according to the relevant references so as to optimize its segmentation result. For the ICM method, β = 30; the grayscale image of its segmentation result is shown in fig. 4(a3). The grayscale segmentation results of the MRMRF, IRGS, and OMRF methods, each with parameters set per its reference, are shown in fig. 4(a4), fig. 4(a5), and fig. 4(a6), respectively. For the method of the invention, the grayscale segmentation result, shown in fig. 4(a7), was obtained with the loss matrix:
Figure BDA0002028843990000101
For the second remote sensing image experiment, the color image is shown in fig. 5(b1) and the true manual segmentation in fig. 5(b2). To ensure the fairness of the experiment, parameters were again selected according to the relevant references so as to optimize each method's segmentation result. For the ICM method, β = 10; the grayscale image of its segmentation result is shown in fig. 5(b3). The grayscale result of the MRMRF method, with parameters set per its reference, is shown in fig. 5(b4); that of the IRGS method (β = 10) in fig. 5(b5); and that of the OMRF method in fig. 5(b6). For the method of the invention, the segmentation used the loss matrix:
Figure BDA0002028843990000102
The obtained grayscale image of the segmentation result is shown in fig. 5(b7), with MinRA = 350. The Kappa coefficients of the segmentation results of experimental images one and two are given in Table 1, and the overall accuracy (OA) values are given in Table 2.
TABLE 1 Kappa coefficients
Figure BDA0002028843990000103
TABLE 2 OA coefficients
Figure BDA0002028843990000104
Aerial images contain rich texture information: spectral values within the same category may differ considerably, while sub-objects of different categories may have similar spectral values. For example, in the town portion, houses and roads have different spectral values, yet trees in towns and trees in forests have similar ones. For these reasons, the original segmentation methods produce many small misclassified fragments. The invention changes the originally isotropic potential function in the energy function of the marker field into an anisotropic one, which reduces the segmentation difficulty caused by similar spectral values, assigns an appropriate penalty to each kind of misclassification, and improves segmentation accuracy. For example, in the upper-right part of fig. 5(b7), a small area inside the city is accurately classified as town, rather than being wrongly classified as greenbelt as in fig. 5(b6). Furthermore, quantitative analysis was performed on the segmented experimental images: as can be seen from Tables 1 and 2, the segmentation results of the present invention achieve the best Kappa coefficient and OA values. From fig. 4, fig. 5, and Tables 1 and 2, the segmentation effect of the present invention is the best; thus both the qualitative and the quantitative analysis verify the effectiveness of the method for remote sensing image segmentation.
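The two quantitative scores reported in Tables 1 and 2 can both be computed from a class confusion matrix; a minimal sketch (illustrative code, not the patent's evaluation code):

```python
import numpy as np

def oa_and_kappa(conf):
    """Overall accuracy (OA) and Cohen's Kappa from a confusion matrix,
    where conf[i, j] = number of pixels of true class i labeled class j."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    po = np.trace(conf) / n                        # observed agreement = OA
    pe = (conf.sum(0) * conf.sum(1)).sum() / n**2  # chance agreement
    return po, (po - pe) / (1 - pe)                # (OA, Kappa)
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside OA.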
The above description covers only preferred embodiments of the present invention and is not intended to limit it; any modification, equivalent substitution, improvement, and the like made within the spirit and principles of the present invention shall be included in its scope of protection.

Claims (7)

1. A Markov field remote sensing image segmentation method based on an anisotropic potential function, characterized in that the originally isotropic potential function of the energy function in the marker field is changed to an anisotropic one, the method comprising the following steps:
step one: performing initialization over-segmentation processing on an input image, dividing the image into a plurality of over-segmented regions, establishing a region adjacency graph from the over-segmented regions, and defining an image neighborhood system, an object-level observation feature field, and an object-level segmentation marker field according to the region adjacency graph;
step two: performing feature modeling of a likelihood function on an observation feature field by adopting mixed Gaussian distribution, and performing anisotropic probability modeling on a potential function of an energy function in joint probability distribution of the segmentation marker field;
step three: and obtaining posterior distribution of the segmentation marker field under the condition of potential function anisotropy according to Bayes criterion, and updating iterative segmentation by using maximum posterior probability criterion to obtain a final segmentation image.
2. The Markov field remote sensing image segmentation method based on an anisotropic potential function according to claim 1, characterized in that the neighborhood system, the observation feature field, and the segmentation marker field in step one are realized as follows:
1) the size of each band of the input image I(R, G, B) is a × b, and the position set is: S = {(i1, j1) | 1 ≤ i1 ≤ a, 1 ≤ j1 ≤ b};
2) performing over-segmentation processing on the image I(R, G, B) using the mean shift algorithm to obtain the corresponding over-segmented regions, and labeling each over-segmented region from left to right and from top to bottom; the set of over-segmented regions is R = {R1, R2, ..., Rl}, where l is the number of over-segmented regions and Rs is the s-th over-segmented region, s ∈ {1, 2, ..., l};
3) constructing the corresponding region adjacency graph (RAG) based on the over-segmented regions: G = (V, E); wherein the vertex set is V = {V1, V2, ..., Vl}, with Vs representing the position of the over-segmented region Rs, and the edge set is E = {est | 1 ≤ s, t ≤ l}, with est representing the spatial adjacency of the over-segmented regions Rs and Rt;
4) defining an object-level segmentation marker field X = {Xs | 1 ≤ s ≤ l} = {X1, X2, ..., Xl} on the region adjacency graph G, where the random variable Xs denotes the category label of the s-th region and Xs ∈ {1, 2, ..., k}, k being the number of segmentation categories; and defining an object-level observation feature field Y = {Ys | 1 ≤ s ≤ l} = {Y1, Y2, ..., Yl} on G, where Ys is the image feature extracted from the over-segmented region Rs;
5) defining the corresponding neighborhood system N = {Ns | 1 ≤ s ≤ l} according to the region adjacency graph G, where Ns denotes the neighborhood system of vertex s, i.e. Ns = {Rt | est > 0, 1 ≤ t ≤ l}.
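Steps 2) to 5) above, from an over-segmentation label map to the region adjacency graph and neighborhood system, can be sketched as follows (illustrative code, not part of the claims; it assumes the over-segmentation is given as an integer label map, and checks only right and down neighbors, which suffices for 4-connectivity):

```python
import numpy as np

def region_adjacency(seg):
    """Build the neighborhood sets N_s of a region adjacency graph from an
    over-segmentation label map seg of shape (H, W).
    Two regions are adjacent (e_st = 1) if they share a boundary pixel."""
    nbrs = {}
    H, W = seg.shape
    for i in range(H):
        for j in range(W):
            s = seg[i, j]
            for ni, nj in ((i + 1, j), (i, j + 1)):  # down/right suffice
                if ni < H and nj < W:
                    t = seg[ni, nj]
                    if s != t:
                        nbrs.setdefault(s, set()).add(t)
                        nbrs.setdefault(t, set()).add(s)
    return nbrs
```

The returned dictionary plays the role of the neighborhood system N: `nbrs[s]` is the set of region labels adjacent to region s.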
3. The Markov field remote sensing image segmentation method based on an anisotropic potential function according to claim 2, characterized in that the spatial adjacency relation est in the edge set E is realized as follows: est takes the value 1 if the two over-segmented regions Rs and Rt are adjacent, and 0 otherwise, namely:
est = 1 if dst > 0, and est = 0 otherwise,
where dst measures the common boundary between the over-segmented regions Rs and Rt.
4. The Markov field remote sensing image segmentation method based on an anisotropic potential function according to claim 2, characterized in that the feature modeling in step two is realized as follows: the observation feature field Y of the image is modeled with a Gaussian mixture distribution, assuming that homogeneous regions share the same distribution in the likelihood function, i.e. regions of the same class in the image obey the same distribution; at this time,
P(Ys = ys | Xs = h) = (2π)^(−p/2) |Σh|^(−1/2) exp(−(1/2)(ys − μh)^T Σh^(−1) (ys − μh)),
given a realization X = x of the marker field, the components of the observation feature field Y = y are mutually independent, and therefore:
P(Y = y | X = x) = ∏ (s = 1 to l) P(Ys = ys | Xs = xs),
where P(Y = y | X = x) is the likelihood of the observed data Y = y given the segmentation marker field X = x; xs is the value of the class label Xs, and ys is the value of the image feature Ys; μh denotes the feature mean, Σh the feature covariance matrix, and p the dimension of the image data;
the feature mean μh and the feature covariance matrix Σh are estimated by maximum likelihood:
μh = (1/nh) Σ (s: xs = h) ys,  Σh = (1/nh) Σ (s: xs = h) (ys − μh)(ys − μh)^T, where nh is the number of regions with label h.
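The feature modeling of this claim, a per-class Gaussian likelihood whose mean μh and covariance Σh are estimated by maximum likelihood, can be sketched as follows (illustrative code, not part of the claims; it assumes one feature vector per over-segmented region, stacked row-wise):

```python
import numpy as np

def fit_class_gaussians(feats, labels, k):
    """Maximum-likelihood mean and covariance per class h = 0..k-1.
    feats: (l, p) feature vectors, one per region; labels: (l,) classes."""
    params = []
    for h in range(k):
        yh = feats[labels == h]
        mu = yh.mean(axis=0)
        sigma = np.cov(yh.T, bias=True)       # bias=True -> ML (1/n) estimate
        params.append((mu, np.atleast_2d(sigma)))
    return params

def log_likelihood(y, mu, sigma):
    """log N(y; mu, sigma) for a p-dimensional feature vector y."""
    p = len(mu)
    d = y - mu
    _, logdet = np.linalg.slogdet(sigma)
    return -0.5 * (p * np.log(2 * np.pi) + logdet
                   + d @ np.linalg.solve(sigma, d))
```

`bias=True` gives the 1/n maximum-likelihood covariance rather than the unbiased 1/(n−1) estimate, matching the formulas above.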
5. The Markov field remote sensing image segmentation method based on an anisotropic potential function according to claim 4, characterized in that the probability modeling in step two is realized as follows:
in the MRF model, the marker field is assumed to have the Markov property; by the Hammersley-Clifford theorem, the joint probability P(x) of the segmentation marker field then follows a Gibbs distribution, namely:
P(x) = (1/Z) exp(−U(x)/T),
wherein,
Z = Σ (x ∈ Ω) exp(−U(x)/T)
is the normalization constant, x is a realization of the segmentation marker field X, and Ω is the set of all x; U(x) is the energy function, U(x) = Σ (c ∈ C) Vc(x), where Vc(x) is the clique potential and T is a temperature constant; the energy function U(x) is the sum of the potentials of all cliques in the clique set C;
the potential function Vc(x) is replaced with a loss matrix L, where the loss matrix L is:
L = [ l(θm, an) ], m, n ∈ {1, 2, ..., k},
where θm indicates that the true ground object class is the m-th class, and an indicates that the class obtained by the segmentation is the n-th class; l(θm, an) is the loss incurred when the true class θm is misclassified as the segmented class an, and l(θm, am) = 0, m ∈ {1, 2, ..., k};
the energy function then becomes U(xs) = Σ (Rt ∈ Ns) L(xt(i), xs),
where xt(i) is the label of the neighboring region t at the i-th iteration, playing the role of the true ground object class θm; xs is the class of the current region s to be updated, playing the role of an; the loss value L(xt(i), xs) is the labeling penalty between the current region s with class xs and the adjacent region t.
6. The Markov field remote sensing image segmentation method based on an anisotropic potential function according to claim 5, characterized in that, in the loss matrix L, if the spectral values of two ground object classes are similar, the loss l(θm, an) is large; if the spectral values of the two classes differ greatly, the loss l(θm, an) is small.
7. The Markov field remote sensing image segmentation method based on an anisotropic potential function according to claim 5, characterized in that the final segmented image in step three is obtained as follows:
the final result is obtained by object-by-object iterative updating, where an object denotes an over-segmented region; since X = {Xs | 1 ≤ s ≤ l}:
P(X=x)=P{X1=x1,X2=x2,...,Xl=xl};
given a realization X = x of the marker field, the components of the observation field Y = y are conditionally independent of each other, so:
P(Y = y | X = x) = ∏ (s = 1 to l) P(Ys = ys | Xs = xs),
P(X = x | Y = y) ∝ P(Y = y | X = x) P(X = x);
therefore, the maximum posterior probability criterion and the Bayes formula are used to obtain:
xs = arg max (over xs) P(Xs = xs | ys, xNs) = arg max (over xs) P(Ys = ys | Xs = xs) P(Xs = xs | xNs);
when i = 0, the initial value xs(0) is obtained by the k-means algorithm; then, with i = i + 1:
xs(i+1) = arg max (over xs) P(Ys = ys | Xs = xs) P(Xs = xs | xNs(i)),
and after several iterations, the converged labeling x(i) is taken as the final segmentation result.
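The iterative update of this claim can be sketched end-to-end as follows (illustrative code, not part of the claims; the k-means initialization is replaced by an all-zeros labeling as a stand-in, and each region is relabeled to maximize its Gaussian log-likelihood minus the anisotropic neighborhood energy as a MAP proxy):

```python
import numpy as np

def segment(feats, nbrs, L, params, n_iter=10):
    """Object-level iterative MAP update sketch: repeatedly relabel each
    region to the class maximizing log-likelihood minus the anisotropic
    neighborhood energy, until no label changes.
    feats: (l, p) region features; nbrs: dict of neighbor lists;
    L: (k, k) loss matrix; params: per-class (mu, sigma) pairs."""
    l, k = len(feats), len(params)
    labels = np.zeros(l, dtype=int)          # stand-in for the k-means init
    for _ in range(n_iter):
        changed = False
        for s in range(l):
            scores = []
            for h in range(k):
                mu, sigma = params[h]
                d = feats[s] - mu
                _, logdet = np.linalg.slogdet(sigma)
                loglik = -0.5 * (logdet + d @ np.linalg.solve(sigma, d))
                energy = sum(L[labels[t], h] for t in nbrs.get(s, ()))
                scores.append(loglik - energy)
            new = int(np.argmax(scores))
            changed = changed or (new != labels[s])
            labels[s] = new
        if not changed:                      # converged
            break
    return labels
```

With the loss matrix set to zero this degenerates to per-region maximum-likelihood classification; the anisotropic term is what couples neighboring regions.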
CN201910302954.0A 2019-04-16 2019-04-16 Markov field remote sensing image segmentation method based on anisotropic potential function Active CN111210433B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910302954.0A CN111210433B (en) 2019-04-16 2019-04-16 Markov field remote sensing image segmentation method based on anisotropic potential function


Publications (2)

Publication Number Publication Date
CN111210433A true CN111210433A (en) 2020-05-29
CN111210433B CN111210433B (en) 2023-03-03

Family

ID=70787963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910302954.0A Active CN111210433B (en) 2019-04-16 2019-04-16 Markov field remote sensing image segmentation method based on anisotropic potential function

Country Status (1)

Country Link
CN (1) CN111210433B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112765095A (en) * 2020-12-24 2021-05-07 山东省国土测绘院 Method and system for filing image data of stereo mapping satellite
CN115239746A (en) * 2022-09-23 2022-10-25 成都国星宇航科技股份有限公司 Object-oriented remote sensing image segmentation method, device, equipment and medium

Citations (2)

Publication number Priority date Publication date Assignee Title
US20080101678A1 (en) * 2006-10-25 2008-05-01 Agfa Healthcare Nv Method for Segmenting Digital Medical Image
CN108090913A (en) * 2017-12-12 2018-05-29 河南大学 A kind of image, semantic dividing method based on object level Gauss-Markov random fields

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
US20080101678A1 (en) * 2006-10-25 2008-05-01 Agfa Healthcare Nv Method for Segmenting Digital Medical Image
CN108090913A (en) * 2017-12-12 2018-05-29 河南大学 A kind of image, semantic dividing method based on object level Gauss-Markov random fields

Non-Patent Citations (2)

Title
WAN Ling et al.: "Research Progress in Synthetic Aperture Radar Image Segmentation", Remote Sensing Technology and Application *
ZHENG Chen et al.: "Remote Sensing Image Segmentation Based on Multi-scale Region Granularity Analysis", Spectroscopy and Spectral Analysis *

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN112765095A (en) * 2020-12-24 2021-05-07 山东省国土测绘院 Method and system for filing image data of stereo mapping satellite
CN115239746A (en) * 2022-09-23 2022-10-25 成都国星宇航科技股份有限公司 Object-oriented remote sensing image segmentation method, device, equipment and medium
CN115239746B (en) * 2022-09-23 2022-12-06 成都国星宇航科技股份有限公司 Object-oriented remote sensing image segmentation method, device, equipment and medium

Also Published As

Publication number Publication date
CN111210433B (en) 2023-03-03

Similar Documents

Publication Publication Date Title
CN107092870B (en) A kind of high resolution image Semantic features extraction method
CN109871875B (en) Building change detection method based on deep learning
CN102013017B (en) Method for roughly sorting high-resolution remote sensing image scene
CN110599537A (en) Mask R-CNN-based unmanned aerial vehicle image building area calculation method and system
CN110458172A (en) A kind of Weakly supervised image, semantic dividing method based on region contrast detection
CN107403434B (en) SAR image semantic segmentation method based on two-phase analyzing method
CN106846322B (en) The SAR image segmentation method learnt based on curve wave filter and convolutional coding structure
CN105139395A (en) SAR image segmentation method based on wavelet pooling convolutional neural networks
CN113223042B (en) Intelligent acquisition method and equipment for remote sensing image deep learning sample
CN106651865B (en) A kind of optimum segmentation scale automatic selecting method of new high-resolution remote sensing image
CN106611422B (en) Stochastic gradient Bayes's SAR image segmentation method based on sketch structure
CN103985112B (en) Image segmentation method based on improved multi-objective particle swarm optimization and clustering
CN102542302A (en) Automatic complicated target identification method based on hierarchical object semantic graph
CN108427919B (en) Unsupervised oil tank target detection method based on shape-guided saliency model
CN104657980A (en) Improved multi-channel image partitioning algorithm based on Meanshift
CN103593855A (en) Clustered image splitting method based on particle swarm optimization and spatial distance measurement
CN102496142B (en) SAR (synthetic aperture radar) image segmentation method based on fuzzy triple markov fields
CN106651884A (en) Sketch structure-based mean field variational Bayes synthetic aperture radar (SAR) image segmentation method
CN103854290A (en) Extended target tracking method combining skeleton characteristic points and distribution field descriptors
CN111210433B (en) Markov field remote sensing image segmentation method based on anisotropic potential function
CN108090913B (en) Image semantic segmentation method based on object-level Gauss-Markov random field
Li et al. An aerial image segmentation approach based on enhanced multi-scale convolutional neural network
CN106600611B (en) SAR image segmentation method based on sparse triple Markov field
CN105787505A (en) Infrared image clustering segmentation method combining sparse coding and spatial constraints
CN107292268A (en) The SAR image semantic segmentation method of quick ridge ripple deconvolution Structure learning model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240627

Address after: No. 7, Xujiatai, Xin'andu Office, Dongxihu District, Wuhan City, Hubei Province, 430040

Patentee after: Wuhan Luojia Totem Technology Co.,Ltd.

Country or region after: China

Address before: No.85, Minglun street, Shunhe District, Kaifeng City, Henan Province

Patentee before: Henan University

Country or region before: China
