CN111210433B - Markov field remote sensing image segmentation method based on anisotropic potential function

Markov field remote sensing image segmentation method based on anisotropic potential function

Info

Publication number
CN111210433B
Authority
CN
China
Prior art keywords
segmentation
field
image
over
region
Prior art date
Legal status
Active
Application number
CN201910302954.0A
Other languages
Chinese (zh)
Other versions
CN111210433A (en)
Inventor
郑晨
潘欣欣
陈运成
Current Assignee
Henan University
Original Assignee
Henan University
Priority date
Filing date
Publication date
Application filed by Henan University
Priority to CN201910302954.0A
Publication of CN111210433A
Application granted
Publication of CN111210433B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a Markov field remote sensing image segmentation method based on an anisotropic potential function, which comprises the following steps: performing an initial over-segmentation of the pixel-level image to obtain an object-level image and a region adjacency graph, and defining a neighborhood system, an observation feature field and a segmentation marker field on the region adjacency graph; modeling the likelihood function of the observation feature field, and performing anisotropic probability modeling of the potential function of the energy function in the joint probability distribution of the segmentation marker field; obtaining the posterior distribution of the segmentation marker field under the anisotropic potential function according to the Bayes criterion, and iteratively updating the segmentation with the maximum a posteriori probability criterion to obtain the final segmentation result. Both qualitative and quantitative experimental analyses verify the effectiveness of the method for remote sensing image segmentation. The method alleviates the segmentation difficulty caused by similar spectral values in remote sensing images and, compared with manual detection, greatly improves working efficiency.

Description

Markov field remote sensing image segmentation method based on anisotropic potential function
Technical Field
The invention relates to the technical field of image segmentation, in particular to the segmentation of high-resolution remote sensing images, and specifically to a Markov field remote sensing image segmentation method based on an anisotropic potential function.
Background
Image segmentation is a technique and process for dividing an image into specific regions with unique properties and extracting an object of interest. It is a key step from image processing to image analysis.
In recent years, with the continuous development of remote sensing technology, remote sensing images acquired by different sensors have been widely applied in fields such as land-cover detection, forest-cover detection, wetland resource detection, urban and rural planning, and military reconnaissance. To make full use of the information contained in remote sensing image data, researchers have carried out a great deal of image analysis research, in which image segmentation has become one of the research hotspots. Many image segmentation methods exist, such as thresholding and clustering; these methods usually assume that pixels of the same ground-object class have similar features while different classes differ clearly. However, as the spatial resolution of remote sensing images increases, ground-object details and structural information become more prominent, and the phenomena of "same object, different spectra" and "different objects, same spectrum" are aggravated, so such methods have difficulty producing high-precision segmentation results. To describe the characteristics of high-spatial-resolution remote sensing images, methods such as deep learning and MRF have been proposed in succession; on the basis of the original segmentation theory they take more prior information or conditional constraints into account and thereby improve segmentation accuracy. Among them, MRF is widely used because its Markov property describes the spatial relationship between image primitives well. Many MRF-based segmentation methods exist: the earliest is the classical pixel-level ICM algorithm, which later developed into multi-scale MRF models; both are pixel-based methods with a limited spatial range. The later object-level OMRF method considers more region information and spatial neighborhood relationships. Although these methods improve the segmentation result, their potential functions are all isotropic, i.e. they only judge whether two adjacent primitives belong to the same class of ground object. The relationships between ground objects are not limited to this, however: different class pairs have different correlations.
Disclosure of Invention
To address the technical problems that the potential function of the marker-field energy function in existing remote sensing image segmentation methods is isotropic, can only judge whether two adjacent primitives belong to the same ground-object class, and produces many small, fragmented misclassifications, the invention provides a Markov field remote sensing image segmentation method based on an anisotropic potential function.
To achieve this purpose, the technical scheme of the invention is realized as follows: a Markov field remote sensing image segmentation method based on an anisotropic potential function changes the originally isotropic potential function of the energy function in the marker field into an anisotropic one, and comprises the following steps:
step one: performing an initial over-segmentation of the input image, dividing the image into a plurality of over-segmented regions, establishing a region adjacency graph from the over-segmented regions, and defining the neighborhood system of the image, the object-level observation feature field and the object-level segmentation marker field according to the region adjacency graph;
step two: performing feature modeling of a likelihood function on an observation feature field by adopting mixed Gaussian distribution, and performing anisotropic probability modeling on a potential function of an energy function in joint probability distribution of the segmentation marker field;
step three: obtaining the posterior distribution of the segmentation marker field under the anisotropic potential function according to the Bayes criterion, and iteratively updating the segmentation with the maximum a posteriori probability criterion to obtain the final segmentation image.
In step one, the neighborhood system, the observation feature field and the segmentation marker field are realized through the following steps:
1) The size of each band of the input image I(R, G, B) is a × b, and the position set is S = {(i1, j1) | 1 ≤ i1 ≤ a, 1 ≤ j1 ≤ b};
2) Perform over-segmentation of the image I(R, G, B) with the mean-shift algorithm to obtain the corresponding over-segmented regions, and label each over-segmented region from left to right and from top to bottom. The set of over-segmented regions is R = {R_1, R_2, …, R_l}, where l is the number of over-segmented regions and R_s is the s-th over-segmented region, s ∈ {1, 2, …, l};
3) Construct the corresponding region adjacency graph RAG from the over-segmented regions: G = (V, E), where the vertex set is V = {V_1, V_2, …, V_l}, with V_s representing the position of the over-segmented region R_s, and the edge set is E = {e_st | 1 ≤ s, t ≤ l}, with e_st representing the spatial adjacency of the over-segmented regions R_s and R_t;
4) Define the object-level segmentation marker field X = {X_s | 1 ≤ s ≤ l} = {X_1, X_2, …, X_l} on the region adjacency graph G, where the random variable X_s denotes the class label of the s-th region and X_s ∈ {1, 2, …, k}, with k the number of segmentation classes; define the object-level observation feature field Y = {Y_s | 1 ≤ s ≤ l} = {Y_1, Y_2, …, Y_l} on G, where Y_s is the image feature extracted from the over-segmented region R_s;
5) Define the corresponding neighborhood system N = {N_s | 1 ≤ s ≤ l} from the region adjacency graph G, where N_s is the neighborhood of vertex s, i.e. N_s = {R_t | e_st > 0, 1 ≤ t ≤ l}.
The spatial adjacency e_st in the edge set E is defined as follows: its value is 1 if the two over-segmented regions R_s and R_t are adjacent, and 0 otherwise, i.e.

e_st = 1 if d_st > 0, and e_st = 0 otherwise,

where d_st is the length of the common boundary between the over-segmented regions R_s and R_t.
The feature modeling in step two is realized as follows: the observation feature field Y of the image is modeled with a mixed Gaussian distribution, assuming that homogeneous regions share the same distribution in the likelihood function, i.e. regions of the same class in the image obey the same distribution. Then

P(Y_s = y_s | X_s = h) = (2π)^(−p/2) |Σ_h|^(−1/2) exp( −(1/2) (y_s − μ_h)^T Σ_h^(−1) (y_s − μ_h) ).

Given a realization X = x of the marker field, the components of the observation feature field Y = y are conditionally independent of each other, so

P(Y = y | X = x) = ∏_{s=1}^{l} P(Y_s = y_s | X_s = x_s),

where P(Y = y | X = x) is the conditional probability (likelihood) of the observations Y = y given the labeling X = x; x_s is the value of the class label X_s and y_s the value of the image feature Y_s; μ_h is the feature mean, Σ_h the feature covariance matrix, and p the dimension of the image data.

The feature mean μ_h and the feature covariance matrix Σ_h are estimated by maximum likelihood:

μ_h = (1/n_h) ∑_{s: x_s = h} y_s,   Σ_h = (1/n_h) ∑_{s: x_s = h} (y_s − μ_h)(y_s − μ_h)^T,

where n_h is the number of regions currently labeled h.
the method for realizing the probability modeling in the second step comprises the following steps:
in the MRF model, assuming that the marker field has Markov properties, the joint probability P (x) of segmenting the marker field follows Gibbs distribution, as known from the harmsley-Clifford theorem, that is:
Figure BDA0002028843990000034
wherein, the first and the second end of the pipe are connected with each other,
Figure BDA0002028843990000035
is a normalization constant, X is an implementation of the segmentation marker field X, Ω is the set of all X; u (x) is an energy function, and U (x) = ∑ Σ c∈C V c (x),V c (x) Is a group's trend; t is a temperature constant; the energy function U (x) is the sum of all potential masses in the set of potential masses C;
replacing potential function V with loss matrix L c (x) The loss matrix L is:
Figure BDA0002028843990000036
wherein, theta m Indicating that the true ground object class is the m-th class, a n The ground object class obtained by the division is the nth class; l (theta) m ,a n ) Is the true ground object class theta m Is mistakenly divided into divided ground object types a n And l (θ) m ,a m )=0,m∈{1,2,…,k};
Function of energy
Figure BDA0002028843990000037
Wherein the content of the first and second substances,
Figure BDA0002028843990000038
is the marking result of the ith iteration area t as a real ground object type theta m ;x s The feature type of the current area s needs to be updated, i.e. the feature type is used as the feature type a n The role of (c); loss value L (x) t ,x s ) Indicating that the current region s is of the type x s And (4) marking penalty between the adjacent region t.
In the loss matrix L, if the spectral values of two ground object types are similar, the cost L (theta) m ,a n ) The value of (2) is large; if the spectral values of the two ground object types are greatly different, the cost l (theta) m ,a n ) The value of (c) is small.
The final segmentation image in step three is obtained as follows:

The final result is obtained by object-by-object iterative updating, where one object represents one over-segmented region. Since X = {X_s | 1 ≤ s ≤ l},

P(X = x) = P{X_1 = x_1, X_2 = x_2, …, X_l = x_l}.

Given a realization X = x of the marker field, the components of the observation field Y = y are conditionally independent of each other, so

P(Y = y | X = x) = ∏_{s=1}^{l} P(Y_s = y_s | X_s = x_s),

P(X = x | Y = y) ∝ P(Y = y | X = x) P(X = x).

Therefore, using the maximum a posteriori probability criterion and the Bayes formula,

x̂ = arg max_x P(X = x | Y = y) = arg max_x P(Y = y | X = x) P(X = x).

For i = 0 the initial labeling x_s^(0) is obtained with the k-means algorithm; for iteration i + 1,

x_s^(i+1) = arg max_{x_s ∈ {1,…,k}} P(Y_s = y_s | X_s = x_s) exp( −∑_{t∈N_s} L(x_t^(i), x_s) / T ).

After several iterations, the converged labeling x̂ = {x̂_1, x̂_2, …, x̂_l} is taken as the final segmentation result.
The invention has the following beneficial effects: changing the originally isotropic potential function in the marker-field energy function into an anisotropic one reduces the segmentation difficulty caused by similar spectral values, i.e. it handles the phenomena of "same object, different spectra" and "different objects, same spectrum" better, applies an appropriate penalty to each kind of misclassification, and improves segmentation accuracy. It alleviates the difficulty of segmentation caused by similar spectral values in remote sensing images and, compared with manual detection, greatly improves working efficiency. Quantitative analysis of the experimental images after segmentation shows that the segmentation results achieve the best Kappa and OA values; both the qualitative and the quantitative analyses therefore verify the effectiveness of the method for remote sensing image segmentation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of an experiment according to the present invention.
FIG. 2 is a flowchart illustrating the process of the region adjacency graph according to the present invention.
Fig. 3 is a diagram showing an example of the structure of a region adjacency graph according to the present invention, where (a) is an over-segmentation result graph, (b) is a partial graph of (a), and (c) is a corresponding region adjacency graph of (b).
FIG. 4 compares the segmentation results on experimental image one, where (a1) is a grayscale version of the original color image, (a2) is the ground-truth manual segmentation, (a3) is the ICM segmentation result, (a4) is the MRMRF segmentation result, (a5) is the IRGS segmentation result, (a6) is the OMRF segmentation result, and (a7) is the segmentation result of the present invention.
FIG. 5 compares the segmentation results on experimental image two, where (b1) is a grayscale version of the original color image, (b2) is the ground-truth manual segmentation, (b3) is the ICM segmentation result, (b4) is the MRMRF segmentation result, (b5) is the IRGS segmentation result, (b6) is the OMRF segmentation result, and (b7) is the segmentation result of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
As shown in FIG. 1, a Markov field remote sensing image segmentation method based on an anisotropic potential function includes the following steps:
the method comprises the following steps: the method comprises the steps of carrying out initialization over-segmentation processing on an input image, dividing the image into a plurality of over-segmentation areas, establishing an area adjacency graph RAG by utilizing the over-segmentation areas, and defining a neighborhood system N of the image, an object-level observation characteristic field Y and an object-level segmentation marker field X according to the area adjacency graph RAG.
As shown in fig. 2, the implementation method of the step one is:
1) For an input high-resolution image I(R, G, B), assume that the size of each band is a × b; the position set is defined as S = {(i1, j1) | 1 ≤ i1 ≤ a, 1 ≤ j1 ≤ b}.
2) Perform over-segmentation of the image I(R, G, B) with the mean-shift algorithm to obtain the corresponding over-segmented regions, and label each over-segmented region in order from left to right and from top to bottom. The set of over-segmented regions is R = {R_1, R_2, …, R_l}, where l is the number of over-segmented regions and R_s is the s-th over-segmented region, s ∈ {1, 2, …, l}.
3) Construct the corresponding region adjacency graph RAG from the over-segmented regions: G = (V, E), where V is the vertex set and E the edge set. Here V = {V_1, V_2, …, V_l}, with V_s representing the position of the over-segmented region R_s, s ∈ {1, 2, …, l}; the edge set is E = {e_st | 1 ≤ s, t ≤ l}, with e_st representing the spatial adjacency of the over-segmented regions R_s and R_t. Specifically, e_st is 1 if the two over-segmented regions R_s and R_t are adjacent and 0 otherwise, i.e.

e_st = 1 if d_st > 0, and e_st = 0 otherwise,

where d_st is the length of the common boundary between the over-segmented regions R_s and R_t.
4) Define the object-level segmentation marker field X = {X_s | 1 ≤ s ≤ l} = {X_1, X_2, …, X_l} on the region adjacency graph G, where each X_s is a random variable denoting the class label of the s-th region, and the class label X_s ∈ {1, 2, …, k}, with k the number of segmentation classes. Define the object-level observation feature field Y = {Y_s | 1 ≤ s ≤ l} = {Y_1, Y_2, …, Y_l} on G, where Y_s is the image feature extracted from the over-segmented region R_s, s ∈ {1, 2, …, l}.
5) Define the corresponding neighborhood system N = {N_s | 1 ≤ s ≤ l} from the region adjacency graph G, where each N_s is the neighborhood of vertex s, i.e. N_s = {R_t | e_st > 0, 1 ≤ t ≤ l}. Specifically, as shown in FIG. 3, (a) is the initial segmentation result obtained by the mean-shift algorithm with parameter 100. To illustrate the region adjacency graph, a part of (a) is cut out as (b); (c) is the region adjacency graph corresponding to (b), in which each region is a primitive consisting of two parts: X1-X5 are labels representing the class of each region, and Y1-Y5 are the region features extracted for the regions labeled X1-X5. Two regions are connected if they are spatially adjacent; otherwise they are not connected. A sketch of building such a region adjacency graph from an over-segmentation label map is given below.
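The following is a minimal sketch of this construction, not taken from the patent: it assumes the over-segmentation is available as an integer label map with region indices 0 … l−1 and uses a 4-neighborhood to detect shared boundaries; the function and variable names are illustrative.

```python
# Minimal sketch (illustrative, not the patent's implementation): building the region
# adjacency graph G = (V, E) from an over-segmentation given as an integer label map.
import numpy as np

def build_rag(labels: np.ndarray):
    """labels: 2-D array whose entries are region indices 0..l-1."""
    l = int(labels.max()) + 1
    d = np.zeros((l, l), dtype=int)   # d[s, t]: length of the common boundary of R_s and R_t

    def count_boundaries(a, b):
        # every 4-neighboring pixel pair carrying two different labels lengthens their boundary
        mask = a != b
        for s, t in zip(a[mask], b[mask]):
            d[s, t] += 1
            d[t, s] += 1

    count_boundaries(labels[:, :-1], labels[:, 1:])   # horizontally adjacent pixel pairs
    count_boundaries(labels[:-1, :], labels[1:, :])   # vertically adjacent pixel pairs

    e = (d > 0).astype(int)                           # e_st = 1 iff d_st > 0
    neighbors = {s: np.flatnonzero(e[s]).tolist() for s in range(l)}  # neighborhood system N_s
    return d, e, neighbors
```

Here d plays the role of d_st, e that of the edge set, and neighbors that of the neighborhood system N.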
Step two: and performing feature modeling of a likelihood function on the observation feature field Y by adopting mixed Gaussian distribution, and performing anisotropic probability modeling on a potential function of an energy function in the joint probability distribution of the segmentation marking field X.
1) Feature modeling of the likelihood function for the observation feature field Y:

The main task of the feature field is to extract image features from the original observation data and to use a likelihood function to reflect the feature information at each position as accurately as possible. The feature-field likelihood function commonly takes one of two distribution forms.

For computational efficiency, the method models the observation feature field Y of the image with a mixed Gaussian distribution. It is also assumed that homogeneous regions share the same distribution in the likelihood function, i.e. regions of the same class in the image obey the same distribution. Then

P(Y_s = y_s | X_s = h) = (2π)^(−p/2) |Σ_h|^(−1/2) exp( −(1/2) (y_s − μ_h)^T Σ_h^(−1) (y_s − μ_h) ).

By conditional independence (each component of the observation feature field Y = y is assumed independent of the others given one realization X = x of the segmentation marker field),

P(Y = y | X = x) = ∏_{s=1}^{l} P(Y_s = y_s | X_s = x_s),

where P(Y = y | X = x) is the conditional probability (likelihood) of the observations Y = y given the labeling X = x; x_s is the value of the class label X_s and y_s the value of the image feature Y_s; μ_h is the feature mean, Σ_h the feature covariance matrix, and p the dimension; for the image data here p = 3.

In the above formula the feature mean μ_h and the feature covariance Σ_h must be estimated; both are estimated by maximum likelihood. For each class h the estimates are

μ_h = (1/n_h) ∑_{s: x_s = h} y_s,   Σ_h = (1/n_h) ∑_{s: x_s = h} (y_s − μ_h)(y_s − μ_h)^T,

where n_h is the number of regions currently labeled h. A sketch of these estimates is given below.
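A minimal sketch of this step, assuming the region features are stacked in an array of shape (l, p) and the current labels in an array of length l; the function names are illustrative and do not come from the patent.

```python
# Minimal sketch (illustrative): maximum-likelihood Gaussian parameters per class and
# the log-likelihood log P(Y_s = y_s | X_s = h) used as the data term.
import numpy as np

def fit_class_gaussians(features: np.ndarray, labels: np.ndarray, k: int):
    """features: (l, p) region feature vectors; labels: (l,) current class labels in 0..k-1."""
    params = []
    for h in range(k):
        y = features[labels == h]
        mu = y.mean(axis=0)                                          # feature mean mu_h
        sigma = np.atleast_2d(np.cov(y, rowvar=False, bias=True))    # ML covariance Sigma_h
        params.append((mu, sigma))
    return params

def gaussian_loglik(y: np.ndarray, mu: np.ndarray, sigma: np.ndarray) -> float:
    """Log of the multivariate normal density N(y; mu, Sigma)."""
    p = y.size
    diff = y - mu
    _, logdet = np.linalg.slogdet(sigma)
    return -0.5 * (p * np.log(2.0 * np.pi) + logdet + diff @ np.linalg.solve(sigma, diff))
```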
2) Anisotropic probability modeling of the potential function of the energy function in the joint probability distribution of the segmentation marker field X.

In the MRF model the marker field is assumed to have the Markov property. By the Hammersley-Clifford theorem (given a neighborhood system N on the site set S, a random field X on S is an MRF if and only if it is a Gibbs random field), the joint probability P(x) of the marker field obeys a Gibbs distribution, i.e.

P(X = x) = (1/Z) exp( −U(x)/T ),

where Z = ∑_{x∈Ω} exp( −U(x)/T ) is the normalization constant, x is a realization of the segmentation marker field X and Ω is the set of all x; U(x) is the energy function, U(x) = ∑_{c∈C} V_c(x), with V_c(x) the clique potential. T is a temperature constant, usually set to 1 unless otherwise required; it is set to 1 here for simplicity of calculation. The energy function U(x) is the sum of the potentials of all cliques in the clique set C. For an isotropic GRF, if only second-order neighborhoods are considered, U(x) can be computed as

U(x) = ∑_c V_2(x_s, x_t),

where V_2(x_s, x_t) is the pairwise potential between region s and region t, which depends only on whether x_s = x_t.

Now consider making the potential function V_c(x) anisotropic, i.e. segmentation errors between different ground-object class pairs bear different penalties. For example, if a house and a patch of farmland are both labeled as town, this generally contradicts common geographic knowledge and should incur a large penalty; if a house and a patch of grassland are both labeled as town, this is quite plausible and should incur only a small penalty. The original potential function V_c(x) is therefore replaced by a loss matrix L, defined as

L = [ l(θ_m, a_n) ]_{k×k},

where θ_m indicates that the true ground-object class is the m-th class and a_n that the class obtained by the segmentation is the n-th class; l(θ_m, a_n) is the loss incurred when the true class θ_m is mis-segmented as class a_n, and l(θ_m, a_m) = 0, m ∈ {1, 2, …, k}. Compared with the clique potential V_c(x), the loss matrix L can describe the anisotropy among different classes and thus characterize the spatial interaction between different classes more accurately, improving segmentation accuracy.

At this point the energy becomes

U(x_s) = ∑_{t∈N_s} L(x_t^(i), x_s),

where x_t^(i) is the label of the neighboring region t from the previous (i-th) iteration, acting as the true ground-object class θ_m; x_s is the candidate label of the current region s to be updated, acting as a_n; the loss value L(x_t^(i), x_s) is then the labeling penalty between the current region s with label x_s and the neighboring region t.

In general, if the spectral values of two classes are similar, a larger penalty is given, i.e. l(θ_m, a_n) is large; if the spectral values of the two classes differ more, a smaller penalty is given, i.e. l(θ_m, a_n) is small. The specific loss matrix between the classes is usually determined experimentally. A sketch of this neighborhood energy is given below.
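A minimal sketch of the anisotropic neighborhood energy, with illustrative names and an invented 3-class loss matrix that merely demonstrates the idea that spectrally similar class pairs receive a larger mutual cost; the concrete matrices used in the experiments are not reproduced here.

```python
# Minimal sketch (illustrative): the anisotropic label energy U(x_s) = sum_{t in N_s} L[x_t, x_s],
# where L[m, n] = l(theta_m, a_n) and the diagonal of L is zero.
import numpy as np

def neighborhood_energy(x_s: int, s: int, labels, neighbors, loss: np.ndarray) -> float:
    """Penalty accumulated by candidate label x_s of region s against its neighbors' current labels."""
    return float(sum(loss[labels[t], x_s] for t in neighbors[s]))

# Purely illustrative 3-class loss matrix: the pair (0, 1) is assumed spectrally similar and
# therefore gets a large mutual cost, while the pair (0, 2) is assumed easy to tell apart.
loss_example = np.array([[0.0, 2.0, 0.5],
                         [2.0, 0.0, 1.0],
                         [0.5, 1.0, 0.0]])
```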
Step three: and obtaining posterior distribution of the segmentation marker field X under the condition of potential function anisotropy according to Bayes criterion, updating iterative segmentation by using maximum posterior probability criterion (MAP), and obtaining a final segmentation image.
In the MAF model at the pixel level, an optimal segmentation result is obtained through a pixel-by-pixel update experiment. In the subject-level MAF model, the final result is obtained by iterative update, but here, the update is iterated on an object-by-object basis, where one object represents one oversplited area, and the specific implementation process is as follows:
because X = { X s L 1 is less than or equal to s and less than or equal to l, so:
P(X=x)=P{X 1 =x 1 ,X 2 =x 2 ,…,X l =x l }
known from the conditional independence assumption (assuming that each component of the observation field Y = Y is independent of each other for one realization X = X of a given marker field):
Figure BDA0002028843990000081
Figure BDA0002028843990000082
therefore, the maximum a posteriori probability (MAP) and bayes formula are used to obtain:
Figure BDA0002028843990000083
when i =0, obtaining an initial value x by a k-means algorithm s (i) When i =i + 1:
Figure BDA0002028843990000084
after several iterations, the converged
Figure BDA0002028843990000085
As a final segmentation result.
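A minimal sketch of this object-by-object update, reusing the neighborhood system and loss matrix from the sketches above; the Gaussian data term is taken from scipy for brevity, the k-means initialization and the convergence test are simplified, and none of the names are the patent's.

```python
# Minimal sketch (illustrative): ICM-style object-by-object MAP update of the marker field.
import numpy as np
from scipy.stats import multivariate_normal

def iterate_segmentation(features, neighbors, loss, params, init_labels, T=1.0, max_iter=50):
    """features: (l, p); params: list of (mu_h, Sigma_h); init_labels: (l,), e.g. from k-means."""
    labels = np.asarray(init_labels).copy()
    k = len(params)
    for _ in range(max_iter):
        new_labels = labels.copy()
        for s in range(len(labels)):                 # update one region (object) at a time
            scores = np.empty(k)
            for h in range(k):
                data_term = multivariate_normal.logpdf(
                    features[s], mean=params[h][0], cov=params[h][1])
                label_term = -sum(loss[labels[t], h] for t in neighbors[s]) / T
                scores[h] = data_term + label_term   # log of (likelihood x label prior)
            new_labels[s] = int(np.argmax(scores))
        if np.array_equal(new_labels, labels):       # labeling has converged
            break
        labels = new_labels
    return labels
```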
Experimental verification:
Given the current state of research, quantitative indices are the main criteria for evaluating segmentation quality. Commonly used quantitative indices include the Kappa coefficient, classification accuracy and error rate, and boundary detection rate. The invention mainly adopts the Kappa coefficient and the overall accuracy (OA), which effectively evaluate the overall segmentation performance.

To compute the Kappa coefficient and OA, an error matrix (also called a confusion matrix) of size M × M is first computed, where M is the number of classes. Its entry N_ij is the number of pixels whose actual class is i and which are classified as j; N is the total number of image pixels; N_{+i} = ∑_j N_{ji} is the number of pixels assigned to class i in the classification result; N_{j+} = ∑_i N_{ji} is the number of pixels actually belonging to class j in the image.

The Kappa coefficient is computed as

K = ( N ∑_i N_ii − ∑_i N_{i+} N_{+i} ) / ( N² − ∑_i N_{i+} N_{+i} ).

The value of K lies between 0 and 1; the larger K is, the better the segmentation.

The overall accuracy (OA) evaluates the overall classification performance and is computed as

OA = (1/N) ∑_i N_ii.

The value of OA also lies between 0 and 1 and is generally slightly higher than the Kappa value K; the larger OA is, the higher the segmentation accuracy and the better the segmentation. A sketch of computing both indices from a confusion matrix is given below.
To verify the effect of the invention, it is compared with the following four segmentation methods. The ICM (Iterated Conditional Modes) method is a pixel-level greedy algorithm, a deterministic algorithm based on local conditional probabilities that completes image segmentation by updating the image labels point by point. The MRMRF (Multi-Resolution Markov Random Field) method is a wavelet-based algorithm that describes ground objects at different scales through the wavelet pyramid, expanding the spatial description range and extracting features more effectively for analysis; specifically, MRMRF models the image as several single-scale MRFs at different resolutions, describes the global features of the image with the lower-resolution layers and the detailed features with the higher-resolution layers, and propagates messages between layers from top to bottom through inter-layer causal relationships. The IRGS (Iterative Region Growing using Semantics) method is an object-level MRF algorithm that introduces edge-strength information when constructing the spatial context model and adopts an iterative region-growing technique with over-segmented regions as primitives, thus obtaining a more accurate segmentation result. The OMRF (Object Markov Random Field) method is another classical object-level Markov field model; unlike pixel-level methods, it regards a region as a primitive in order to consider more spatial neighborhood relationships and solves the model with an iterative updating scheme.
These four methods share a common feature: the potential function of the marker field is isotropic. Since the invention changes the potential function of the energy function in the marker field to an anisotropic one, comparison with these four methods demonstrates its advantages. To verify the effect of the invention, two aerial images of the Thai area are used as experimental data. Experimental image one is a remote sensing image of size 1024 × 1024 × 3 containing four classes (town, river, bare land and green land), as shown in FIG. 4; experimental image two is a remote sensing image of size 512 × 512 × 3 containing three classes (town, river and green land), as shown in FIG. 5.
The experiments were run on the following platform: Intel Core(TM) i5-4590 @ 3.30 GHz, 4 GB RAM, 64-bit Windows 7, MATLAB 2016a. The color image of remote sensing experiment one is shown in FIG. 4 (a1) and the ground-truth manual segmentation in FIG. 4 (a2). To ensure the fairness of the experiment, the parameters of every method were chosen, following the relevant references, so that its segmentation result reaches its optimum. For experimental image one, the ICM method with β = 30 gives the grayscale result in FIG. 4 (a3); the MRMRF method with potential = 1, level-number = 2 and a 'haar' wavelet base gives FIG. 4 (a4); the IRGS method with beta = 150 and yi = icm(y, 4, 0.5) gives FIG. 4 (a5); the OMRF method with β = 40 and MinRA = 800 gives FIG. 4 (a6). For experimental image one the method of the invention uses a loss matrix L (its concrete values are given as a figure in the original and are not reproduced here) and MinRA = 1000; the resulting grayscale segmentation is shown in FIG. 4 (a7).

The color image of remote sensing experiment two is shown in FIG. 5 (b1) and the ground-truth manual segmentation in FIG. 5 (b2). Again, following the relevant references, all parameters were chosen so that each method reaches its optimal segmentation result. For experimental image two, the ICM method with β = 10 gives the grayscale result in FIG. 5 (b3); the MRMRF method with potential = 1, level-number = 1 and a 'haar' wavelet base gives FIG. 5 (b4); the IRGS method with beta = 10 and yi = icm(y, 3, 0.5) gives FIG. 5 (b5); the OMRF method with β = 11 and MinRA = 400 gives FIG. 5 (b6). For experimental image two the method of the invention uses a loss matrix L (its concrete values are given as a figure in the original and are not reproduced here) and MinRA = 350; the resulting grayscale segmentation is shown in FIG. 5 (b7). The Kappa coefficients of the segmentation results on experimental images one and two are listed in Table 1, and the overall accuracies OA in Table 2.
TABLE 1. Kappa coefficients (given as an image in the original; not reproduced here)
TABLE 2. OA values (given as an image in the original; not reproduced here)
Aerial images contain rich texture information; spectral values within the same class vary considerably, while sub-objects of different classes may have similar spectral values. For example, in the town areas houses and roads have different spectral values, while trees inside the towns and trees in the forest have similar spectral values. For these reasons the existing segmentation methods produce many small, fragmented misclassifications. The invention changes the originally isotropic potential function in the marker-field energy function into an anisotropic one, which reduces the segmentation difficulty caused by similar spectral values, applies an appropriate penalty to each kind of misclassification and improves segmentation accuracy. For example, in the upper-right part of FIG. 5 (b7) a small green area inside the city is correctly assigned to the town class, whereas in FIG. 5 (b6) it is misclassified. In addition, quantitative analysis of the segmented experimental images in Tables 1 and 2 shows that the segmentation results of the invention achieve the best Kappa coefficient and OA values. From FIG. 4, FIG. 5 and Tables 1 and 2 it can be seen that the segmentation effect of the invention is the best; both the qualitative and the quantitative analyses therefore verify the effectiveness of the method for remote sensing image segmentation.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (5)

1. A Markov field remote sensing image segmentation method based on an anisotropic potential function, characterized in that the originally isotropic potential function of the energy function in the marker field is changed into an anisotropic one, the method comprising the following steps:
step one: performing an initial over-segmentation of an input image, dividing the image into a plurality of over-segmented regions, establishing a region adjacency graph from the over-segmented regions, and defining a neighborhood system of the image, an object-level observation feature field and an object-level segmentation marker field according to the region adjacency graph;
step two: performing feature modeling of the likelihood function for the observation feature field with a mixed Gaussian distribution, and performing anisotropic probability modeling of the potential function of the energy function in the joint probability distribution of the segmentation marker field;

the feature modeling in step two is realized as follows: the observation feature field Y of the image is modeled with a mixed Gaussian distribution, assuming that homogeneous regions share the same distribution in the likelihood function, i.e. regions of the same class in the image obey the same distribution; then

P(Y_s = y_s | X_s = h) = (2π)^(−p/2) |Σ_h|^(−1/2) exp( −(1/2) (y_s − μ_h)^T Σ_h^(−1) (y_s − μ_h) );

given a realization X = x of the marker field, the components of the observation feature field Y = y are conditionally independent of each other, so

P(Y = y | X = x) = ∏_{s=1}^{l} P(Y_s = y_s | X_s = x_s),

where P(Y = y | X = x) is the conditional probability of the observations Y = y given the labeling X = x; x_s is the value of the class label X_s and y_s the value of the image feature Y_s; μ_h is the feature mean, Σ_h the feature covariance matrix, and p the dimension of the image data;

the feature mean μ_h and the feature covariance matrix Σ_h are estimated by maximum likelihood:

μ_h = (1/n_h) ∑_{s: x_s = h} y_s,   Σ_h = (1/n_h) ∑_{s: x_s = h} (y_s − μ_h)(y_s − μ_h)^T,

where n_h is the number of regions currently labeled h;

the probability modeling in step two is realized as follows:

in the MRF model the marker field is assumed to have the Markov property, and by the Hammersley-Clifford theorem the joint probability P(x) of the segmentation marker field obeys a Gibbs distribution, i.e.

P(X = x) = (1/Z) exp( −U(x)/T ),

where Z = ∑_{x∈Ω} exp( −U(x)/T ) is the normalization constant, x is a realization of the segmentation marker field X and Ω is the set of all x; U(x) is the energy function, U(x) = ∑_{c∈C} V_c(x), with V_c(x) the clique potential; T is a temperature constant; the energy function U(x) is the sum of the potentials of all cliques in the clique set C;

the potential function V_c(x) is replaced by a loss matrix L:

L = [ l(θ_m, a_n) ]_{k×k},

where θ_m indicates that the true ground-object class is the m-th class and a_n that the class obtained by the segmentation is the n-th class; l(θ_m, a_n) is the loss incurred when the true class θ_m is mis-segmented as class a_n, and l(θ_m, a_m) = 0, m ∈ {1, 2, …, k};

the energy function becomes

U(x_s) = ∑_{t∈N_s} L(x_t^(i), x_s),

where x_t^(i) is the label of the neighboring region t from the i-th iteration, acting as the true ground-object class θ_m; x_s is the candidate label of the current region s to be updated, acting as a_n; the loss value L(x_t^(i), x_s) is the labeling penalty between the current region s with label x_s and the neighboring region t;
step three: obtaining the posterior distribution of the segmentation marker field under the anisotropic potential function according to the Bayes criterion, and iteratively updating the segmentation with the maximum a posteriori probability criterion to obtain the final segmentation image.
2. The Markov field remote sensing image segmentation method based on an anisotropic potential function according to claim 1, characterized in that the neighborhood system, the observation feature field and the segmentation marker field in step one are realized through the following steps:
1) the size of each band of the input image I(R, G, B) is a × b, and the position set is S = {(i1, j1) | 1 ≤ i1 ≤ a, 1 ≤ j1 ≤ b};
2) performing over-segmentation of the image I(R, G, B) with the mean-shift algorithm to obtain the corresponding over-segmented regions, and labeling each over-segmented region from left to right and from top to bottom, the set of over-segmented regions being R = {R_1, R_2, …, R_l}, where l is the number of over-segmented regions and R_s is the s-th over-segmented region, s ∈ {1, 2, …, l};
3) constructing the corresponding region adjacency graph RAG from the over-segmented regions: G = (V, E), where the vertex set is V = {V_1, V_2, …, V_l}, with V_s representing the position of the over-segmented region R_s, and the edge set is E = {e_st | 1 ≤ s, t ≤ l}, with e_st representing the spatial adjacency of the over-segmented regions R_s and R_t;
4) defining the object-level segmentation marker field X = {X_s | 1 ≤ s ≤ l} = {X_1, X_2, …, X_l} on the region adjacency graph G, where the random variable X_s denotes the class label of the s-th region and X_s ∈ {1, 2, …, k}, with k the number of segmentation classes; defining the object-level observation feature field Y = {Y_s | 1 ≤ s ≤ l} = {Y_1, Y_2, …, Y_l} on G, where Y_s is the image feature extracted from the over-segmented region R_s;
5) defining the corresponding neighborhood system N = {N_s | 1 ≤ s ≤ l} from the region adjacency graph G, where N_s is the neighborhood of vertex s, i.e. N_s = {R_t | e_st > 0, 1 ≤ t ≤ l}.
3. The Markov field remote sensing image segmentation method based on an anisotropic potential function according to claim 2, characterized in that the spatial adjacency e_st in the edge set E is realized as follows: its value is 1 if the two over-segmented regions R_s and R_t are adjacent, and 0 otherwise, i.e.

e_st = 1 if d_st > 0, and e_st = 0 otherwise,

where d_st is the length of the common boundary between the over-segmented regions R_s and R_t.
4. The Markov field remote sensing image segmentation method based on an anisotropic potential function according to claim 1 or 3, characterized in that, in the loss matrix L, if the spectral values of two ground-object classes are similar, the cost l(θ_m, a_n) is large; if the spectral values of the two ground-object classes differ greatly, the cost l(θ_m, a_n) is small.
5. The Markov field remote sensing image segmentation method based on an anisotropic potential function according to claim 4, characterized in that the final segmentation image in step three is obtained as follows:
the final result is obtained by object-by-object iterative updating, where one object represents one over-segmented region; since X = {X_s | 1 ≤ s ≤ l},

P(X = x) = P{X_1 = x_1, X_2 = x_2, …, X_l = x_l};

given a realization X = x of the marker field, the components of the observation field Y = y are conditionally independent of each other, so

P(Y = y | X = x) = ∏_{s=1}^{l} P(Y_s = y_s | X_s = x_s),

P(X = x | Y = y) ∝ P(Y = y | X = x) P(X = x);

therefore, using the maximum a posteriori probability criterion and the Bayes formula,

x̂ = arg max_x P(X = x | Y = y) = arg max_x P(Y = y | X = x) P(X = x);

for i = 0 the initial labeling x_s^(0) is obtained with the k-means algorithm; for iteration i + 1,

x_s^(i+1) = arg max_{x_s ∈ {1,…,k}} P(Y_s = y_s | X_s = x_s) exp( −∑_{t∈N_s} L(x_t^(i), x_s) / T );

after several iterations, the converged labeling x̂ = {x̂_1, x̂_2, …, x̂_l} is taken as the final segmentation result.
CN201910302954.0A 2019-04-16 2019-04-16 Markov field remote sensing image segmentation method based on anisotropic potential function Active CN111210433B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910302954.0A CN111210433B (en) 2019-04-16 2019-04-16 Markov field remote sensing image segmentation method based on anisotropic potential function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910302954.0A CN111210433B (en) 2019-04-16 2019-04-16 Markov field remote sensing image segmentation method based on anisotropic potential function

Publications (2)

Publication Number Publication Date
CN111210433A CN111210433A (en) 2020-05-29
CN111210433B (en) 2023-03-03

Family

ID=70787963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910302954.0A Active CN111210433B (en) 2019-04-16 2019-04-16 Markov field remote sensing image segmentation method based on anisotropic potential function

Country Status (1)

Country Link
CN (1) CN111210433B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112765095B (en) * 2020-12-24 2022-08-23 山东省国土测绘院 Method and system for filing image data of stereo mapping satellite
CN115239746B (en) * 2022-09-23 2022-12-06 成都国星宇航科技股份有限公司 Object-oriented remote sensing image segmentation method, device, equipment and medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108090913A (en) * 2017-12-12 2018-05-29 河南大学 A kind of image, semantic dividing method based on object level Gauss-Markov random fields

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1916624B1 (en) * 2006-10-25 2016-11-23 Agfa HealthCare NV Method for segmenting a digital medical image.

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108090913A (en) * 2017-12-12 2018-05-29 河南大学 A kind of image, semantic dividing method based on object level Gauss-Markov random fields

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research progress on synthetic aperture radar image segmentation; Wan Ling et al.; Remote Sensing Technology and Application, No. 01; full text *
Remote sensing image segmentation based on multi-scale region granularity analysis; Zheng Chen et al.; Spectroscopy and Spectral Analysis, No. 07; full text *

Also Published As

Publication number Publication date
CN111210433A (en) 2020-05-29

Similar Documents

Publication Publication Date Title
Mongus et al. Ground and building extraction from LiDAR data based on differential morphological profiles and locally fitted surfaces
Chen et al. An automated approach for updating land cover maps based on integrated change detection and classification methods
CN110599537A (en) Mask R-CNN-based unmanned aerial vehicle image building area calculation method and system
CN109871875B (en) Building change detection method based on deep learning
CN110796667B (en) Color image segmentation method based on improved wavelet clustering
CN107403434B (en) SAR image semantic segmentation method based on two-phase analyzing method
WO2023000160A1 (en) Hyperspectral remote sensing image semi-supervised classification method, apparatus, and device, and storage medium
CN106611422B (en) Stochastic gradient Bayes's SAR image segmentation method based on sketch structure
JP2008217706A (en) Labeling device, labeling method and program
CN113223042B (en) Intelligent acquisition method and equipment for remote sensing image deep learning sample
CN108427919B (en) Unsupervised oil tank target detection method based on shape-guided saliency model
CN104657980A (en) Improved multi-channel image partitioning algorithm based on Meanshift
CN111210433B (en) Markov field remote sensing image segmentation method based on anisotropic potential function
CN106651884A (en) Sketch structure-based mean field variational Bayes synthetic aperture radar (SAR) image segmentation method
CN108090913B (en) Image semantic segmentation method based on object-level Gauss-Markov random field
CN110930413A (en) Image segmentation method based on weak supervision multi-core classification optimization merging
CN110084107A (en) A kind of high-resolution remote sensing image method for extracting roads and device based on improvement MRF
CN106600611B (en) SAR image segmentation method based on sparse triple Markov field
Li et al. An aerial image segmentation approach based on enhanced multi-scale convolutional neural network
CN113963261A (en) Method and system for extracting full convolution neural network cultivated land based on multi-scale fusion
CN113409335B (en) Image segmentation method based on strong and weak joint semi-supervised intuitive fuzzy clustering
CN104732230A (en) Pathology image local-feature extracting method based on cell nucleus statistical information
Senthilnath et al. Automatic road extraction using high resolution satellite image based on texture progressive analysis and normalized cut method
CN108509835B (en) PolSAR image ground object classification method based on DFIC super-pixels
CN110136143A (en) Markov field multi-resolution remote sensing image segmentation method based on the ADMM algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant