CN104636758B - SAR image suitability prediction method based on support vector regression - Google Patents

SAR image suitability prediction method based on support vector regression

Info

Publication number
CN104636758B
CN104636758B (application CN201510075677.6A)
Authority
CN
China
Prior art keywords
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510075677.6A
Other languages
Chinese (zh)
Other versions
CN104636758A (en)
Inventor
杨卫东
王梓鉴
曹治国
邹腊梅
桑农
刘婧婷
张洁
刘晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201510075677.6A priority Critical patent/CN104636758B/en
Publication of CN104636758A publication Critical patent/CN104636758A/en
Application granted granted Critical
Publication of CN104636758B publication Critical patent/CN104636758B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a radar image suitability prediction method based on support vector regression. The method comprises: in the learning stage, extracting multi-dimensional features of SAR images to form a learning set; after preprocessing the learning-set sample features, dividing the learning set into L1 and L2, training a support vector machine with L1, classifying L2 with the resulting SVM model, and computing the adaptation rate of each sample from the classification accuracy and the distance between the sample features and the class centers; then fitting a regression on the learning-set features and their corresponding adaptation rates to obtain the suitability prediction function model. In the prediction stage, for an SAR image to be evaluated, the corresponding features are extracted as test sample data, preprocessed, and input into the suitability prediction function model to calculate the adaptation rate of the image. Based on the intensity, texture and structural characteristics of SAR images, the invention establishes a functional relation between the SAR image adaptation rate and the feature information; experiments verify that the method can accurately evaluate the matching performance of SAR images.

Description

SAR image adaptability prediction method based on support vector regression
Technical Field
The invention belongs to the technical field of machine learning, pattern recognition and template matching, and particularly relates to a Synthetic Aperture Radar (SAR) image suitability prediction method based on support vector regression.
Background
The selection of the SAR scene matching sub-area is a core technology of scene matching: it analyzes, evaluates and predicts the matching and localization performance (i.e. the suitability) of candidate SAR matching areas so as to determine whether a selected matching sub-area is suitable for matching. To date there is no mature scheme for selecting the matching region; most of the work is done manually, is generally difficult to analyze scientifically, and manually estimating the suitability of a selected matching sub-area can hardly meet the demands of practical application. Nor is there yet any method that makes quantitative, probabilistic predictions for matching-region selection.
Scholars at home and abroad have studied matching sub-area selection extensively. The main approach is to select the scene matching sub-area using image description parameters such as similarity, correlation length, gray variance, cross-correlation peak characteristics, information entropy, texture energy ratio and multi-resolution self-similarity measures. These methods, however, only consider the influence of a single factor on matching performance, fixing the other indexes in the experiments, and ignore the correlation between the factors, so the resulting scene matching sub-area selection criteria adapt poorly and resist interference badly.
In the published literature, methods for predicting the suitability of SAR images have neither matured into a solution applied in engineering practice, nor have quantitative, probabilistic prediction schemes for SAR image suitability been proposed.
Disclosure of Invention
To address the problem of SAR image suitability evaluation in an SAR scene matching system, the invention provides an SAR image suitability prediction method based on support vector regression, which comprises the following steps:
(1) extracting the bright/dark target density and structural salience features of the SAR training images; the feature set together with the given positive/negative category attribute forms the sample information of each SAR image, and the sample information of all SAR training images forms the learning set;
(2) preprocessing the feature data of the learning samples, namely removing the pairwise coupling between the feature dimensions of the sample data in the learning set, and normalizing the decoupled features dimension by dimension;
(3) dividing the preprocessed learning set into a learning set L1 and a learning set L2; training a support vector machine with the L1 samples to obtain the SVM classifier model for positive/negative-attribute samples and the Gaussian distribution characteristics of the positive/negative sample features; testing the classifier performance with the L2 samples, counting the category attribute assigned to each sample by the SVM classifier model, and calculating from the given positive/negative category attribute information the probabilities P+ and P- that the class-center features of the positive/negative samples in L1 belong to the positive/negative class;
(4) using the class-center features of the L1 positive/negative samples and their corresponding probabilities of belonging to the positive/negative class, together with the Gaussian distribution characteristics of each feature dimension of the L1 positive/negative samples, obtaining the mapping from each feature dimension of a learning-set sample to its probability of belonging to the positive/negative class, and thereby calculating for each L2 sample the probabilities p_j^+ and p_j^- that its j-th dimension feature belongs to the positive/negative class, and then the adaptation rate p_j_match of each feature dimension of the sample;
(5) classifying the learning set L2 with the control-variable method and the corresponding SVM model to obtain the classification accuracy P(j,k), and calculating the sensitivity of each feature dimension of L2 to suitability;
(6) calculating the adaptation rate of each L2 sample from the per-dimension adaptation rates of L2 obtained in step (4) and the per-dimension sensitivities of the L2 samples obtained in step (5); the L2 sample adaptation rates together with the per-dimension feature information form the new L2 sample information;
(7) fitting a regression on the new L2 sample information obtained in step (6) to obtain the image suitability prediction function model;
(8) for the SAR image to be evaluated, extracting its feature information of every dimension by the methods of steps (1) and (2) and preprocessing the data; the processed data predict the adaptation rate of the SAR image to be evaluated through the sample suitability prediction model of step (7).
Compared with the prior art, the invention has the technical effects that:
In the prior art, methods for evaluating the matching performance of SAR images have not matured into a solution; most of the work is done manually, is generally hard to analyze scientifically, and manually estimating the matching performance of a selected matching sub-area can hardly meet the demands of practical application. The present method trains a support vector regression model, establishes a functional relation between the adaptation rate of an SAR image and its feature information, predicts the suitability of SAR sub-areas, and extends the field to probabilistic prediction, making the results more accurate. It overcomes the drawbacks of subjectively screening sub-areas by hand, improves stability, and raises the quality of the screened SAR matching sub-areas.
The invention provides an SAR image matching-performance evaluation method based on a support vector regression machine: extract features from the SAR images, train a support vector machine model, classify the learning samples with it, obtain the adaptation rate of each sample from the classification result and the Gaussian distribution characteristics of the samples, train a support vector regression machine with the sample data carrying adaptation rates, and fit the regression prediction function model; finally, evaluate the SAR image under test with this function model to obtain its adaptation rate. Based on SAR imaging characteristics and the structural-intensity and texture features of sub-areas, the method combines several machine learning and pattern recognition techniques to realize suitability prediction of SAR images, forming a systematic SAR image suitability prediction method. It extends matching-area selection to probabilistic prediction, effectively improves on manual screening, raises the accuracy of SAR matching sub-area screening, and is of great significance for the research and development of SAR matching sub-area selection.
Drawings
FIG. 1 is a general flow chart of the support-vector-regression-based SAR image suitability prediction method of the present invention;
FIG. 2 is a partial SAR image in an embodiment of the invention;
FIG. 3 is a partial SAR image to be evaluated in the embodiment of the present invention;
FIG. 4 is a diagram of a predicted SAR image adaptation probability prediction result in an embodiment of the present invention;
fig. 5 is a graph of the predicted adaptation rate and the corresponding SAR image verification result in the embodiment of the present invention, wherein:
FIG. 5(a) shows the SAR images with poor matching performance and the predicted adaptation rate (0 ≤ p < 0.4);
FIG. 5(b) shows the SAR images with moderate matching performance and the predicted adaptation rate (0.4 ≤ p < 0.7);
FIG. 5(c) shows the SAR images with strong matching performance and the predicted adaptation rate (0.7 ≤ p ≤ 1).
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The invention provides a SAR image suitability prediction method based on a support vector regression, the general flow of which is shown in figure 1, and the specific process of the method is as follows:
1 learning phase
1.1 data preparation phase
Extract multi-dimensional intensity and texture-structure features of the SAR image, such as uniformity, divergence, bright/dark target density and structural salience; calibrate the matching result of the SAR image against the real-time image to serve as the category attribute; take the multi-dimensional features plus the category attribute as the sample information of each SAR image, and let the samples of all SAR images form the sample set. The invention selects the two-dimensional features of bright/dark target density and structural salience, together with the category attribute, as the sample information.
1.1.1 feature extraction
FIG. 2 shows part of the SAR training sample images in an embodiment of the present invention. For each SAR image, extract multi-dimensional feature information such as divergence, uniformity, bright/dark target density and structural salience;
uniformity r:
where μ is the mean of the gray scale and σ is the standard deviation of the gray scale.
Divergence div:
where σ1 denotes the standard deviation of the set of pixels whose gray values are smaller than the SAR image gray mean μ, and σ2 correspondingly denotes the standard deviation of the set of pixels whose gray values are larger than μ.
Bright and dark target density: the proportion of bright and dark target pixels, where bright target pixels are points whose gray value exceeds 2/3 of the full-image gray level of the SAR image, and dark target pixels are points whose gray value is below 1/3 of it.
Structural salience: perform binary edge extraction on the radar image, remove noise components containing few pixels by connected-component labeling, and take the ratio of the total number of remaining labeled pixels to the mean of the image width and height.
Through experiments, the invention finally selects the two-dimensional features of bright/dark target density and structural salience.
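As a concrete illustration, the following Python sketch computes the two selected features. The 2/3 and 1/3 thresholds follow the definitions above (interpreted here relative to the full-image maximum gray level, an assumption), while the Canny thresholds and the small-component cutoff `min_pixels` are assumed values the patent does not specify.

```python
import numpy as np
import cv2  # OpenCV, assumed available; img is a uint8 grayscale SAR image


def bright_dark_density(img):
    """Fraction of bright/dark target pixels: bright pixels exceed 2/3 of the
    full-image gray level, dark pixels fall below 1/3 of it (per the text;
    the reference level is taken as the image maximum, an assumption)."""
    g = img.astype(np.float64)
    hi, lo = g.max() * 2.0 / 3.0, g.max() / 3.0
    return ((g > hi).sum() + (g < lo).sum()) / g.size


def structural_salience(img, min_pixels=20):
    """Binary edge extraction, connected-component labeling to drop small
    (noise) components, then the ratio of the remaining labeled pixels to
    the mean of the image width and height."""
    edges = cv2.Canny(img, 50, 150)  # binary edge map (thresholds assumed)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(
        (edges > 0).astype(np.uint8))
    kept = sum(stats[i, cv2.CC_STAT_AREA] for i in range(1, n)
               if stats[i, cv2.CC_STAT_AREA] >= min_pixels)
    h, w = img.shape[:2]
    return kept / ((h + w) / 2.0)
```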
1.1.2 Positive and negative Category attributes
Mark the category attribute of each sample according to the matching result of the SAR image against the real-time image: if it matches well, mark the attribute +1 and treat the sample as one from an adaptive (matching) area; otherwise mark it -1, a sample from a non-adaptive area.
The two-dimensional feature information of bright/dark target density and structural salience of each SAR image, together with its positive/negative category attribute, is extracted as the sample information of that image, and the samples of all SAR images form the sample set.
1.2 data preprocessing
Preprocess the sample-set feature data: remove the pairwise coupling between the feature dimensions, then normalize the decoupled features dimension by dimension;
1.2.1 decoupling relationships
Analyzing the pairwise distribution of the features in the sample space shows no obvious linear relationship between them; the features can be regarded as nonlinearly related. We remove the coupling between features by Schmidt orthogonalization. Here the feature matrix Z = {Z1, Z2, ..., Zm} denotes the m-dimensional features of the sample set.
If the vectors Z1, Z2, ..., Zm are linearly independent, take the reference vector b0 = (1, 0, ..., 0)^T and let

$$b_1 = Z_1 - \frac{(Z_1, b_0)}{(b_0, b_0)} b_0$$

$$b_2 = Z_2 - \frac{(Z_2, b_1)}{(b_1, b_1)} b_1 - \frac{(Z_2, b_0)}{(b_0, b_0)} b_0$$

$$\vdots$$

$$b_m = Z_m - \frac{(Z_m, b_0)}{(b_0, b_0)} b_0 - \frac{(Z_m, b_1)}{(b_1, b_1)} b_1 - \cdots - \frac{(Z_m, b_{m-1})}{(b_{m-1}, b_{m-1})} b_{m-1}$$

Then b1, b2, ..., bm are pairwise orthogonal and form an orthogonal vector set; unitizing them, $X_j = b_j / \lVert b_j \rVert$, yields the matrix X = {X1, X2, ..., Xm};
The process from matrix Z to matrix X is called Schmidt orthogonalization. After the original data are processed by Schmidt orthogonalization, the coupling between the feature dimensions of the samples is removed and all features share the same reference vector, which makes it convenient later to consider the influence of a change in any single feature dimension on the matching accuracy.
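A minimal numpy sketch of this decoupling step, under the assumption that the columns of Z are the feature dimensions; the fixed reference vector b0 = (1, 0, ..., 0)^T follows the formulas above, and the projections are removed in the modified (sequential) Gram-Schmidt order, which is numerically equivalent:

```python
import numpy as np


def schmidt_decouple(Z):
    """Z: n x m feature matrix, one column per feature dimension.
    Orthogonalizes the columns against b0 = (1, 0, ..., 0)^T and against
    each other, then unit-normalizes each column (step 1.2.1)."""
    n, m = Z.shape
    b0 = np.zeros(n)
    b0[0] = 1.0                              # reference vector
    basis = [b0]
    X = np.empty((n, m), dtype=np.float64)
    for j in range(m):
        b = Z[:, j].astype(np.float64).copy()
        for prev in basis:                   # subtract projections onto b0, b1, ..., b_{j-1}
            b -= (b @ prev) / (prev @ prev) * prev
        basis.append(b)
        X[:, j] = b / np.linalg.norm(b)      # unitize: X_j = b_j / ||b_j||
    return X
```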
1.2.2 normalization of features in each dimension
After decoupling, the m-dimensional feature set of n samples is represented as

$$X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1m} \\ x_{21} & x_{22} & \cdots & x_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{nm} \end{bmatrix}$$

where x_ij is the j-th dimension feature value of the i-th sample. Normalize the samples dimension by dimension: let the j-th dimension feature x_ij of the i-th sample be normalized to y_ij; the mapping x → y is

$$y_{ij} = (y_{max} - y_{min}) \frac{x_{ij} - x_{min}}{x_{max} - x_{min}} + y_{min}$$

where [y_min, y_max] is the preset normalization interval, x_max = max(x_1j, x_2j, ..., x_nj)^T and x_min = min(x_1j, x_2j, ..., x_nj)^T. The normalized sample-set feature matrix is

$$Y = \begin{bmatrix} y_{11} & y_{12} & \cdots & y_{1m} \\ y_{21} & y_{22} & \cdots & y_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ y_{n1} & y_{n2} & \cdots & y_{nm} \end{bmatrix}$$

where y_nm denotes the normalized m-th dimension feature value of the n-th sample; after normalization all data of the invention lie in the range [0, 1].
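A one-function sketch of the dimension-wise min-max normalization; at prediction time the x_min and x_max recorded on the training set would be reused, which this sketch leaves to the caller:

```python
import numpy as np


def normalize_columns(X, y_min=0.0, y_max=1.0):
    """Dimension-wise min-max normalization of step 1.2.2:
    y_ij = (y_max - y_min) * (x_ij - x_min_j) / (x_max_j - x_min_j) + y_min."""
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    return (y_max - y_min) * (X - x_min) / (x_max - x_min) + y_min
```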
1.3 probability of each dimension feature belonging to a positive/negative sample
1.3.1 training support vector machine
Because the samples differ, divide the preprocessed sample-set data into a learning set L1 and a learning set L2. Train a support vector machine (SVM) with the L1 samples, optimizing the penalty coefficient c and the Gaussian kernel parameter g of the SVM, selecting the optimal c and g, and training the SVM model. Test the classifier performance with the L2 samples, count the category attributes assigned by the trained SVM classifier model, and, referring to the given positive/negative category attribute information, obtain the classification accuracy of the positive/negative classes, i.e. the probabilities P+ and P- that the class-center features of the L1 positive/negative samples belong to the positive/negative class.
Let the number of positive-attribute samples in learning set L2 be n1 and the number of negative-attribute samples be n2. After classification by the SVM classifier trained on learning set L1, k1 positive-attribute samples are assigned to the negative class and k2 negative-attribute samples are assigned to the positive class. Define the class-center feature of the L1 positive samples as the mean feature vector of the L1 positive samples and the class-center feature of the L1 negative samples as the mean feature vector of the L1 negative samples, where x_i^+, x_i^- are the features of the L1 positive and negative samples respectively. Then the probabilities P+ and P- that the class-center features of the L1 positive/negative samples belong to the positive/negative class are:

$$P_+ = \frac{n_1 - k_1}{n_1}, \qquad P_- = \frac{n_2 - k_2}{n_2}$$
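The following sketch reproduces this step with scikit-learn's SVC (a wrapper around the same libsvm code); the grid of candidate c and g values is an assumption, since the patent only states that the two parameters are optimized:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC


def train_and_center_probs(X1, y1, X2, y2):
    """Train an RBF-kernel SVM on L1 with a grid search over the penalty C
    and kernel width gamma (libsvm's -c and -g), then estimate
    P+ = (n1 - k1)/n1 and P- = (n2 - k2)/n2 on L2. Labels y are in {+1, -1}."""
    grid = GridSearchCV(SVC(kernel="rbf"),
                        {"C": 2.0 ** np.arange(-5, 6),
                         "gamma": 2.0 ** np.arange(-5, 6)},
                        cv=5)
    grid.fit(X1, y1)
    svm = grid.best_estimator_

    pred = svm.predict(X2)
    pos, neg = (y2 == 1), (y2 == -1)
    n1, n2 = pos.sum(), neg.sum()
    k1 = np.sum(pos & (pred == -1))   # positives assigned to the negative class
    k2 = np.sum(neg & (pred == 1))    # negatives assigned to the positive class
    return svm, (n1 - k1) / n1, (n2 - k2) / n2
```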
1.3.2 Probability p_j^+ that each dimension feature of an L2 sample belongs to the positive class

Let the class-center mean feature of the L1 positive-attribute samples be μ+, with corresponding probability P+ of belonging to the positive class, where x_i^+ = (x_i1, x_i2, ..., x_ij) is a positive sample and the positive-sample features obey a Gaussian distribution x+ ~ N(μ+, σ+²). Assuming a mapping between the learning-set sample features and their probability of belonging to the positive class, the Gaussian distribution of the positive samples gives the probability p_j^+ that the j-th dimension feature of an L2 sample belongs to the positive class as:

$$p_j^+ = C_1 \cdot \frac{1}{\sqrt{2\pi}\,\sigma_j^+} \int_{-\infty}^{x} \exp\!\left(-\frac{(t-\mu_j^+)^2}{2\sigma_j^{+2}}\right) dt + C_2$$

where μ_j^+ is the mean of the j-th dimension features of the positive samples, σ_j^{+2} is the variance of the j-th dimension features of the positive samples, and C1, C2 are linear coefficients;

when x = μ_j^+, the value corresponds to the probability P+ that the positive class center belongs to the positive class;
when x → +∞, the probability of belonging to the positive class is highest and is taken to be 1;
when x → -∞, the probability of belonging to the positive class is lowest and is taken to be 0.

Writing the function piecewise about x = μ_j^+ gives

$$p_j^+ = \begin{cases} 2P_+ \cdot \dfrac{1}{\sqrt{2\pi}\,\sigma_j^+} \displaystyle\int_{-\infty}^{x} \exp\!\left(-\dfrac{(t-\mu_j^+)^2}{2\sigma_j^{+2}}\right) dt, & x \le \mu_j^+ \\[2ex] 2(1-P_+) \cdot \dfrac{1}{\sqrt{2\pi}\,\sigma_j^+} \displaystyle\int_{-\infty}^{x} \exp\!\left(-\dfrac{(t-\mu_j^+)^2}{2\sigma_j^{+2}}\right) dt + 2P_+ - 1, & x \ge \mu_j^+ \end{cases}$$
1.3.3 Probability p_j^- that each dimension feature of an L2 sample belongs to the negative class

By the same reasoning as 1.3.2, let the class-center feature of the L1 negative-attribute samples be μ-, with corresponding probability P- of belonging to the negative class, where x_i^- = (x_i1, x_i2, ..., x_ij) is a negative sample and the negative-sample features obey a Gaussian distribution x- ~ N(μ-, σ-²). Assuming a mapping between the learning-set sample features and their probability of belonging to the negative class, the Gaussian distribution of the negative samples gives the probability p_j^- that the j-th dimension feature of an L2 sample belongs to the negative class as

$$p_j^- = C_1' \cdot \left(1 - \frac{1}{\sqrt{2\pi}\,\sigma_j^-} \int_{-\infty}^{x} \exp\!\left(-\frac{(t-\mu_j^-)^2}{2\sigma_j^{-2}}\right) dt\right) + C_2'$$

where μ_j^- is the mean of the j-th dimension features of the negative samples, σ_j^{-2} is the variance of the j-th dimension features of the negative samples, and C1', C2' are linear coefficients;

when x = μ_j^-, the value corresponds to the probability P- that the negative class center belongs to the negative class;
when x → -∞, the probability of belonging to the negative class is highest and is taken to be 1;
when x → +∞, the probability of belonging to the negative class is lowest and is taken to be 0.

Writing the function piecewise about x = μ_j^- gives

$$p_j^- = \begin{cases} 2(1-P_-) \cdot \left(1 - \dfrac{1}{\sqrt{2\pi}\,\sigma_j^-} \displaystyle\int_{-\infty}^{x} \exp\!\left(-\dfrac{(t-\mu_j^-)^2}{2\sigma_j^{-2}}\right) dt\right) + 2P_- - 1, & x \le \mu_j^- \\[2ex] 2P_- \cdot \left(1 - \dfrac{1}{\sqrt{2\pi}\,\sigma_j^-} \displaystyle\int_{-\infty}^{x} \exp\!\left(-\dfrac{(t-\mu_j^-)^2}{2\sigma_j^{-2}}\right) dt\right), & x \ge \mu_j^- \end{cases}$$
1.3.4 Adaptation rate p_j_match of each dimension feature of an L2 sample

$$p_{j\_match} = p_j^+ - p_j^-$$

This step yields the probabilities p_j^+, p_j^- that the j-th dimension feature of an L2 sample belongs to the positive/negative class, and the adaptation rate p_j_match of each dimension feature of the sample.
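Sections 1.3.2-1.3.4 amount to rescaled Gaussian CDFs glued together at the class-center mean. A sketch using scipy's normal CDF, where mu and sigma are the per-dimension Gaussian parameters estimated from the L1 positive/negative samples:

```python
import numpy as np
from scipy.stats import norm


def p_positive(x, mu, sigma, P_plus):
    """Piecewise probability of step 1.3.2 that a feature value x belongs
    to the positive class; phi is the Gaussian CDF with the positive-class
    per-dimension mean mu and standard deviation sigma."""
    phi = norm.cdf(x, loc=mu, scale=sigma)
    return np.where(x <= mu,
                    2.0 * P_plus * phi,
                    2.0 * (1.0 - P_plus) * phi + 2.0 * P_plus - 1.0)


def p_negative(x, mu, sigma, P_minus):
    """Piecewise probability of step 1.3.3 that x belongs to the negative class."""
    phi = norm.cdf(x, loc=mu, scale=sigma)
    return np.where(x <= mu,
                    2.0 * (1.0 - P_minus) * (1.0 - phi) + 2.0 * P_minus - 1.0,
                    2.0 * P_minus * (1.0 - phi))


def adaptation_per_dim(x, mu_p, sig_p, mu_n, sig_n, P_plus, P_minus):
    """Per-dimension adaptation rate of step 1.3.4: p_match = p+ - p-."""
    return (p_positive(x, mu_p, sig_p, P_plus)
            - p_negative(x, mu_n, sig_n, P_minus))
```

Both branches agree at x = mu (value P+ or P-) and reach 1 and 0 in the limits, matching the boundary conditions stated above.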
1.4 sensitivity of features in each dimension
Classify the learning set L2 with the control-variable method and the corresponding SVM model to obtain the classification accuracy P(j,k), and compute the sensitivity of each dimension feature of the L2 samples to suitability;
considering that different sample characteristics have different effects on matching performance, we introduce the concept of "sensitivity", as follows:
Vary the j-th dimension feature vector x_j of the L2 sample matrix while holding the remaining m-1 feature dimensions fixed. When x_j = (k, k, ..., k)^T the sample matrix is

$$X' = \begin{bmatrix} x_{11} & x_{12} & \cdots & k & \cdots & x_{1m} \\ x_{21} & x_{22} & \cdots & k & \cdots & x_{2m} \\ \vdots & \vdots & & \vdots & & \vdots \\ x_{n1} & x_{n2} & \cdots & k & \cdots & x_{nm} \end{bmatrix}$$

k takes the values 0, 0.1, 0.2, ..., 1.0 in turn. After the sample matrix is preprocessed, classify X' with the previously trained SVM model; with n_(j,k) samples correctly classified out of n samples in total, the classification accuracy is P_(j,k) = n_(j,k)/n. Record the maximum P_jmax and minimum P_jmin of the classification accuracy:
Pjmax=max{P(j,k)|k=0,0.1,...,1.0}
Pjmin=min{P(j,k)|k=0,0.1,...,1.0}
The values normalized across the m dimensions are used as the sensitivity of the adaptation rate to the corresponding dimension feature, i.e. the contribution (weight) of that feature to the final adaptation rate:

$$W_j = \frac{P_{jmax} - P_{jmin}}{\sum_{j=1}^{m}(P_{jmax} - P_{jmin})}, \qquad \sum_{j=1}^{m} W_j = 1$$

where W_j is the weight corresponding to the j-th dimension feature vector.
1.5 Calculating the adaptation rate of sample i
From the probabilities that each dimension feature vector of an L2 sample belongs to the positive/negative class obtained in step 1.3 and the sensitivities of each dimension feature vector of the L2 samples obtained in step 1.4, calculate the adaptation rate of every L2 sample; the adaptation rate of each sample together with its per-dimension feature information forms the new L2 sample information.
The resulting adaptation rate of a sample in L2 is

$$P_i = a \sum_{j=1}^{m} W_j (P_j^+ - P_j^-) + b, \qquad a + b = 1$$

where m is the dimension of the sample feature vector, W_j the sensitivity of the sample matching performance to the j-th dimension feature vector, P_j^+ the probability that the j-th dimension feature vector of the sample belongs to the positive class, and P_j^- the probability that it belongs to the negative class. When P_j^+ → 1, P_j^- → 0; when P_j^+ → 0, P_j^- → 1. Taking a = 0.5 and b = 0.5 gives

$$P_i = 0.5 \sum_{j=1}^{m} W_j (P_j^+ - P_j^-) + 0.5$$
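The per-sample combination is then a weighted sum; a minimal sketch with a = b = 0.5 as in the text:

```python
import numpy as np


def sample_adaptation_rate(p_plus, p_minus, W, a=0.5, b=0.5):
    """Step 1.5: P_i = a * sum_j W_j (P_j^+ - P_j^-) + b, with a + b = 1.
    p_plus, p_minus: length-m per-dimension probabilities of one sample."""
    return a * np.sum(W * (p_plus - p_minus)) + b
```

With a = b = 0.5 the weighted difference in [-1, 1] maps onto an adaptation rate in [0, 1].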
1.6 regression function model
Fit a regression on the new L2 sample-set information from step 1.5 to obtain the image suitability prediction function model.
For the learning-set samples, we use the feature x_i of each L2 sample and the adaptation rate P_i computed in step 1.5 to form the new L2 samples; a sample point can be represented as (x_i, P_i). Optimize the parameters c and g by the method of step 1.3.1 and select the c and g giving the best model performance. The libsvm library of Prof. Chih-Jen Lin (National Taiwan University) is used to help establish the sample adaptation rate prediction model: the new L2 samples obtained above train the support vector regression model, yielding the suitability prediction model f(x) of the L2 samples.
We use the mean squared error (MSE) and the squared correlation coefficient r² (the squared Pearson correlation between predicted and input adaptation rates) to verify the performance of the prediction model:

$$MSE = \frac{1}{l} \sum_{i=1}^{l} \left(f(x_i) - y_i\right)^2$$

$$r^2 = \frac{\left(l \sum_{i=1}^{l} f(x_i)\, y_i - \sum_{i=1}^{l} f(x_i) \sum_{i=1}^{l} y_i\right)^2}{\left(l \sum_{i=1}^{l} f(x_i)^2 - \left(\sum_{i=1}^{l} f(x_i)\right)^2\right)\left(l \sum_{i=1}^{l} y_i^2 - \left(\sum_{i=1}^{l} y_i\right)^2\right)}$$

where y_i denotes the input adaptation rate of training sample i, f(x_i) the predicted adaptation rate of training sample i, and l the number of training samples.
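A sketch of the regression stage using scikit-learn's SVR (also backed by libsvm); the parameter grid is again an assumed choice:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR


def fit_suitability_model(X_new, p_new):
    """Step 1.6: fit the suitability prediction function f(x) by support
    vector regression on the new L2 samples (x_i, P_i)."""
    grid = GridSearchCV(SVR(kernel="rbf"),
                        {"C": 2.0 ** np.arange(-3, 8),
                         "gamma": 2.0 ** np.arange(-7, 4)},
                        cv=5)
    grid.fit(X_new, p_new)
    return grid.best_estimator_


def mse_and_r2(model, X, y):
    """Mean squared error and squared (Pearson) correlation coefficient."""
    pred = model.predict(X)
    mse = np.mean((pred - y) ** 2)
    r2 = np.corrcoef(pred, y)[0, 1] ** 2
    return mse, r2
```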
2 prediction phase
Predict the adaptation rate of the SAR sub-image to be evaluated with the regression function model.
FIG. 3 shows part of the SAR images to be evaluated in an embodiment of the present invention. For each SAR image to be evaluated, extract the bright/dark target density and structural salience features as in step 1.1 of the learning phase, preprocess the data as in step 1.2, and feed the processed data into the sample adaptation rate regression prediction model of step 1.6 to predict the adaptation rate of the SAR image to be evaluated, that is:
$$P_{MatchProbability} = F(x_1, x_2, \ldots, x_m)$$

where P_MatchProbability denotes the adaptation rate of the sample and x_m denotes the m-th dimension feature of the sample.
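An illustrative end-to-end prediction for one image, chaining the sketches above; model, x_min_train and x_max_train are assumed to be stored from the learning phase, and the Schmidt-orthogonalization step is omitted for brevity:

```python
import numpy as np

# sar_image: a uint8 grayscale SAR sub-image to be evaluated
x = np.array([bright_dark_density(sar_image), structural_salience(sar_image)])
x = (x - x_min_train) / (x_max_train - x_min_train)   # map into [0, 1] with the training range
p = float(model.predict(x.reshape(1, -1)))            # P_MatchProbability = F(x1, ..., xm)
```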
FIG. 4 shows the predicted adaptation rates of the SAR images to be evaluated in the embodiment of the present invention: samples 1-12 are SAR images with poor matching performance (0 ≤ p < 0.4); samples 13-20 are SAR images with moderate matching performance (0.4 ≤ p < 0.7); samples 21-31 are SAR images with strong matching performance (0.7 ≤ p ≤ 1). FIG. 5 shows the predicted adaptation rates and the corresponding SAR image verification results: FIG. 5(a) shows the SAR images with poor matching performance and their predicted adaptation rates (0 ≤ p < 0.4), i.e. the adaptation rates of samples 1-12 and the corresponding verification results; FIG. 5(b) the SAR images with moderate matching performance (0.4 ≤ p < 0.7), i.e. samples 13-20; and FIG. 5(c) the SAR images with strong matching performance (0.7 ≤ p ≤ 1), i.e. samples 21-31.

Claims (8)

1. A SAR image adaptability prediction method based on support vector regression is characterized by comprising the following steps:
(1) extracting the bright/dark target density and structural salience features of the SAR training images; the feature set together with the given positive/negative category attribute forms the sample information of each SAR image, and the sample information of all SAR training images forms the learning set;
(2) preprocessing the feature data in the learning set, namely removing the pairwise coupling between the feature dimensions of the feature data in the learning set, and normalizing the decoupled features dimension by dimension;
(3) dividing the preprocessed learning set into a learning set L1 and a learning set L2; training a support vector machine with the L1 samples to obtain the SVM classifier model for positive/negative-attribute samples and the Gaussian distribution characteristics of the positive/negative sample features; testing the classifier performance with the L2 samples, counting the category attribute assigned to each sample by the SVM classifier model, and calculating from the given positive/negative category attribute information the probabilities P+ and P- that the class-center features of the positive/negative samples in L1 belong to the positive/negative class;
(4) using the class-center features of the L1 positive/negative samples and their corresponding probabilities of belonging to the positive/negative class, together with the Gaussian distribution characteristics of each feature dimension of the L1 positive/negative samples, obtaining the mapping from each feature dimension of a learning-set sample to its probability of belonging to the positive/negative class, and thereby calculating for each L2 sample the probabilities p_j^+ and p_j^- that its j-th dimension feature belongs to the positive/negative class, and then the adaptation rate p_j_match of each feature dimension of the sample, where j denotes the dimension index of the feature vector, j ranges from 1 to m, and m denotes the dimension of the sample feature vector;
(5) classifying the learning set L2 with the control-variable method and the corresponding SVM model to obtain the classification accuracy P(j,k), and calculating the sensitivity of each dimension feature of L2 to suitability, where k denotes the value of every element of the j-th dimension feature vector and k takes the values 0, 0.1, 0.2, ..., 1.0 in turn;
(6) calculating the adaptation rate of each L2 sample from the per-dimension adaptation rates of L2 obtained in step (4) and the per-dimension sensitivities of the L2 samples obtained in step (5); the L2 sample adaptation rates together with the per-dimension feature information form the new L2 sample information;
(7) fitting a regression on the new L2 sample information obtained in step (6) to obtain the image suitability prediction function model;
(8) for the SAR image to be evaluated, extracting its corresponding features by the methods of steps (1) and (2) and preprocessing the data; the processed data predict the adaptation rate of the SAR image to be evaluated through the suitability prediction model of step (7).
2. The method of claim 1, wherein removing the pairwise coupling between the feature dimensions of the feature data in the learning set in step (2) is specifically:
let the feature matrix Z = {Z1, Z2, ..., Zm} denote the m-dimensional features of the learning set, take the reference vector b0 = (1, 0, ..., 0)^T, and let

$$b_1 = Z_1 - \frac{(Z_1, b_0)}{(b_0, b_0)} b_0$$

$$b_2 = Z_2 - \frac{(Z_2, b_1)}{(b_1, b_1)} b_1 - \frac{(Z_2, b_0)}{(b_0, b_0)} b_0$$

$$\vdots$$

$$b_m = Z_m - \frac{(Z_m, b_0)}{(b_0, b_0)} b_0 - \frac{(Z_m, b_1)}{(b_1, b_1)} b_1 - \frac{(Z_m, b_2)}{(b_2, b_2)} b_2 - \cdots - \frac{(Z_m, b_{m-1})}{(b_{m-1}, b_{m-1})} b_{m-1}$$

then b1, b2, ..., bm are pairwise orthogonal and form an orthogonal vector set; unitizing them yields the matrix X = {X1, X2, ..., Xm}, with n the total number of samples:

$$X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1m} \\ x_{21} & x_{22} & \cdots & x_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{nm} \end{bmatrix}$$

$$X_1 = \frac{b_1}{\lVert b_1 \rVert}, \quad X_2 = \frac{b_2}{\lVert b_2 \rVert}, \quad \ldots, \quad X_m = \frac{b_m}{\lVert b_m \rVert}.$$
3. The method according to claim 1 or 2, wherein normalizing the decoupled features dimension by dimension in step (2) is specifically:
let the j-th dimension feature x_ij of the i-th sample be normalized to y_ij; the mapping x → y is

$$y_{ij} = (y_{max} - y_{min}) \frac{x_{ij} - x_{min}}{x_{max} - x_{min}} + y_{min}$$

where [y_min, y_max] is the set normalization interval, x_max = max(x_1j, x_2j, ..., x_nj)^T and x_min = min(x_1j, x_2j, ..., x_nj)^T; the normalized sample-set feature matrix is

$$Y = \begin{bmatrix} y_{11} & y_{12} & \cdots & y_{1m} \\ y_{21} & y_{22} & \cdots & y_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ y_{n1} & y_{n2} & \cdots & y_{nm} \end{bmatrix}$$

where y_nm denotes the normalized m-th dimension feature value of the n-th sample.
4. The method according to claim 1 or 2, wherein calculating in step (3) the probabilities P+ and P- that the class-center features of the L1 positive/negative samples belong to the positive/negative class is specifically:
let the number of positive-attribute samples in learning set L2 be n1 and the number of negative-attribute samples be n2; after classification by the SVM classifier trained on learning set L1, k1 positive-attribute samples are assigned to the negative class and k2 negative-attribute samples are assigned to the positive class; define the class-center feature of the L1 positive samples as the mean of the L1 positive-sample features, n denoting the total number of samples, and the class-center feature of the L1 negative samples as the mean of the L1 negative-sample features, where x_i^+, x_i^- are the features of the L1 positive and negative samples respectively; then the probabilities P+ and P- that the class-center features of the L1 positive/negative samples belong to the positive and negative class are:

$$P_+ = \frac{n_1 - k_1}{n_1}$$

$$P_- = \frac{n_2 - k_2}{n_2}.$$
5. The method according to claim 1 or 2, wherein the adaptation rate of each dimension feature of the L2 samples is obtained in step (4) as follows:
(4.1) Probability p_j^+ that each dimension feature of an L2 sample belongs to the positive class
Let the class-center mean feature of the L1 positive-attribute samples be μ+, with corresponding probability P+ of belonging to the positive class, where x_i^+ = (x_i1, x_i2, ..., x_ij) is a positive sample and the positive-sample features obey a Gaussian distribution x+ ~ N(μ+, σ+²); assuming a mapping between the learning-set sample features and their probability of belonging to the positive class, the Gaussian distribution of the positive samples gives the probability p_j^+ that the j-th dimension feature of an L2 sample belongs to the positive class as:

$$p_j^+ = C_1 \cdot \frac{1}{\sqrt{2\pi}\,\sigma_j^+} \int_{-\infty}^{x} \exp\!\left(-\frac{(t-\mu_j^+)^2}{2\sigma_j^{+2}}\right) dt + C_2$$

where μ_j^+ is the mean of the j-th dimension features of the positive samples, σ_j^{+2} is the variance of the j-th dimension features of the positive samples, and C1, C2 are linear coefficients;
when x = μ_j^+, the value corresponds to the probability P+ that the positive class center belongs to the positive class;
when x → +∞, the probability of belonging to the positive class is highest and is taken to be 1;
when x → -∞, the probability of belonging to the positive class is lowest and is taken to be 0;
expressing the function piecewise about x = μ_j^+ gives

$$p_j^+ = \begin{cases} 2P_+ \cdot \dfrac{1}{\sqrt{2\pi}\,\sigma_j^+} \displaystyle\int_{-\infty}^{x} \exp\!\left(-\dfrac{(t-\mu_j^+)^2}{2\sigma_j^{+2}}\right) dt, & x \le \mu_j^+ \\[2ex] 2(1-P_+) \cdot \dfrac{1}{\sqrt{2\pi}\,\sigma_j^+} \displaystyle\int_{-\infty}^{x} \exp\!\left(-\dfrac{(t-\mu_j^+)^2}{2\sigma_j^{+2}}\right) dt + 2P_+ - 1, & x \ge \mu_j^+ \end{cases}$$
(4.2) Probability p_j^- that each dimension feature of an L2 sample belongs to the negative class
Let the class-center feature of the L1 negative-attribute samples be μ-, with corresponding probability P- of belonging to the negative class, where x_i^- = (x_i1, x_i2, ..., x_ij) is a negative sample and the negative-sample features obey a Gaussian distribution x- ~ N(μ-, σ-²); assuming a mapping between the learning-set sample features and their probability of belonging to the negative class, the Gaussian distribution of the negative samples gives the probability p_j^- that the j-th dimension feature of an L2 sample belongs to the negative class as:

$$p_j^- = C_1' \cdot \left(1 - \frac{1}{\sqrt{2\pi}\,\sigma_j^-} \int_{-\infty}^{x} \exp\!\left(-\frac{(t-\mu_j^-)^2}{2\sigma_j^{-2}}\right) dt\right) + C_2'$$

where μ_j^- is the mean of the j-th dimension features of the negative samples, σ_j^{-2} is the variance of the j-th dimension features of the negative samples, and C1', C2' are linear coefficients;
when x = μ_j^-, the value corresponds to the probability P- that the negative class center belongs to the negative class;
when x → -∞, the probability of belonging to the negative class is highest and is taken to be 1;
when x → +∞, the probability of belonging to the negative class is lowest and is taken to be 0;
expressing the function piecewise about x = μ_j^- gives

$$p_j^- = \begin{cases} 2(1-P_-) \cdot \left(1 - \dfrac{1}{\sqrt{2\pi}\,\sigma_j^-} \displaystyle\int_{-\infty}^{x} \exp\!\left(-\dfrac{(t-\mu_j^-)^2}{2\sigma_j^{-2}}\right) dt\right) + 2P_- - 1, & x \le \mu_j^- \\[2ex] 2P_- \cdot \left(1 - \dfrac{1}{\sqrt{2\pi}\,\sigma_j^-} \displaystyle\int_{-\infty}^{x} \exp\!\left(-\dfrac{(t-\mu_j^-)^2}{2\sigma_j^{-2}}\right) dt\right), & x \ge \mu_j^- \end{cases}$$
(4.3) Adaptation rate p_j_match of each dimension feature of an L2 sample

$$p_{j\_match} = p_j^+ - p_j^-$$

This step yields the probabilities p_j^+, p_j^- that the j-th dimension feature of an L2 sample belongs to the positive/negative class and the adaptation rate p_j_match of each dimension feature of the sample.
6. The method according to claim 1 or 2, wherein step (5) calculates the sensitivity of each dimension feature of the L2 samples to suitability as follows:
vary the j-th dimension feature vector x_j of the L2 sample matrix while holding the remaining m-1 feature dimensions fixed; when x_j = (k, k, ..., k)^T, with k taking the values 0, 0.1, 0.2, ..., 1.0 in turn, the sample matrix is

$$X' = \begin{bmatrix} x_{11} & x_{12} & \cdots & k & \cdots & x_{1m} \\ x_{21} & x_{22} & \cdots & k & \cdots & x_{2m} \\ \vdots & \vdots & & \vdots & & \vdots \\ x_{n1} & x_{n2} & \cdots & k & \cdots & x_{nm} \end{bmatrix}$$

after the sample matrix is preprocessed, classify X' with the previously trained SVM model to obtain the number n_(j,k) of correctly classified samples; with n samples in total, the classification accuracy is P_(j,k) = n_(j,k)/n; record the maximum P_jmax and minimum P_jmin of the classification accuracy:

Pjmax = max{P(j,k) | k = 0, 0.1, ..., 1.0}

Pjmin = min{P(j,k) | k = 0, 0.1, ..., 1.0}

the values normalized across the m dimensions are used as the sensitivity of the adaptation rate to the corresponding dimension feature:

$$W_j = \frac{P_{jmax} - P_{jmin}}{\sum_{j=1}^{m}(P_{jmax} - P_{jmin})}, \qquad \sum_{j=1}^{m} W_j = 1$$

where W_j is the sensitivity of the adaptation rate to the j-th dimension feature vector.
7. The method according to claim 1 or 2, wherein step (6) obtains the adaptation rate P_i of the i-th L2 sample as follows:

$$P_i = a \sum_{j=1}^{m} W_j (P_j^+ - P_j^-) + b$$

a + b = 1

where m denotes the dimension of the sample feature, W_j denotes the sensitivity of the adaptation rate to the j-th dimension feature vector, P_j^+ denotes the probability that the j-th dimension feature of an L2 sample belongs to the positive class, and P_j^- the probability that the j-th dimension feature of an L2 sample belongs to the negative class;
when P_j^+ → 1, P_j^- → 0 and P_i → 1;
when P_j^+ → 0, P_j^- → 1 and P_i → 0;
taking a = 0.5 and b = 0.5 gives

$$P_i = 0.5 \sum_{j=1}^{m} W_j (P_j^+ - P_j^-) + 0.5.$$
8. The method of claim 1 or 2, wherein the features extracted in step (1) are:
bright and dark target density: the proportion of bright and dark target pixels, where bright target pixels are points whose gray value exceeds 2/3 of the full-image gray level of the SAR image and dark target pixels are points whose gray value is below 1/3 of it;
structural salience: perform binary edge extraction on the radar image, remove noise components containing few pixels by connected-component labeling, and take the ratio of the total number of remaining labeled pixels to the mean of the image width and height.
CN201510075677.6A 2015-02-12 2015-02-12 A kind of SAR image suitability Forecasting Methodology based on support vector regression Active CN104636758B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510075677.6A CN104636758B (en) 2015-02-12 2015-02-12 A kind of SAR image suitability Forecasting Methodology based on support vector regression


Publications (2)

Publication Number Publication Date
CN104636758A CN104636758A (en) 2015-05-20
CN104636758B true CN104636758B (en) 2018-02-16

Family

ID=53215486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510075677.6A Active CN104636758B (en) 2015-02-12 2015-02-12 A kind of SAR image suitability Forecasting Methodology based on support vector regression

Country Status (1)

Country Link
CN (1) CN104636758B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106054189B (en) * 2016-07-17 2018-06-05 西安电子科技大学 Radar target identification method based on dpKMMDP models
CN110246134A (en) * 2019-06-24 2019-09-17 株洲时代电子技术有限公司 A kind of rail defects and failures sorter

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102073873A (en) * 2011-01-28 2011-05-25 华中科技大学 Method for selecting SAR (spaceborne synthetic aperture radar) scene matching area on basis of SVM (support vector machine)
CN102663436A (en) * 2012-05-03 2012-09-12 武汉大学 Self-adapting characteristic extracting method for optical texture images and synthetic aperture radar (SAR) images
CN102902979A (en) * 2012-09-13 2013-01-30 电子科技大学 Method for automatic target recognition of synthetic aperture radar (SAR)
CN103942749A (en) * 2014-02-24 2014-07-23 西安电子科技大学 Hyperspectral ground feature classification method based on modified cluster hypothesis and semi-supervised extreme learning machine

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102073873A (en) * 2011-01-28 2011-05-25 华中科技大学 Method for selecting SAR (spaceborne synthetic aperture radar) scene matching area on basis of SVM (support vector machine)
CN102663436A (en) * 2012-05-03 2012-09-12 武汉大学 Self-adapting characteristic extracting method for optical texture images and synthetic aperture radar (SAR) images
CN102902979A (en) * 2012-09-13 2013-01-30 电子科技大学 Method for automatic target recognition of synthetic aperture radar (SAR)
CN103942749A (en) * 2014-02-24 2014-07-23 西安电子科技大学 Hyperspectral ground feature classification method based on modified cluster hypothesis and semi-supervised extreme learning machine

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Scene Infrared Image Simulation; Lamei Zou et al.; International Journal of Digital Content Technology and its Applications; 2012-03-31; Vol. 6, No. 4; pp. 77-86 *
基于检测识别的实孔径雷达景象匹配定位方法 (Real-aperture radar scene matching and localization method based on detection and recognition); 杨卫东 et al.; 《华中科技大学学报》 (Journal of Huazhong University of Science and Technology); 2005-02-28; Vol. 33, No. 2; pp. 25-27 *

Also Published As

Publication number Publication date
CN104636758A (en) 2015-05-20

Similar Documents

Publication Publication Date Title
CN111160176B (en) Fusion feature-based ground radar target classification method for one-dimensional convolutional neural network
CN111414942B (en) Remote sensing image classification method based on active learning and convolutional neural network
CN110516596B (en) Octave convolution-based spatial spectrum attention hyperspectral image classification method
WO2022062419A1 (en) Target re-identification method and system based on non-supervised pyramid similarity learning
CN110197286A (en) A kind of Active Learning classification method based on mixed Gauss model and sparse Bayesian
CN113408605A (en) Hyperspectral image semi-supervised classification method based on small sample learning
CN109492750B (en) Zero sample image classification method based on convolutional neural network and factor space
CN112001422B (en) Image mark estimation method based on deep Bayesian learning
CN106156805A (en) A kind of classifier training method of sample label missing data
CN103440512A (en) Identifying method of brain cognitive states based on tensor locality preserving projection
CN111401426A (en) Small sample hyperspectral image classification method based on pseudo label learning
CN117315381B (en) Hyperspectral image classification method based on second-order biased random walk
CN111639697B (en) Hyperspectral image classification method based on non-repeated sampling and prototype network
CN114795178B (en) Brain state decoding method based on multi-attention neural network
Wang et al. A novel sparse boosting method for crater detection in the high resolution planetary image
CN116310466A (en) Small sample image classification method based on local irrelevant area screening graph neural network
CN117349743A (en) Data classification method and system of hypergraph neural network based on multi-mode data
CN115661539A (en) Less-sample image identification method embedded with uncertainty information
CN115496950A (en) Neighborhood information embedded semi-supervised discrimination dictionary pair learning image classification method
CN104636758B (en) A kind of SAR image suitability Forecasting Methodology based on support vector regression
CN117911771A (en) Method for constructing training model for medical chest image disease classification based on Resnext integrated network
CN107729942A (en) A kind of sorting technique of structured view missing data
CN109872319B (en) Thermal image defect extraction method based on feature mining and neural network
CN108304546B (en) Medical image retrieval method based on content similarity and Softmax classifier
CN112804650B (en) Channel state information data dimension reduction method and intelligent indoor positioning method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant