CN104636758B - A SAR image suitability prediction method based on support vector regression - Google Patents
Publication number: CN104636758B; application: CN201510075677.6A
Authority: CN (China)
Legal status: Active
Classifications: Image Analysis; Other Investigation Or Analysis Of Materials By Electrical Means
Abstract

The invention discloses a radar image suitability prediction method based on support vector regression. The method comprises: in the learning stage, extracting multi-dimensional features of SAR images to form a learning set; after preprocessing the learning-set sample features, dividing the learning set into L1 and L2, training a support vector machine on L1, classifying L2 with the obtained SVM model, and computing the fitting rate of each sample from the classification accuracy and the distance between the sample features and the class centers; then fitting a regression on the learning-set features and their fitting rates to obtain the suitability prediction function model. In the prediction stage, for a SAR image to be assessed, the corresponding features are extracted as test sample data, preprocessed, and input to the suitability prediction function model to compute the fitting rate of the image. Based on the intensity, texture, and structural characteristics of SAR images, the invention establishes the functional relation between the SAR image fitting rate and its feature information; experiments verify that the method can accurately evaluate the matching performance of SAR images.
Description

Technical field

The invention belongs to the technical fields of machine learning, pattern recognition, and template matching, and in particular relates to a synthetic aperture radar (Synthetic Aperture Radar, SAR) image suitability prediction method based on support vector regression. On the basis of SAR imaging characteristics and intensity, structure, and texture features, the method predicts the matching suitability of SAR images, and the suitability prediction results obtained by the method are accurate and effective.
Background technology

The selection of SAR scene-matching sub-areas is the core technology of scene matching navigation: the suitability (i.e. the matching and positioning performance) of candidate SAR scene-matching areas is analyzed, assessed, and predicted so as to determine whether a selected matching sub-area is appropriate for matching. To date there is no mature scheme for selecting matching areas; most of the task is done manually, which makes scientific analysis difficult, and manually estimating the matching performance of a selected sub-area rarely meets the needs of practical application. So far, no method makes a quantitative, probabilistic prediction for matching-area selection.

For the selection of matching sub-areas, scholars at home and abroad have carried out a great deal of research. The main proposed approaches select scene-matching sub-areas using image-description parameters such as the similarity between the scene and the matching sub-area, correlation length, gray variance, cross-correlation peak features, information entropy, texture energy ratio, and multi-resolution self-similarity. However, these methods consider only the influence of a single factor on matching performance while fixing the other indices in the experiment, and neglect the correlations among these factors; as a result, the selection criteria for scene-matching sub-areas adapt poorly and have weak resistance to interference.

In the published literature, no mature solution has yet been formed for predicting SAR image suitability, nor has any quantitative, probabilistic prediction scheme for SAR image suitability been applied in engineering practice.
Content of the invention

Aiming at the problem of evaluating the matching suitability of SAR images in SAR scene-matching systems, the present invention proposes a SAR image matching-suitability prediction method based on support vector regression, which specifically comprises:

(1) extracting the light/dark target density and structure-salience features of the SAR training images; the feature set and the given positive/negative class attribute form the sample information of each SAR image, and the sample information of all SAR training images forms the learning set;

(2) preprocessing the learning-set features, i.e. removing the coupling relation between every two feature dimensions of the learning-set samples, and normalizing the decoupled features dimension by dimension;

(3) dividing the preprocessed learning set into learning sets L1 and L2; training a support vector machine on the samples of L1 to obtain the SVM classifier model of the positive/negative attribute samples and the Gaussian distribution characteristics of the features of the two sample classes; testing the classifier on L2, counting the class attribute of each sample after classification by the SVM model, and, according to the given positive/negative class attributes, computing the probabilities P+, P- that the positive/negative class-center features of L1 belong to the positive/negative class;

(4) using the positive/negative class-center features of L1, their probabilities P+, P- of belonging to the positive/negative class, and the Gaussian distribution of each feature dimension of the two sample classes of L1, obtaining the mapping from each feature dimension of a sample to its probability of belonging to the positive/negative class; from this mapping computing, for each sample of L2, the probabilities pj+, pj- that its j-th feature dimension belongs to the positive/negative class, and then the fitting rate pj_match of each feature dimension of the L2 samples;

(5) computing the sensitivity of each feature dimension of the L2 samples to suitability through the control-variable method and the classification accuracy P(j,k) of the corresponding SVM model on L2;

(6) from the per-dimension fitting rates of L2 obtained in step (4) and the per-dimension sensitivities obtained in step (5), computing the sample fitting rates of L2; the L2 sample fitting rates and the corresponding feature information form the new L2 sample information;

(7) fitting a regression on the new L2 sample information obtained in step (6) to obtain the image suitability prediction function model;

(8) for a SAR image to be assessed, extracting each feature dimension and preprocessing the data according to the methods of steps (1) and (2), and feeding the processed data to the sample suitability prediction model of step (7) to predict the fitting rate of the SAR image to be assessed.
Compared with the prior art, the technical effects of the invention are as follows:

In the prior art, no mature solution has been formed for assessing SAR image matching performance; most of the task is done manually, which makes scientific analysis difficult, and manually estimating the matching performance of a selected sub-area rarely meets the needs of practical application. The present invention trains a support vector regression model to establish the functional relation between the SAR image fitting rate and its feature information, predicts the suitability of SAR sub-areas, and extends the field to probabilistic prediction, so that the result is more accurate. It overcomes the defects of subjective human screening of sub-areas, improves stability, and improves the quality of the screened SAR matching sub-areas.

The invention provides a SAR image matching-performance assessment method based on support vector regression. The method extracts features from SAR images, trains an SVM model, classifies the learning samples with the SVM, and obtains the fitting rate of each sample from the classification results and the Gaussian distribution of the sample features; it then trains a support vector regression machine on the sample data carrying fitting rates and fits the regression prediction function model; finally, the SAR image to be assessed is evaluated with the function model to obtain its fitting rate. Based on SAR imaging characteristics and the structural, intensity, and texture features of sub-areas, the invention synthesizes several machine learning and pattern recognition methods to realize suitability prediction for SAR images, forming a systematic SAR image suitability prediction method; it extends matching-area selection to probabilistic prediction, efficiently improves on manual screening, improves the accuracy of screening SAR matching sub-areas, and is of great significance to the research and development of SAR matching-area selection.
Brief description of the drawings

Fig. 1 is the overall flowchart of the SVM-based SAR image suitability method of the invention;
Fig. 2 shows part of the SAR images of the embodiment of the invention;
Fig. 3 shows part of the SAR images to be assessed in the embodiment of the invention;
Fig. 4 shows the predicted suitability-probability results for the SAR images of the embodiment of the invention;
Fig. 5 shows the predicted fitting rates and the corresponding SAR image verification results of the embodiment of the invention, wherein:
Fig. 5(a) shows the SAR images with poor matching performance and their predicted fitting rates (0 ≤ p < 0.4);
Fig. 5(b) shows the SAR images with moderate matching performance and their predicted fitting rates (0.4 ≤ p < 0.7);
Fig. 5(c) shows the SAR images with strong matching performance and their predicted fitting rates (0.7 ≤ p ≤ 1).
Embodiment

To make the purpose, technical scheme, and advantages of the present invention clearer, the invention is further elaborated below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it. In addition, the technical features involved in the embodiments described below may be combined with each other as long as they do not conflict.

The invention provides a SAR image suitability prediction method based on support vector regression; its overall flow is shown in Fig. 1, and the detailed process of the method is as follows:
1 Learning stage

1.1 Data preparation stage

Multi-dimensional features of the SAR images are extracted, including intensity features such as uniformity, divergence, light/dark target density, and structure salience, together with texture-structure features, and each SAR image is labeled with a class attribute according to its matching result against the real-time image. The multi-dimensional features and the class attribute form the sample information of each SAR image, and the samples of all SAR images form the sample set. The invention chooses the two features of light/dark target density and structure salience, together with the class attribute, as the sample information.
1.1.1 Feature extraction

Fig. 2 shows part of the SAR training sample images of the embodiment. For every SAR image, multi-dimensional feature information such as the divergence, uniformity, light/dark target density, and structure salience is extracted:

Uniformity r: computed from the gray mean μ and the gray standard deviation σ of the image.

Divergence div: computed from σ1, the standard deviation of the set of pixels whose gray value is below the image gray mean μ, and, correspondingly, σ2, the standard deviation of the set of pixels whose gray value is above μ.

Light/dark target density: the proportion of light and dark target pixels, where a light target pixel is one whose gray value is above 2/3 of the full-image gray level and a dark target pixel one whose gray value is below 1/3 of it.

Structure salience: binary edge extraction is applied to the radar image, connected components are labeled, and components with few pixels are removed as noise; the feature is the ratio of the total number of labeled pixels to the mean of the image height and width.

After experiments, the invention finally chooses the two features of light/dark target density and structure salience.
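The four features above can be sketched as follows on plain gray-level arrays. This is a minimal illustration, not the patent's exact implementation: the closed forms of r and div are not reproduced in the text, so the coefficient of variation σ/μ and the sum σ1 + σ2 are assumed forms, the 2/3 and 1/3 thresholds are interpreted against the full-image gray range, and the binary edge extractor is approximated by a simple gradient threshold (connected-component denoising is omitted).

```python
import numpy as np

def uniformity(img):
    # Assumed form: coefficient of variation sigma / mu
    # (the patent only names the symbols mu and sigma).
    mu, sigma = img.mean(), img.std()
    return sigma / mu if mu > 0 else 0.0

def divergence(img):
    # sigma1 / sigma2: std of pixels below / above the gray mean mu;
    # their combination into div is an assumption.
    mu = img.mean()
    low, high = img[img < mu], img[img >= mu]
    s1 = low.std() if low.size else 0.0
    s2 = high.std() if high.size else 0.0
    return s1 + s2

def light_dark_density(img):
    # Proportion of pixels above 2/3 (light) or below 1/3 (dark)
    # of the full-image gray range (range-based reading is assumed).
    lo, hi = float(img.min()), float(img.max())
    span = hi - lo
    light = img > lo + 2.0 / 3.0 * span
    dark = img < lo + 1.0 / 3.0 * span
    return (light.sum() + dark.sum()) / img.size

def structure_salience(img, grad_thresh=30.0):
    # Crude binary edge map via gradient magnitude, then the ratio of
    # edge pixels to the mean of image height and width.
    gy, gx = np.gradient(img.astype(float))
    edges = np.hypot(gx, gy) > grad_thresh
    return edges.sum() / ((img.shape[0] + img.shape[1]) / 2.0)
```

In practice the two retained features, `light_dark_density` and `structure_salience`, would be computed per sub-area and stacked into the sample feature matrix.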
1.1.2 Positive/negative class attribute

The class attribute of each sample is assigned according to the result of matching the SAR image against the real-time image: if the image is suitable for matching, it is labeled +1, a suitable-area sample; if it is not suitable for matching, it is labeled -1, a non-suitable-area sample.

The two-dimensional feature information of each SAR image (light/dark target density and structure salience) and the positive/negative class attribute form the sample information of that image, and the samples of all SAR images form the sample set.
1.2 Data preprocessing

The sample-set features are preprocessed: the coupling relation between every two feature dimensions of the sample set is removed, and the features are normalized dimension by dimension.
1.2.1 Removing the coupling relation

Analyzing the pairwise distribution of the feature dimensions over the sample space shows that the linear relation between any two dimensions is not obvious, so the correlation can be regarded as nonlinear. Gram-Schmidt orthogonalization is therefore used to remove the coupling relation between features. Here the feature matrix Z = {Z1, Z2, ..., Zm} represents the m feature dimensions of the sample set.

If the vectors Z1, Z2, ..., Zm are linearly independent, take the reference vector b0 = (1, 0, ..., 0)^T and let

b1 = Z1 - ((Z1, b0)/(b0, b0)) b0
b2 = Z2 - ((Z2, b1)/(b1, b1)) b1 - ((Z2, b0)/(b0, b0)) b0
...
bm = Zm - ((Zm, b0)/(b0, b0)) b0 - ((Zm, b1)/(b1, b1)) b1 - ... - ((Zm, b(m-1))/(b(m-1), b(m-1))) b(m-1)

Then b1, b2, ..., bm are pairwise orthogonal and form an orthogonal vector set; normalizing them yields the matrix X, where X = {X1, X2, ..., Xm}. The process leading from matrix Z to matrix X is called Gram-Schmidt orthogonalization. After the original data are processed in this way, the coupling relation between the feature dimensions of the samples is removed and all dimensions share the same reference vector, which facilitates the later analysis of how a change in a single feature dimension influences matching accuracy.
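The decoupling step above can be sketched as follows, treating the feature dimensions as the columns of an (n, m) matrix and starting from the fixed reference vector b0 = (1, 0, ..., 0)^T; the column layout and function name are illustrative assumptions.

```python
import numpy as np

def gram_schmidt_decouple(Z):
    """Z: (n, m) matrix whose columns Z1..Zm are the feature vectors
    (assumed linearly independent, as the patent requires).
    Returns X with pairwise-orthogonal, unit-norm columns, built
    against the reference vector b0 = (1, 0, ..., 0)^T."""
    n, m = Z.shape
    basis = [np.eye(n)[:, 0]]              # b0
    for j in range(m):
        b = Z[:, j].astype(float)
        for prev in basis:                  # subtract projections onto b0..b_{j}
            b = b - (b @ prev) / (prev @ prev) * prev
        basis.append(b)
    B = np.column_stack(basis[1:])          # b1..bm
    return B / np.linalg.norm(B, axis=0)    # unitization -> X
```

Because every bj is also orthogonalized against the shared b0, all decoupled dimensions have the same reference vector, as the text requires.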
1.2.2 Per-dimension feature normalization

After decoupling, let the m-dimensional feature set of the n samples be the matrix whose entry xij denotes the j-th feature value of the i-th sample. The samples are normalized dimension by dimension: if the j-th feature xij of sample i becomes yij after normalization, the mapping x → y is

yij = ymin + (ymax - ymin)(xij - xmin)/(xmax - xmin)

where [ymin, ymax] is the chosen normalization interval, xmax = max(x1j, x2j, ..., xnj), and xmin = min(x1j, x2j, ..., xnj). The normalized sample-set feature matrix then has entries yij, where ynm denotes the m-th feature value of the n-th sample. With the normalization used in the invention, all data lie within [0, 1].
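The per-dimension min-max normalization can be sketched as follows, mapping each column of the sample matrix into [ymin, ymax] = [0, 1]; the guard for constant columns is an added safety measure not discussed in the text.

```python
import numpy as np

def normalize_columns(X, y_min=0.0, y_max=1.0):
    # Map each feature dimension (column) linearly so that its minimum
    # goes to y_min and its maximum to y_max.
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    span = np.where(x_max > x_min, x_max - x_min, 1.0)  # guard constant columns
    return y_min + (y_max - y_min) * (X - x_min) / span
```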
1.3 Probability that each feature dimension belongs to the positive/negative class

1.3.1 Training the SVM

Owing to the differences among samples, the preprocessed sample-set data are divided into learning sets L1 and L2. A support vector machine (Support Vector Machine, SVM) is trained on the L1 samples: the penalty coefficient c and Gaussian kernel parameter g of the SVM are optimized, the optimal c and g are chosen, and the SVM model is trained. The classifier is then tested on L2: the class attributes assigned by the trained SVM model are counted and, combined with the given positive/negative class attributes, yield the classification accuracies of the two classes, which are taken as the probabilities P+, P- that the positive/negative class-center features of L1 belong to the positive/negative class.

Suppose L2 contains n1 samples with positive attribute and n2 samples with negative attribute, and that after classification by the SVM classifier trained on L1, k1 positive-attribute samples are classified as negative and k2 negative-attribute samples are classified as positive. Define the class-center feature vector of the positive samples of L1 as the mean of the positive-sample features xi+, and that of the negative samples as the mean of the negative-sample features xi-. Then the probabilities that the positive/negative class-center features of L1 belong to the positive/negative class are

P+ = (n1 - k1)/n1, P- = (n2 - k2)/n2.
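The training step above can be sketched with scikit-learn's RBF-kernel SVC standing in for libsvm; the synthetic two-dimensional features, the grid values for c and g, and the class separations are all placeholders, not the patent's data.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic 2-D features: positive class around 0.7, negative around 0.3.
X1 = np.vstack([rng.normal(0.7, 0.1, (40, 2)), rng.normal(0.3, 0.1, (40, 2))])
y1 = np.array([+1] * 40 + [-1] * 40)           # learning set L1
X2 = np.vstack([rng.normal(0.7, 0.1, (20, 2)), rng.normal(0.3, 0.1, (20, 2))])
y2 = np.array([+1] * 20 + [-1] * 20)           # learning set L2

# Optimize penalty c and kernel parameter g, then train on L1.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10], "gamma": [0.1, 1, 10]}, cv=3)
grid.fit(X1, y1)
svm = grid.best_estimator_

# Per-class accuracy on L2 -> probabilities P+ / P- of the class centers.
pred = svm.predict(X2)
P_pos = np.mean(pred[y2 == +1] == +1)          # (n1 - k1) / n1
P_neg = np.mean(pred[y2 == -1] == -1)          # (n2 - k2) / n2
mu_pos = X1[y1 == +1].mean(axis=0)             # class-center features of L1
mu_neg = X1[y1 == -1].mean(axis=0)
```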
1.3.2 Probability pj+ that each feature dimension of an L2 sample belongs to the positive class

Let the class-center mean feature of the positive-attribute samples of L1 be μ+, with corresponding probability P+ of belonging to the positive class, where xi+ = (xi1, xi2, ..., xim) is a positive sample and the sample features obey the Gaussian distribution x+ ~ N(μ+, σ+²). Assume that the probability that a learning-set sample feature belongs to the positive class is a mapping of the feature value. According to the positive-sample Gaussian distribution, the probability pj+ that the j-th feature dimension of an L2 sample belongs to the positive class is determined by μj+, the mean of the j-th positive-sample feature dimension, σj+², its variance, and linear coefficients C1, C2, fixed by the following conditions:

- at x = μj+, the probability is P+, the probability that the positive class center belongs to the positive class;
- as x → +∞, the probability of belonging to the positive class is highest and is taken as 1;
- as x → -∞, the probability of belonging to the positive class is lowest and is taken as 0.

Splitting the mapping at x = μj+ and substituting these conditions fixes the coefficients on each branch, giving pj+.
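One way to realize the mapping described above is to rescale the Gaussian cumulative distribution function piecewise so that it passes through 0 at -∞, P+ at the class-center mean, and 1 at +∞. The use of the CDF as the underlying mapping is an assumption, since the formula itself is not reproduced in the source text; only the three boundary conditions are.

```python
import math

def prob_positive(x, mu, sigma, P_plus):
    # Piecewise-rescaled Gaussian CDF: 0 at -inf, P_plus at mu, 1 at +inf.
    # (Assumed realization of the patent's piecewise mapping.)
    phi = 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    if x < mu:
        return 2.0 * P_plus * phi                          # lower branch
    return 2.0 * (1.0 - P_plus) * phi + 2.0 * P_plus - 1.0  # upper branch
```

The negative-class probability pj- of section 1.3.3 would be the mirror image: highest (taken as 1) at -∞, P- at μj-, and lowest (taken as 0) at +∞.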
1.3.3 Probability pj- that each feature dimension of an L2 sample belongs to the negative class

Analogously to 1.3.2, let the class-center feature of the negative-attribute samples of L1 be μ-, with corresponding probability P- of belonging to the negative class, where xi- = (xi1, xi2, ..., xim) is a negative sample and the sample features obey the Gaussian distribution x- ~ N(μ-, σ-²). Assume that the probability that a learning-set sample feature belongs to the negative class is a mapping of the feature value. According to the negative-sample Gaussian distribution, the probability pj- that the j-th feature dimension of an L2 sample belongs to the negative class is determined by μj-, the class center of the j-th negative-sample feature dimension, σj-², its variance, and linear coefficients C1', C2', fixed by the following conditions:

- at x = μj-, the probability is P-, the probability that the negative class center belongs to the negative class;
- as x → -∞, the probability of belonging to the negative class is highest and is taken as 1;
- as x → +∞, the probability of belonging to the negative class is lowest and is taken as 0.

Splitting the mapping at x = μj- and substituting these conditions fixes the coefficients on each branch, giving pj-.
1.3.4 Fitting rate of each feature dimension of an L2 sample

This step yields the probabilities pj+, pj- that the j-th feature dimension of an L2 sample belongs to the positive/negative class, and from them the fitting rate pj_match of each feature dimension of the L2 samples.
1.4 Sensitivity of each feature dimension

The sensitivity of each feature dimension of the L2 samples to suitability is computed through the control-variable method and the classification accuracy P(j,k) of the corresponding SVM model on L2.

Considering that different sample features contribute differently to matching performance, the concept of "sensitivity" is introduced, as follows: change the j-th feature vector xj of the L2 sample matrix while keeping the other m-1 feature vectors unchanged. Setting xj = (k, k, ..., k)^T gives the sample matrix X' in which the j-th column is the constant k. As k varies over 0, 0.1, 0.2, ..., 1.0 in turn, the modified sample matrix is preprocessed and classified with the previously trained SVM model; if n(j,k) samples are classified correctly and the total number of samples is n, the classification accuracy is P(j,k) = n(j,k)/n. The maximum and minimum accuracies are recorded:

Pjmax = max{P(j,k) | k = 0, 0.1, ..., 1.0}
Pjmin = min{P(j,k) | k = 0, 0.1, ..., 1.0}

The resulting values, normalized over the m dimensions, serve as the sensitivity of each feature dimension to the fitting rate, i.e. its contribution weight to the final fitting rate, where Wj is the weight of the j-th feature dimension.
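The control-variable scan above can be sketched as follows. The final step, turning the m recorded values into weights Wj, is not spelled out in the text, so normalizing the accuracy spreads (Pjmax - Pjmin) to sum to 1 is used here as an assumption.

```python
import numpy as np

def feature_sensitivities(svm, X2, y2, k_values=np.arange(0.0, 1.01, 0.1)):
    """For each feature dimension j, fix column j of X2 to the constant k,
    reclassify with the trained SVM, and record the accuracy spread
    Pjmax - Pjmin; spreads normalized to sum to 1 stand in for Wj."""
    m = X2.shape[1]
    spreads = np.empty(m)
    for j in range(m):
        acc = []
        for k in k_values:
            Xp = X2.copy()
            Xp[:, j] = k                          # control-variable step
            acc.append(np.mean(svm.predict(Xp) == y2))
        spreads[j] = max(acc) - min(acc)          # Pjmax - Pjmin
    total = spreads.sum()
    return spreads / total if total > 0 else np.full(m, 1.0 / m)
```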
1.5 Computing the fitting rate of sample i

From the probabilities that each feature dimension of the L2 samples belongs to the positive/negative class (step 1.3) and the sensitivities of the feature dimensions (step 1.4), the fitting rate of each L2 sample is computed; the L2 sample fitting rates and the corresponding feature information form the new L2 sample information.

The matching probability of a sample in L2 is finally

P = Σ (j = 1..m) Wj · (a·pj+ + b·(1 - pj-)), with a + b = 1,

where m is the dimension of the sample feature vector, Wj the sensitivity of the matching performance to the j-th feature dimension, pj+ the probability that the j-th feature dimension belongs to the positive class, and pj- the probability that it belongs to the negative class. Since pj- → 0 when pj+ → 1, and pj- → 1 when pj+ → 0, a = b = 0.5 can be taken, so that

P = 0.5 Σ (j = 1..m) Wj · (pj+ + 1 - pj-).
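With a = b = 0.5 the combination above reduces to a sensitivity-weighted sum over the feature dimensions. A minimal sketch; the per-dimension form a·pj+ + b·(1 - pj-) is inferred from the surrounding steps rather than quoted from the source.

```python
import numpy as np

def sample_fitting_rate(p_pos, p_neg, W, a=0.5, b=0.5):
    """p_pos[j], p_neg[j]: per-dimension probabilities of belonging to the
    positive / negative class; W[j]: sensitivity weights summing to 1."""
    assert abs(a + b - 1.0) < 1e-12          # constraint a + b = 1
    p_match = a * np.asarray(p_pos) + b * (1.0 - np.asarray(p_neg))
    return float(np.dot(W, p_match))
```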
1.6 Regression function model

The image suitability prediction function model is obtained by fitting a regression on the new L2 sample-set information from step 1.5.

For the learning samples, the features xi of each L2 sample and the fitting rate Pi computed in step 1.5 form the new L2 samples, whose attribute information can be expressed as (xi, Pi). The SVM parameters c and g are optimized with the method of step 1.3.1, choosing the c and g that make the classifier performance optimal. The libsvm library of Professor Chih-Jen Lin of Taiwan is used here to build the sample fitting-rate prediction model: training an SVM model on the new L2 samples obtained above yields the suitability prediction model F(x) of the L2 samples.

The mean squared error (mean squared error, MSE) and the squared correlation coefficient (squared correlation coefficient, r²) between the predicted and input fitting rates are used to verify the performance of the prediction model:

MSE = (1/l) Σ (i = 1..l) (f(xi) - yi)²

where yi is the input fitting rate of training sample i, f(xi) the fitting rate predicted for training sample i, and l the number of training samples.
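The regression step can be sketched with scikit-learn's SVR (whose RBF-kernel formulation follows libsvm) on synthetic (xi, Pi) pairs, scored with the MSE and squared correlation coefficient defined above; the data, the underlying function, and the parameter values are stand-ins.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.uniform(size=(80, 2))                            # stand-in features
P = 0.5 * (X[:, 0] + X[:, 1]) + rng.normal(0, 0.02, 80)  # stand-in fitting rates

# F(x): epsilon-SVR with RBF kernel; c and g here are placeholder values,
# not the result of the grid search of step 1.3.1.
model = SVR(kernel="rbf", C=10.0, gamma=1.0, epsilon=0.01).fit(X, P)
f = model.predict(X)

mse = np.mean((f - P) ** 2)                # (1/l) sum (f(xi) - yi)^2
r2 = np.corrcoef(f, P)[0, 1] ** 2          # squared correlation coefficient
```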
2 Prediction stage

The fitting rates of the SAR sub-images to be assessed are predicted with the regression function model.

Fig. 3 shows part of the SAR images to be assessed in the embodiment. For each SAR image to be assessed, the light/dark target density and structure-salience features are extracted according to the method of learning-stage step 1.1, the data are preprocessed according to the method of step 1.2, and the processed data are fed to the sample fitting-rate regression prediction model of step 1.6 to predict the fitting rate of the SAR image to be assessed, i.e.:

P_MatchProbability = F(x1, x2, ..., xm)

where P_MatchProbability is the fitting rate of the sample and xm its m-th feature.

Fig. 4 shows the fitting-rate prediction results for the SAR images to be assessed in the embodiment: samples 1-12 are SAR images with poor matching performance (0 ≤ p < 0.4), samples 13-20 SAR images with moderate matching performance (0.4 ≤ p < 0.7), and samples 21-31 SAR images with strong matching performance (0.7 ≤ p ≤ 1). Fig. 5 shows the predicted fitting rates together with the corresponding SAR image verification results: Fig. 5(a) shows the poor-performance images and their predicted fitting rates (0 ≤ p < 0.4), i.e. samples 1-12; Fig. 5(b) the moderate-performance images (0.4 ≤ p < 0.7), i.e. samples 13-20; and Fig. 5(c) the strong-performance images (0.7 ≤ p ≤ 1), i.e. samples 21-31.
Claims (8)

1. A SAR image suitability prediction method based on support vector regression, characterized in that the method comprises:
(1) extracting the light/dark target density and structure-salience features of the SAR training images; the feature set and the given positive/negative class attribute form the sample information of each SAR image, and the sample information of all SAR training images forms the learning set;
(2) preprocessing the learning-set features, i.e. removing the coupling relation between every two feature dimensions of the learning-set data, and normalizing the decoupled features dimension by dimension;
(3) dividing the preprocessed learning set into learning sets L1 and L2; training a support vector machine on the samples of L1 to obtain the SVM classifier model of the positive/negative attribute samples and the Gaussian distribution characteristics of the features of the two sample classes; testing the classifier on L2, counting the class attribute of each sample after classification by the SVM model, and, according to the given positive/negative class attributes, computing the probabilities P+, P- that the positive/negative class-center features of L1 belong to the positive/negative class;
(4) using the positive/negative class-center features of L1, their probabilities of belonging to the positive/negative class, and the Gaussian distribution of each feature dimension of the two sample classes of L1, obtaining the mapping from each feature dimension of a sample to its probability of belonging to the positive/negative class; from this mapping computing, for each L2 sample, the probabilities pj+, pj- that its j-th feature dimension belongs to the positive/negative class, and then the fitting rate pj_match of each feature dimension of the L2 samples, where j is the index of the feature dimension, ranging from 1 to m, and m is the dimension of the sample feature vector;
(5) computing the sensitivity of each feature dimension of L2 to suitability through the control-variable method and the classification accuracy P(j,k) of the corresponding SVM model on L2, where k is the value assigned to every element of the j-th feature vector, varying over 0, 0.1, 0.2, ..., 1.0 in turn;
(6) from the per-dimension fitting rates of L2 obtained in step (4) and the per-dimension sensitivities obtained in step (5), computing the fitting rates of the L2 samples; the L2 sample fitting rates and the corresponding feature information form the new L2 sample information;
(7) fitting a regression on the new L2 sample information obtained in step (6) to obtain the image suitability prediction function model;
(8) for a SAR image to be assessed, extracting the corresponding features and preprocessing the data according to the methods of steps (1) and (2), and feeding the processed data to the suitability prediction model of step (7) to predict the fitting rate of the SAR image to be assessed.
2. The method according to claim 1, characterized in that the coupling relation between every two feature dimensions of the learning-set data is removed in step (2) as follows:
Let the feature matrix Z = {Z1, Z2, ..., Zm} represent the m feature dimensions of the learning set, take the reference vector b0 = (1, 0, ..., 0)^T, and let
b1 = Z1 - ((Z1, b0)/(b0, b0)) b0
b2 = Z2 - ((Z2, b1)/(b1, b1)) b1 - ((Z2, b0)/(b0, b0)) b0
...
bm = Zm - ((Zm, b0)/(b0, b0)) b0 - ((Zm, b1)/(b1, b1)) b1 - ((Zm, b2)/(b2, b2)) b2 - ... - ((Zm, b(m-1))/(b(m-1), b(m-1))) b(m-1)
Then $b_1, b_2, \ldots, b_m$ are pairwise orthogonal and form an orthogonal vector set; normalizing each to unit length gives the matrix $X = \{X_1, X_2, \ldots, X_m\}$, where $n$ is the total number of samples:
$$X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1m} \\ x_{21} & x_{22} & \cdots & x_{2m} \\ \vdots & & & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{nm} \end{bmatrix}$$
$$X_1 = \frac{b_1}{\|b_1\|}, \quad X_2 = \frac{b_2}{\|b_2\|}, \quad \ldots, \quad X_m = \frac{b_m}{\|b_m\|}.$$
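The orthogonalization and unit-normalization of claim 2 can be sketched in NumPy as follows (the function name is ours; the loop uses the numerically equivalent running-residual form of Gram–Schmidt, which yields the same $b_j$ as the formulas above because each subtracted projection is orthogonal to the remaining basis vectors):

```python
import numpy as np

def gram_schmidt_decorrelate(Z):
    """Decorrelate the m feature columns of Z (n x m): orthogonalize each
    column against b0 = (1, 0, ..., 0)^T and all previously produced
    vectors, then scale each result to unit norm (claim 2)."""
    n, m = Z.shape
    b0 = np.zeros(n)
    b0[0] = 1.0  # reference vector b0 = (1, 0, ..., 0)^T
    basis = [b0]
    X = np.empty((n, m), dtype=float)
    for j in range(m):
        b = Z[:, j].astype(float).copy()
        for prev in basis:
            # subtract the projection of b onto each earlier vector
            b -= (b @ prev) / (prev @ prev) * prev
        basis.append(b)
        X[:, j] = b / np.linalg.norm(b)
    return X
```

The resulting columns are pairwise orthonormal, and each is orthogonal to the reference vector $b_0$ (so their first components vanish).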
3. The method as claimed in claim 1 or 2, wherein in step (2) each feature dimension is normalized after the coupling is removed, specifically:
Let the $j$-th feature $x_{ij}$ of sample $i$ become $y_{ij}$ after normalization; the mapping $x \to y$ is:
$$y_{ij} = (y_{\max} - y_{\min}) \frac{x_{ij} - x_{\min}}{x_{\max} - x_{\min}} + y_{\min}$$
where $[y_{\min}, y_{\max}]$ is the chosen normalization interval, $x_{\max} = \max(x_{1j}, x_{2j}, \ldots, x_{nj})^T$ and $x_{\min} = \min(x_{1j}, x_{2j}, \ldots, x_{nj})^T$. The normalized feature matrix of the sample set is then
$$Y = \begin{bmatrix} y_{11} & y_{12} & \cdots & y_{1m} \\ y_{21} & y_{22} & \cdots & y_{2m} \\ \vdots & & & \vdots \\ y_{n1} & y_{n2} & \cdots & y_{nm} \end{bmatrix}$$
where $y_{nm}$ denotes the $m$-th feature value of the $n$-th sample after normalization.
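The per-column min–max mapping of claim 3 is a one-liner in NumPy; a minimal sketch (function name ours):

```python
import numpy as np

def minmax_normalize(X, y_min=0.0, y_max=1.0):
    """Map each feature column of X (n x m) into [y_min, y_max] using the
    per-column minimum and maximum, as in claim 3."""
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    return (y_max - y_min) * (X - x_min) / (x_max - x_min) + y_min
```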
4. The method as claimed in claim 1 or 2, wherein step (3) computes the probabilities $P_+$, $P_-$ that the positive/negative class-centre features of learning set $L_1$ belong to the positive/negative class, specifically:
Let learning set $L_2$ contain $n_1$ positively labelled samples and $n_2$ negatively labelled samples. After classification by the SVM classifier trained on learning set $L_1$, $k_1$ positive samples are assigned to the negative class and $k_2$ negative samples are assigned to the positive class. Define the positive class-centre feature of $L_1$ as the mean of its positive-sample features $x_i^+$, and the negative class-centre feature as the mean of its negative-sample features $x_i^-$, where $n$ denotes the total number of samples. The probabilities $P_+$, $P_-$ that the positive/negative class centre of $L_1$ belongs to the positive/negative class are then:
$$P_+ = \frac{n_1 - k_1}{n_1}, \qquad P_- = \frac{n_2 - k_2}{n_2}.$$
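These class-centre probabilities are just the per-class recall of the SVM on $L_2$; a trivial sketch (function name ours):

```python
def class_centre_probabilities(n1, k1, n2, k2):
    """Probability that the positive/negative class centre of L1 belongs
    to its own class, from the SVM confusion counts of claim 4:
    n1 positives with k1 misclassified, n2 negatives with k2 misclassified."""
    P_pos = (n1 - k1) / n1
    P_neg = (n2 - k2) / n2
    return P_pos, P_neg
```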
5. The method as claimed in claim 1 or 2, wherein step (4) obtains the fitting rate of each feature dimension of the learning-set $L_2$ samples as follows:
(4.1) Probability $p_j^+$ that each feature dimension of an $L_2$ sample belongs to the positive class.
Let the mean class-centre feature of the positive samples of $L_1$ be $\mu^+$, with associated positive-class probability $P_+$, where $x_i^+ = (x_{i1}, x_{i2}, \ldots, x_{ij})$ is a positive sample and the sample features follow the Gaussian distribution $x^+ \sim N(\mu^+, \sigma^{+2})$. Assuming that a learning-set sample feature and its probability of belonging to the positive class are related through the positive-sample Gaussian distribution, the probability $p_j^+$ that the $j$-th feature of an $L_2$ sample belongs to the positive class is:
$$p_j^+ = C_1 \cdot \frac{1}{\sqrt{2\pi}\,\sigma_j^+} \int_{-\infty}^{x} \exp\!\left(-\frac{(x - \mu_j^+)^2}{2\sigma_j^{+2}}\right) dx + C_2$$
where $\mu_j^+$ is the mean of the $j$-th positive-sample feature, $\sigma_j^{+2}$ is its variance, and $C_1$, $C_2$ are linear coefficients.
At $x = \mu_j^+$, the value corresponds to the probability $P_+$ that the positive class centre belongs to the positive class;
as $x \to +\infty$, the probability of belonging to the positive class is highest and is taken to be 1;
as $x \to -\infty$, the probability of belonging to the positive class is lowest and is taken to be 0.
Splitting the function at $x = \mu_j^+$ and substituting into the formula above gives
$$p_j^+ = \begin{cases} 2P_+ \cdot \dfrac{1}{\sqrt{2\pi}\,\sigma_j^+} \displaystyle\int_{-\infty}^{x} \exp\!\left(-\dfrac{(x - \mu_j^+)^2}{2\sigma_j^{+2}}\right) dx & x \le \mu_j^+ \\[2ex] 2(1 - P_+) \cdot \dfrac{1}{\sqrt{2\pi}\,\sigma_j^+} \displaystyle\int_{-\infty}^{x} \exp\!\left(-\dfrac{(x - \mu_j^+)^2}{2\sigma_j^{+2}}\right) dx + 2P_+ - 1 & x \ge \mu_j^+ \end{cases}$$
(4.2) Probability $p_j^-$ that each feature dimension of an $L_2$ sample belongs to the negative class.
Let the class-centre feature of the negative samples of $L_1$ be $\mu^-$, with associated negative-class probability $P_-$, where $x_i^- = (x_{i1}, x_{i2}, \ldots, x_{ij})$ is a negative sample and the sample features follow the Gaussian distribution $x^- \sim N(\mu^-, \sigma^{-2})$. Assuming that a learning-set sample feature and its probability of belonging to the negative class are related through the negative-sample Gaussian distribution, the probability $p_j^-$ that the $j$-th feature of an $L_2$ sample belongs to the negative class is:
$$p_j^- = C_1' \left(1 - \frac{1}{\sqrt{2\pi}\,\sigma_j^-} \int_{-\infty}^{x} \exp\!\left(-\frac{(x - \mu_j^-)^2}{2\sigma_j^{-2}}\right) dx \right) + C_2'$$
where $\mu_j^-$ is the mean of the $j$-th negative-sample feature, $\sigma_j^{-2}$ is its variance, and $C_1'$, $C_2'$ are linear coefficients.
At $x = \mu_j^-$, the value corresponds to the probability $P_-$ that the negative class centre belongs to the negative class;
as $x \to -\infty$, the probability of belonging to the negative class is highest and is taken to be 1;
as $x \to +\infty$, the probability of belonging to the negative class is lowest and is taken to be 0.
Splitting the function at $x = \mu_j^-$ and substituting into the formula above gives
$$p_j^- = \begin{cases} 2(1 - P_-) \left(1 - \dfrac{1}{\sqrt{2\pi}\,\sigma_j^-} \displaystyle\int_{-\infty}^{x} \exp\!\left(-\dfrac{(x - \mu_j^-)^2}{2\sigma_j^{-2}}\right) dx \right) + 2P_- - 1 & x \le \mu_j^- \\[2ex] 2P_- \left(1 - \dfrac{1}{\sqrt{2\pi}\,\sigma_j^-} \displaystyle\int_{-\infty}^{x} \exp\!\left(-\dfrac{(x - \mu_j^-)^2}{2\sigma_j^{-2}}\right) dx \right) & x \ge \mu_j^- \end{cases}$$
(4.3) Fitting rate $p_{j\_match}$ of each feature dimension of an $L_2$ sample:
$$p_{j\_match} = p_j^+ - p_j^-$$
This step yields the probabilities $p_j^+$, $p_j^-$ that the $j$-th feature of an $L_2$ sample belongs to the positive/negative class, and the fitting rate $p_{j\_match}$ of each feature dimension of the $L_2$ samples.
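The piecewise formulas of steps (4.1)–(4.3) rescale a Gaussian CDF so it passes through 0 at $-\infty$, $P_+$ (or $P_-$) at the class-centre mean, and 1 (or 0) at $+\infty$. A minimal sketch using the standard-library error function (function names ours):

```python
import math

def phi(x, mu, sigma):
    """CDF of N(mu, sigma^2) evaluated at x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def p_positive(x, mu_pos, sigma_pos, P_pos):
    """Step (4.1): piecewise rescaling of the positive-class Gaussian CDF
    so that p(-inf)=0, p(mu)=P_pos, p(+inf)=1."""
    c = phi(x, mu_pos, sigma_pos)
    if x <= mu_pos:
        return 2.0 * P_pos * c
    return 2.0 * (1.0 - P_pos) * c + 2.0 * P_pos - 1.0

def p_negative(x, mu_neg, sigma_neg, P_neg):
    """Step (4.2): mirror construction for the negative class, so that
    p(-inf)=1, p(mu)=P_neg, p(+inf)=0."""
    s = 1.0 - phi(x, mu_neg, sigma_neg)
    if x <= mu_neg:
        return 2.0 * (1.0 - P_neg) * s + 2.0 * P_neg - 1.0
    return 2.0 * P_neg * s

def feature_fitting_rate(x, mu_pos, sigma_pos, P_pos, mu_neg, sigma_neg, P_neg):
    """Step (4.3): p_j_match = p_j+ - p_j-."""
    return (p_positive(x, mu_pos, sigma_pos, P_pos)
            - p_negative(x, mu_neg, sigma_neg, P_neg))
```

Both branches of each piecewise function agree at the class-centre mean (value $P_+$ or $P_-$), so the mappings are continuous.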
6. The method as claimed in claim 1 or 2, wherein step (5) computes the sensitivity of each feature dimension of the learning-set $L_2$ samples to suitability as follows:
Vary the $j$-th feature vector $x_j$ of the learning-set $L_2$ sample matrix while keeping the remaining $m-1$ feature vectors fixed. Setting $x_j = (k, k, k, \ldots, k)^T$ with $k$ stepping through $0, 0.1, 0.2, \ldots, 1.0$ in turn, the sample matrix becomes
$$X' = \begin{bmatrix} x_{11} & x_{12} & \cdots & k & \cdots & x_{1m} \\ x_{21} & x_{22} & \cdots & k & \cdots & x_{2m} \\ \vdots & \vdots & & \vdots & & \vdots \\ x_{n1} & x_{n2} & \cdots & k & \cdots & x_{nm} \end{bmatrix}$$
After preprocessing, $X'$ is classified with the previously trained SVM model. If $n_{(j,k)}$ samples are classified correctly out of $n$ samples in total, the classification accuracy is $P_{(j,k)} = n_{(j,k)} / n$. Record its maximum $P_{jmax}$ and minimum $P_{jmin}$:
$$P_{jmax} = \max\{P_{(j,k)} \mid k = 0, 0.1, \ldots, 1.0\}$$
$$P_{jmin} = \min\{P_{(j,k)} \mid k = 0, 0.1, \ldots, 1.0\}$$
The values normalized over the $m$ dimensions are taken as the sensitivities of the corresponding feature-dimension fitting rates:
$$W_j = \frac{P_{jmax} - P_{jmin}}{\sum_{j=1}^{m} \left( P_{jmax} - P_{jmin} \right)}$$
$$\sum_{j=1}^{m} W_j = 1$$
where $W_j$ is the sensitivity of the fitting rate of the $j$-th feature vector.
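The sensitivity weights are simply the per-feature accuracy ranges normalized to sum to one; a minimal sketch (function name ours):

```python
def sensitivity_weights(P_max, P_min):
    """Claim 6: normalize the per-feature accuracy ranges
    (P_jmax - P_jmin) so the resulting weights W_j sum to 1."""
    ranges = [hi - lo for hi, lo in zip(P_max, P_min)]
    total = sum(ranges)
    return [r / total for r in ranges]
```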
7. The method as claimed in claim 1 or 2, wherein step (6) obtains the fitting rate $P_i$ of the $i$-th sample in learning set $L_2$ as follows:
$$P_i = a \sum_{j=1}^{m} W_j \left( p_j^+ - p_j^- \right) + b$$
$$a + b = 1$$
where $m$ is the dimension of the sample features, $W_j$ is the sensitivity of the $j$-th feature-vector fitting rate, $p_j^+$ is the probability that the $j$-th feature of an $L_2$ sample belongs to the positive class, and $p_j^-$ is the probability that it belongs to the negative class.
When $p_j^+ \to 1$, then $p_j^- \to 0$ and $P_i \to 1$;
when $p_j^+ \to 0$, then $p_j^- \to 1$ and $P_i \to 0$.
This yields $a = 0.5$, $b = 0.5$, so
$$P_i = 0.5 \sum_{j=1}^{m} W_j \left( p_j^+ - p_j^- \right) + 0.5.$$
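The final weighted combination of claim 7 maps the signed per-feature evidence into a fitting rate in $[0, 1]$; a minimal sketch (function name ours):

```python
def sample_fitting_rate(W, p_pos, p_neg):
    """Claim 7: P_i = 0.5 * sum_j W_j * (p_j+ - p_j-) + 0.5.
    W, p_pos and p_neg are equal-length sequences over the m features."""
    return 0.5 * sum(w * (pp - pn) for w, pp, pn in zip(W, p_pos, p_neg)) + 0.5
```

Because the weights sum to 1 and each difference lies in $[-1, 1]$, the result lies in $[0, 1]$, reaching 1 when every feature is certainly positive and 0 when every feature is certainly negative.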
8. The method as claimed in claim 1 or 2, wherein the features extracted in step (1) are:
Light/dark target density: the proportion of light and dark target pixels, where a bright target pixel is one whose gray value exceeds 2/3 of the full-image gray value of the SAR image, and a dark target pixel is one whose gray value is below 1/3 of the full-image gray value;
Structural saliency: binary edge extraction is performed on the radar image and connected-component labelling is applied; after removing noise components labelled with only a few connected pixels, the feature is the ratio of the total number of labelled pixels to the mean of the image width and height.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510075677.6A CN104636758B (en) | 2015-02-12 | 2015-02-12 | A kind of SAR image suitability Forecasting Methodology based on support vector regression |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104636758A CN104636758A (en) | 2015-05-20 |
CN104636758B true CN104636758B (en) | 2018-02-16 |
Family
ID=53215486
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106054189B (en) * | 2016-07-17 | 2018-06-05 | 西安电子科技大学 | Radar target identification method based on dpKMMDP models |
CN110246134A (en) * | 2019-06-24 | 2019-09-17 | 株洲时代电子技术有限公司 | A kind of rail defects and failures sorter |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102073873A (en) * | 2011-01-28 | 2011-05-25 | 华中科技大学 | Method for selecting SAR (spaceborne synthetic aperture radar) scene matching area on basis of SVM (support vector machine) |
CN102663436A (en) * | 2012-05-03 | 2012-09-12 | 武汉大学 | Self-adapting characteristic extracting method for optical texture images and synthetic aperture radar (SAR) images |
CN102902979A (en) * | 2012-09-13 | 2013-01-30 | 电子科技大学 | Method for automatic target recognition of synthetic aperture radar (SAR) |
CN103942749A (en) * | 2014-02-24 | 2014-07-23 | 西安电子科技大学 | Hyperspectral ground feature classification method based on modified cluster hypothesis and semi-supervised extreme learning machine |
Non-Patent Citations (2)
Title |
---|
Research on Scene Infrared Image Simulation;Lamei Zou 等;《International Journal of Digital Content Technology and its Applications》;20120331;第6卷(第4期);第77-86页 * |
Real-Aperture Radar Scene Matching and Localization Method Based on Detection and Recognition; Yang Weidong et al.; Journal of Huazhong University of Science and Technology; Feb. 28, 2005; Vol. 33, No. 2, pp. 25-27 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||