CN104636758B - A SAR image suitability prediction method based on support vector regression - Google Patents


Info

Publication number: CN104636758B
Authority: CN (China)
Legal status: Active
Application number: CN201510075677.6A
Other languages: Chinese (zh)
Other versions: CN104636758A (en)
Inventors: 杨卫东, 王梓鉴, 曹治国, 邹腊梅, 桑农, 刘婧婷, 张洁, 刘晓
Current Assignee: Huazhong University of Science and Technology
Original Assignee: Huazhong University of Science and Technology
Application filed by Huazhong University of Science and Technology
Priority to CN201510075677.6A
Publication of CN104636758A
Application granted; publication of CN104636758B

Landscapes

  • Image Analysis (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)

Abstract

The invention discloses a radar image suitability prediction method based on support vector regression. The method includes: in the learning stage, extracting multi-dimensional features of SAR images to form a learning set; after preprocessing the sample features, dividing the learning set into learning sets L1 and L2; training a support vector machine on L1 and using the resulting SVM model to classify L2; computing the suitability rate of each sample from the classification accuracy, the sample features, and the distances to the class centroids; and then fitting a regression on the learning-set features and their corresponding suitability rates to obtain a suitability prediction function model. In the prediction stage, the corresponding features of a SAR image to be evaluated are extracted as test sample data, preprocessed, and fed into the suitability prediction function model to compute the suitability rate of the image. Based on the intensity and texture-structure features of SAR images, the invention establishes the functional relationship between the suitability rate of a SAR image and its feature information; experiments verify that the method accurately evaluates the matching performance of SAR images.

Description

A SAR Image Suitability Prediction Method Based on Support Vector Regression

Technical Field

The invention belongs to the technical fields of machine learning, pattern recognition, and template matching, and in particular relates to a method for predicting the suitability of Synthetic Aperture Radar (SAR) images based on support vector regression. Taking into account SAR imaging characteristics as well as intensity, structure, and texture features, the method predicts the matching suitability of SAR images, and the suitability predictions obtained by this method are accurate and effective.

Background Art

The selection of SAR scene-matching subregions is the core technology of scene matching: the matching and localization performance (i.e., the suitability) of candidate SAR matching regions is analyzed, evaluated, and predicted in order to determine whether a selected matching subregion is suitable for matching. To date there is no mature scheme for selecting matching regions. Most of the work is done manually, which makes scientific analysis difficult; manually estimating the suitability of a selected matching subregion can hardly meet the needs of practical applications. Moreover, no existing method provides a quantitative, probabilistic prediction for matching-region selection.

Scholars at home and abroad have studied the selection of matching subregions extensively. The main proposed methods select scene-matching subregions using image-descriptive feature parameters such as subregion similarity, correlation length, gray-level variance, cross-correlation peak features, information entropy, texture energy ratio, and multi-resolution self-similarity measures. However, these methods consider only the influence of a single factor on matching performance while holding the other indicators fixed in experiments, and they ignore the correlations among these factors. As a result, the selection criteria for scene-matching subregions adapt poorly and are not robust to interference.

In the published literature, no mature solution for predicting the matching suitability of SAR images has yet been formed and applied in engineering practice, nor is there any scheme that predicts SAR image suitability quantitatively and probabilistically.

Summary of the Invention

Aiming at the problem of evaluating the matching suitability of SAR images in SAR scene-matching systems, the present invention proposes a SAR image suitability prediction method based on a support vector regression machine, which specifically includes:

(1) Extract the bright/dark target density and structural salience intensity features of the SAR training images. The feature set together with the given positive/negative class attribute forms the sample information corresponding to each SAR image, and the sample information of all SAR training images forms the learning set;

(2) Preprocess the feature data of the learning samples, i.e., remove the coupling between every pair of feature dimensions of the sample data in the learning set, and normalize the decoupled features dimension by dimension;

(3) Divide the preprocessed learning set into learning sets L1 and L2. Train a support vector machine on the samples of L1 to obtain an SVM classifier model for the positive/negative attribute samples as well as the Gaussian distribution characteristics of the positive/negative sample features. Test the classifier on the samples of L2, record the class attribute of each sample after classification by the SVM classifier model, and, according to the given positive/negative class attribute information, compute the probabilities P+ and P− that the positive/negative class-centroid features of L1 belong to the positive/negative class;

(4) Using the positive/negative class-centroid features of L1 and their corresponding probabilities of belonging to the positive/negative class, together with the Gaussian distribution characteristics of every feature dimension of the positive/negative samples of L1, obtain the mapping from each feature dimension of a learning-set sample to its probability of belonging to the positive/negative class. From this, compute for each L2 sample the probabilities pj+ and pj− that its j-th feature belongs to the positive/negative class, and then compute the suitability rate pj_match of each feature dimension of the L2 samples;

(5) Using the control-variable method and the classification accuracy P(j,k) of the corresponding SVM model on learning set L2, compute the sensitivity of each feature dimension of L2 to the suitability;

(6) From the per-dimension suitability rates of the L2 features obtained in step (4) and the per-dimension sensitivities of the L2 samples obtained in step (5), compute the suitability rate of the L2 samples; the suitability rate of each L2 sample and its per-dimension feature information form the new sample information of learning set L2;

(7) From the new sample information of learning set L2 obtained in step (6), fit a regression to obtain the image suitability prediction function model;

(8) For a SAR image to be evaluated, extract its per-dimension feature information and preprocess the data following the methods of steps (1) and (2); the processed data are passed through the sample suitability prediction model of step (7) to predict the suitability rate of the SAR image to be evaluated.

Compared with the prior art, the technical effects of the present invention are as follows:

In the prior art, no mature solution for evaluating the matching suitability of SAR images has been formed. Most of the work is done manually, which makes scientific analysis difficult, and manual estimation of the matching performance of a selected matching subregion can hardly meet the needs of practical applications. The present invention trains a support vector regression model that establishes the functional relationship between the suitability rate of a SAR image and its feature information, predicts the suitability of SAR subregions, and extends the field to probabilistic prediction, making the results more accurate. It overcomes the drawbacks of subjective manual screening of subregions, improves stability, and improves the quality of the selected SAR matching subregions.

The invention provides a SAR image matching suitability evaluation method based on a support vector regression machine. The method extracts features from SAR images and trains an SVM model; the SVM then classifies the learning samples, and the suitability rate of each sample is obtained from the classification results and the Gaussian distribution characteristics of the samples. A support vector regression machine is then trained on the sample data labeled with suitability rates to fit a regression prediction function model. Finally, the function model evaluates a SAR image under test and yields its suitability rate. Based on SAR imaging characteristics and the structural-intensity texture features of subregions, the invention combines several machine-learning and pattern-recognition techniques to predict the suitability of SAR images, forms a systematic SAR image suitability prediction method, and extends matching-region selection to probabilistic prediction. It effectively improves on manual screening, raises the accuracy of selecting SAR matching subregions, and is of great significance to research on SAR matching-subregion selection.

Brief Description of the Drawings

Fig. 1 is the overall flowchart of the SVM-based SAR image suitability method of the present invention;

Fig. 2 shows some of the SAR images in the embodiment of the present invention;

Fig. 3 shows some of the SAR images to be evaluated in the embodiment of the present invention;

Fig. 4 shows the predicted suitability-probability results for the SAR images in the embodiment of the present invention;

Fig. 5 shows the predicted suitability rates together with the verification results on the corresponding SAR images in the embodiment of the present invention, wherein:

Fig. 5(a) shows SAR images with poor matching performance and their predicted suitability rates (0 ≤ p < 0.4);

Fig. 5(b) shows SAR images with moderate matching performance and their predicted suitability rates (0.4 ≤ p < 0.7);

Fig. 5(c) shows SAR images with strong matching performance and their predicted suitability rates (0.7 ≤ p ≤ 1).

Detailed Description

In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it. In addition, the technical features involved in the embodiments of the present invention described below may be combined with one another as long as they do not conflict.

The invention provides a SAR image suitability prediction method based on a support vector regression machine; its overall flow is shown in Fig. 1, and the specific procedure of the method is as follows:

1 Learning stage

1.1 Data preparation stage

Extract multi-dimensional intensity and texture-structure features of each SAR image, such as uniformity, divergence, bright/dark target density, and structural salience intensity, and label each image with its class attribute according to the result of matching the SAR image against the real-time image. The multi-dimensional features and the class attribute form the sample information corresponding to each SAR image, and the samples of all SAR images form the sample set. The present invention selects the two features bright/dark target density and structural salience intensity, together with the class attribute, as the sample information.

1.1.1 Feature extraction

Fig. 2 shows some of the SAR training sample images in the embodiment of the present invention. From each SAR image, multi-dimensional feature information such as divergence, uniformity, bright/dark target density, and structural salience intensity is extracted.

Uniformity r: computed from the gray-level statistics of the image, where μ is the gray-level mean and σ is the gray-level standard deviation.

Divergence div: computed from two partial standard deviations, where σ1 denotes the standard deviation of the set of pixels whose gray value is below the gray-level mean μ of the SAR image, and σ2 correspondingly denotes the standard deviation of the set of pixels whose gray value is above μ.

Bright/dark target density: the proportion of bright and dark target pixels. A bright target pixel is one whose gray value is greater than 2/3 of the full-image gray level of the SAR image; a dark target pixel is one whose gray value is less than 1/3 of the full-image gray level.

Structural salience intensity: perform binary edge extraction on the radar image and apply connected-component labeling; after removing as noise the connected components with few pixels, take the ratio of the total number of labeled pixels to the mean of the image width and height.

Through experiments, the present invention finally selects the two features bright/dark target density and structural salience intensity.
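As an illustrative sketch of the feature extraction in section 1.1.1, the bright/dark target density can be computed as below; reading "full-image gray level" as the image maximum is an assumption, and for uniformity and divergence only their stated ingredients (μ, σ, σ1, σ2) are computed, since the combining formulas are not reproduced in the text.

```python
from statistics import mean, pstdev

def bright_dark_density(image):
    """Proportion of bright (> 2/3 of full gray level) and dark (< 1/3) pixels."""
    pixels = [p for row in image for p in row]
    full = max(pixels)  # assumption: "full-image gray level" = image maximum
    bright = sum(1 for p in pixels if p > full * 2 / 3)
    dark = sum(1 for p in pixels if p < full * 1 / 3)
    return (bright + dark) / len(pixels)

def intensity_ingredients(image):
    """Gray-level mean mu, std sigma, and the partial stds sigma_1 / sigma_2
    (pixels below / above the mean) used by uniformity and divergence."""
    pixels = [p for row in image for p in row]
    mu = mean(pixels)
    low = [p for p in pixels if p < mu]
    high = [p for p in pixels if p > mu]
    sigma = pstdev(pixels)
    sigma1 = pstdev(low) if len(low) > 1 else 0.0
    sigma2 = pstdev(high) if len(high) > 1 else 0.0
    return mu, sigma, sigma1, sigma2

img = [[10, 20, 240], [15, 100, 230], [120, 18, 25]]  # toy 3x3 "SAR" patch
print(bright_dark_density(img))
```

The toy image is, of course, only a stand-in for a real SAR subregion.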

1.1.2 Positive/negative class attributes

Mark the class attribute of each sample according to the result of matching the SAR image against the real-time image: if the image is suitable for matching, label the sample +1 (a suitable-region sample); if it is not suitable for matching, label it −1 (an unsuitable-region sample).

The two-dimensional feature information of each SAR image (bright/dark target density and structural salience intensity) together with its positive/negative class attribute forms the sample information corresponding to that image, and the samples of all SAR images form the sample set.

1.2 Data preprocessing

Preprocess the feature data of the sample set, i.e., remove the coupling between every pair of feature dimensions of the sample-set feature data, and normalize by feature dimension.

1.2.1 Decoupling

Analyzing the pairwise distribution of the features over the sample space shows that the linear relationship between any two feature dimensions is not obvious, so the features can be regarded as nonlinearly correlated. Schmidt orthogonalization is used to remove the coupling between the features. Here the feature matrix Z = {Z1, Z2, ..., Zm} denotes the m-dimensional features of the sample set.

If the vectors Z1, Z2, ..., Zm are linearly independent, take the reference vector b0 = (1, 0, ..., 0)^T and let

bk = Zk − Σ_{i=0}^{k−1} (⟨Zk, bi⟩ / ⟨bi, bi⟩) · bi,  k = 1, 2, ..., m.

Then b1, b2, ..., bm are pairwise orthogonal and form an orthogonal vector group; normalizing them to unit length yields the matrix X, where X = {X1, X2, ..., Xm}.

The process from matrix Z to matrix X is called Schmidt orthogonalization. After the original data are processed by this Schmidt orthogonalization, the coupling between the feature dimensions of the samples is removed and all samples share the same reference vector, which facilitates the later analysis of how a change in a single feature dimension affects the matching accuracy.
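A minimal sketch of the Schmidt (Gram–Schmidt) orthogonalization of section 1.2.1, assuming the standard projection formula with the reference vector b0 prepended; the patent's own formula is not reproduced in the text.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors, b0):
    """Orthogonalize `vectors` against the reference vector b0 and each other,
    then unit-normalize the results (b0 itself is dropped from the output)."""
    basis = [b0]
    for z in vectors:
        b = list(z)
        for prev in basis:  # subtract the projection of z onto each earlier b_i
            coef = dot(z, prev) / dot(prev, prev)
            b = [bi - coef * pi for bi, pi in zip(b, prev)]
        basis.append(b)
    ortho = basis[1:]
    return [[v / dot(b, b) ** 0.5 for v in b] for b in ortho]

# two 3-D feature vectors, reference vector (1, 0, 0)^T
X = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]], b0=[1.0, 0.0, 0.0])
print(X)
```

Note that with b0 included, at most (dimension − 1) feature vectors can be decoupled this way without degenerating to the zero vector.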

1.2.2 Per-dimension feature normalization

After decoupling, the m-dimensional feature set of the n samples is written as the matrix X = (xij) with n rows and m columns.

Here xij denotes the value of the j-th feature of the i-th sample. The samples are normalized dimension by dimension: if the j-th feature xij of sample i normalizes to yij, the mapping x → y is

yij = ymin + (xij − xmin)(ymax − ymin) / (xmax − xmin),

where [ymin, ymax] is the chosen normalization interval, xmax = max(x1j, x2j, ..., xnj), and xmin = min(x1j, x2j, ..., xnj).

The normalized feature matrix of the sample set is then Y = (yij), where ynm denotes the value of the m-th feature of the n-th sample after normalization. After the normalization used in the present invention, all data lie in the range [0, 1].
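The per-dimension min-max normalization of section 1.2.2 can be sketched as follows, assuming the usual linear mapping onto [ymin, ymax] (here [0, 1], matching the text) and non-constant columns.

```python
def normalize_columns(samples, y_min=0.0, y_max=1.0):
    """Min-max normalize each feature dimension (column) onto [y_min, y_max].
    Assumes every column contains at least two distinct values."""
    n, m = len(samples), len(samples[0])
    out = [row[:] for row in samples]
    for j in range(m):
        col = [samples[i][j] for i in range(n)]
        lo, hi = min(col), max(col)
        for i in range(n):
            out[i][j] = y_min + (samples[i][j] - lo) * (y_max - y_min) / (hi - lo)
    return out

Y = normalize_columns([[2.0, 10.0], [4.0, 30.0], [6.0, 20.0]])
print(Y)  # every value now lies in [0, 1]
```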

1.3 Probability that each feature dimension belongs to the positive/negative class

1.3.1 Training the support vector machine

Because of the differences among samples, the preprocessed sample-set data are divided into learning sets L1 and L2. The samples of L1 are used to train a support vector machine (SVM); the penalty coefficient c and the Gaussian-kernel parameter g of the SVM are tuned, the best parameters c and g are selected, and the SVM model is trained. The classifier is then tested on the samples of L2: the class attribute assigned to each sample by the resulting SVM classifier model is recorded and, with reference to the given positive/negative class attribute information, the classification accuracies of the positive and negative classes are obtained; these are the probabilities P+ and P− that the positive/negative class-centroid features of L1 belong to the positive/negative class.

Let the number of positive-attribute samples in learning set L2 be n1 and the number of negative-attribute samples be n2. After classification by the SVM classifier trained on L1, k1 positive-attribute samples are classified as negative and k2 negative-attribute samples are classified as positive. Define the feature vector of the positive class centroid of L1 as the mean of the positive-sample features xi+, and the feature vector of the negative class centroid as the mean of the negative-sample features xi−, where xi+ and xi− are the features of the positive and negative samples of L1. The probabilities P+ and P− that the positive/negative class-centroid features of L1 belong to the positive/negative class are then

P+ = (n1 − k1) / n1,  P− = (n2 − k2) / n2.
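A sketch of the centroid-probability computation of section 1.3.1; reading P+ and P− as the per-class accuracies (n1 − k1)/n1 and (n2 − k2)/n2 is a reconstruction, since the patent's formula is not reproduced in the text.

```python
def centroid_probabilities(n1, k1, n2, k2):
    """P+ / P-: fraction of positive / negative L2 samples the trained SVM
    classifies correctly (k1, k2 are the misclassified counts)."""
    p_plus = (n1 - k1) / n1
    p_minus = (n2 - k2) / n2
    return p_plus, p_minus

# e.g. 40 positives with 4 errors, 60 negatives with 9 errors
print(centroid_probabilities(40, 4, 60, 9))
```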

1.3.2 Probability pj+ that each feature dimension of an L2 sample belongs to the positive class

Let the mean feature of the positive-attribute class centroid of learning set L1 be μ+, with corresponding probability P+ of belonging to the positive class, where xi+ = (xi1, xi2, ..., xij) is a positive sample and the positive-sample features follow the Gaussian distribution x+ ~ N(μ+, σ+²). Assume that a learning-set sample feature and its probability of belonging to the positive class are related by a mapping. From the positive-sample Gaussian distribution, the probability pj+ that the j-th feature of an L2 sample belongs to the positive class has the form

pj+(x) = C1 · Φ((x − μj) / σj) + C2,

where μj is the mean of the j-th positive-sample feature, σj² is its variance, Φ is the standard normal cumulative distribution function, and C1, C2 are linear coefficients. The coefficients are fixed by three conditions:

when x = μj, pj+ corresponds to the probability P+ that the positive class centroid belongs to the positive class;

when x = +∞, the probability of belonging to the positive class is highest and is taken to be 1;

when x = −∞, the probability of belonging to the positive class is lowest and is taken to be 0.

Splitting the function at x = μj and substituting these conditions into the expression above gives

pj+(x) = 2P+ · Φ((x − μj) / σj) for x < μj,
pj+(x) = 2(1 − P+) · Φ((x − μj) / σj) + 2P+ − 1 for x ≥ μj.

1.3.3 Probability pj− that each feature dimension of an L2 sample belongs to the negative class

In the same way as in 1.3.2, let the mean feature of the negative-attribute class centroid of learning set L1 be μ−, with corresponding probability P− of belonging to the negative class, where xi− = (xi1, xi2, ..., xij) is a negative sample and the negative-sample features follow the Gaussian distribution x− ~ N(μ−, σ−²). Assume that a learning-set sample feature and its probability of belonging to the negative class are related by a mapping. From the negative-sample Gaussian distribution, the probability pj− that the j-th feature of an L2 sample belongs to the negative class has the form

pj−(x) = C1′ · Φ((μj − x) / σj) + C2′,

where μj is the class-centroid value of the j-th negative-sample feature, σj² is its variance, and C1′, C2′ are linear coefficients. The coefficients are fixed by three conditions:

when x = μ−, pj− corresponds to the probability P− that the negative class centroid belongs to the negative class;

when x = −∞, the probability of belonging to the negative class is highest and is taken to be 1;

when x = +∞, the probability of belonging to the negative class is lowest and is taken to be 0.

Splitting the function at x = μj and substituting these conditions into the expression above gives

pj−(x) = 2(1 − P−) · Φ((μj − x) / σj) + 2P− − 1 for x < μj,
pj−(x) = 2P− · Φ((μj − x) / σj) for x ≥ μj.

1.3.4 Suitability rate pj_match of each feature dimension of an L2 sample:

This step yields the probabilities pj+ and pj− that the j-th feature of an L2 sample belongs to the positive/negative class, as well as the suitability rate pj_match of each feature dimension of the L2 sample.
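The per-dimension probability mappings of sections 1.3.2–1.3.3 can be sketched as follows, assuming the piecewise Gaussian-CDF form implied by the stated boundary conditions (value P+ or P− at the class centroid, limits 1 and 0 at ±∞).

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_positive(x, mu, sigma, p_plus):
    """Probability that feature value x belongs to the positive class
    (increasing in x; equals p_plus at the centroid mu)."""
    z = phi((x - mu) / sigma)
    if x >= mu:
        return 2.0 * (1.0 - p_plus) * z + 2.0 * p_plus - 1.0
    return 2.0 * p_plus * z

def p_negative(x, mu, sigma, p_minus):
    """Probability that feature value x belongs to the negative class
    (decreasing in x; equals p_minus at the centroid mu)."""
    z = phi((mu - x) / sigma)
    if x < mu:
        return 2.0 * (1.0 - p_minus) * z + 2.0 * p_minus - 1.0
    return 2.0 * p_minus * z

print(p_positive(0.6, 0.6, 0.1, 0.9))  # at the centroid the value is P+
```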

1.4 Sensitivity of each feature dimension

Using the control-variable method and the classification accuracy P(j,k) of the corresponding SVM model on learning set L2, the sensitivity of each feature dimension of the L2 samples to the suitability is computed.

Since different sample features contribute differently to the matching performance, the concept of "sensitivity" is introduced, as follows:

Change the j-th feature vector xj of the L2 sample matrix while holding the remaining m − 1 feature vectors fixed; when xj = (k, k, k, ..., k)^T, denote the resulting sample matrix by X′.

Let k vary in turn over 0, 0.1, 0.2, ..., 1.0. After preprocessing the sample matrix, classify X′ with the previously trained SVM model; the number of correctly classified samples is n(j,k). With n denoting the total number of samples, the classification accuracy at this setting is P(j,k) = n(j,k) / n. Record the maximum Pjmax and minimum Pjmin of the classification accuracy:

Pjmax = max{ P(j,k) | k = 0, 0.1, ..., 1.0 }

Pjmin = min{ P(j,k) | k = 0, 0.1, ..., 1.0 }

The accuracy swings Pjmax − Pjmin of the m dimensions, normalized over the m dimensions, serve as the sensitivities of the corresponding feature dimensions to the suitability rate, i.e., the contribution weights of each dimension to the final suitability rate:

Wj = (Pjmax − Pjmin) / Σ_{j=1}^{m} (Pjmax − Pjmin),

where Wj is the weight corresponding to the j-th feature vector.
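The sensitivity computation of section 1.4 can be sketched as follows: sweep dimension j over the constant values k = 0.0, 0.1, ..., 1.0, record the classifier accuracy, and normalize each dimension's accuracy swing into a weight. The `classify` argument is a hypothetical stand-in for the trained SVM model, and normalizing the swings by their sum is an assumption.

```python
def sensitivities(samples, labels, classify, m):
    """Weight W_j per feature dimension from the accuracy swing observed
    while dimension j is held at a constant k and the others are fixed."""
    swings = []
    for j in range(m):
        accs = []
        for step in range(11):  # k = 0.0, 0.1, ..., 1.0
            k = step / 10.0
            probe = [row[:j] + [k] + row[j + 1:] for row in samples]
            correct = sum(1 for row, y in zip(probe, labels) if classify(row) == y)
            accs.append(correct / len(samples))
        swings.append(max(accs) - min(accs))  # P_jmax - P_jmin
    total = sum(swings)
    return [s / total for s in swings]  # assumed normalization over m dims

# toy stand-in classifier: positive iff the feature sum exceeds 1.0
clf = lambda row: 1 if row[0] + row[1] > 1.0 else -1
W = sensitivities([[0.2, 0.3], [0.9, 0.8]], [-1, 1], clf, m=2)
print(W)
```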

1.5 Computing the suitability rate of sample i

From the probabilities, obtained in step 1.3, that each feature dimension of an L2 sample belongs to the positive/negative class, and the sensitivities of each feature dimension of the L2 samples obtained in step 1.4, the suitability rate of every L2 sample is computed; the suitability rate of each L2 sample together with its per-dimension feature information constitutes the new L2 sample information.

The matching accuracy (suitability rate) finally obtained for a sample of learning set L2 is then

Pi = Σ_{j=1}^{m} Wj · (a · Pj+ + b · (1 − Pj−)), with a + b = 1,

where m denotes the dimensionality of the sample feature vector, Wj denotes the sensitivity of the sample matching performance to the j-th feature vector, Pj+ denotes the probability that the j-th feature of the sample belongs to the positive class, and Pj− the probability that it belongs to the negative class. When Pj+ → 1, Pj− → 0; when Pj+ → 0, Pj− → 1. From this symmetry the calculation gives a = 0.5 and b = 0.5, so

Pi = Σ_{j=1}^{m} Wj · (Pj+ + 1 − Pj−) / 2.
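A sketch of the per-sample suitability rate of section 1.5, assuming the weighted combination Pi = Σj Wj · (Pj+ + 1 − Pj−)/2 with a = b = 0.5 as stated in the text; the patent's own formula is not reproduced there, so this form is a reconstruction.

```python
def suitability_rate(p_plus, p_minus, weights):
    """Combine per-dimension class probabilities into one suitability rate,
    weighting each dimension by its sensitivity W_j (a = b = 0.5)."""
    return sum(w * (pp + 1.0 - pm) / 2.0
               for w, pp, pm in zip(weights, p_plus, p_minus))

# two feature dimensions with equal sensitivity weights
rate = suitability_rate([0.9, 0.7], [0.1, 0.3], [0.5, 0.5])
print(rate)
```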

1.6 Regression function model

From the new L2 sample-set information obtained in step 1.5, a regression is fitted to obtain the image suitability prediction function model.

For the learning-set samples, the features xi of each L2 sample and the suitability rate Pi computed in step 1.5 form the new L2 samples, whose sample-point attribute information can be written as (xi, Pi). The SVM parameters c and g are tuned with the method of step 1.3.1, selecting the parameters c and g that give the best SVM performance. Here the libsvm library of Prof. Chih-Jen Lin is used to assist in building the sample suitability-rate prediction model: training an SVM model on the new L2 samples obtained above yields the suitability prediction model F(x) for the L2 samples.

We use the mean squared error (MSE) and the squared correlation coefficient (r2) to verify the performance of the prediction model:

MSE = (1/l) * sum(i=1..l) (f(xi) - yi)^2

r2 = (l * sum f(xi)yi - sum f(xi) * sum yi)^2 / ((l * sum f(xi)^2 - (sum f(xi))^2)(l * sum yi^2 - (sum yi)^2))

where yi is the input adaptation rate of training sample i, f(xi) is the predicted adaptation rate of training sample i, and l is the number of training samples.
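A minimal sketch of the two verification measures, on hypothetical input and predicted adaptation rates:

```python
import numpy as np

def mse(y, f):
    """Mean squared error between input adaptation rates y_i and predictions f(x_i)."""
    y, f = np.asarray(y, dtype=float), np.asarray(f, dtype=float)
    return float(np.mean((f - y) ** 2))

def squared_corr(y, f):
    """Squared (Pearson) correlation coefficient r^2 between y and f."""
    y, f = np.asarray(y, dtype=float), np.asarray(f, dtype=float)
    r = np.corrcoef(y, f)[0, 1]
    return float(r ** 2)

# Hypothetical input vs. predicted adaptation rates for l = 4 training samples.
y = [0.2, 0.5, 0.8, 0.9]
f = [0.25, 0.45, 0.75, 0.95]
print(mse(y, f), squared_corr(y, f))
```

A small MSE and an r2 close to 1 indicate that the fitted regression tracks the input adaptation rates well.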

2 Prediction stage

The regression function model is used to predict the adaptation rate of the SAR sub-image to be evaluated.

Figure 3 shows some of the SAR images to be evaluated in this embodiment. For each SAR image to be evaluated, the bright and dark target density and structural saliency strength features are extracted following step 1.1 of the learning stage, and the data are preprocessed following step 1.2; the processed data are then fed to the sample adaptation-rate regression model of step 1.6, which predicts the adaptation rate of the SAR image to be evaluated, i.e.:

PMatchProbability = F(x1, x2, ..., xm)

where PMatchProbability is the adaptation rate of the sample and xm is the m-th feature of the sample.

Figure 4 shows the predicted adaptation rates of the SAR images to be evaluated in this embodiment: samples 1-12 are SAR images with poor matching performance (0 <= p < 0.4); samples 13-20 are SAR images with moderate matching performance (0.4 <= p < 0.7); samples 21-31 are SAR images with strong matching performance (0.7 <= p <= 1). Figure 5 compares the predicted adaptation rates with the corresponding SAR image verification results: Figure 5(a) shows the poorly matching images and their predicted adaptation rates (samples 1-12, 0 <= p < 0.4), Figure 5(b) the moderately matching images (samples 13-20, 0.4 <= p < 0.7), and Figure 5(c) the strongly matching images (samples 21-31, 0.7 <= p <= 1).
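The grading used in Figures 4 and 5 can be sketched as a simple binning of the predicted adaptation rate (the sample values below are hypothetical):

```python
# Bin a predicted adaptation rate p into the three matching grades of Figure 4:
# poor (0 <= p < 0.4), moderate (0.4 <= p < 0.7), strong (0.7 <= p <= 1).
def matching_grade(p):
    if p < 0.4:
        return "poor"
    if p < 0.7:
        return "moderate"
    return "strong"

rates = [0.1, 0.45, 0.8]                      # hypothetical predicted rates
print([matching_grade(p) for p in rates])     # ['poor', 'moderate', 'strong']
```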

Claims (8)

1. A SAR image adaptability prediction method based on support vector regression is characterized by comprising the following steps:
(1) extracting the bright and dark target density and structural saliency strength features of the SAR training images; the feature set together with the given positive/negative category attribute forms the sample information of each SAR image, and the sample information of all SAR training images forms a learning set;
(2) preprocessing the feature data in the learning set, namely removing the coupling between every pair of feature dimensions and normalizing the decoupled features dimension by dimension;
(3) dividing the preprocessed learning set into a learning set L1 and a learning set L2; training a support vector machine with the samples of L1 to obtain an SVM classifier model for positive/negative-attribute samples and the Gaussian distribution characteristics of the positive/negative sample features; testing the performance of the classifier with the samples of L2, counting the category attribute of each sample after classification by the SVM classifier model, and, according to the given positive/negative category attribute information, calculating the probabilities P+, P- that the class-center features of the positive/negative samples of L1 belong to the positive/negative class;
(4) using the class-center features of the positive/negative samples of L1 and their probabilities of belonging to the positive/negative class, together with the Gaussian distribution characteristics of each feature dimension of the positive/negative samples of L1, obtaining the mapping from each feature dimension of a learning-set sample to the probability of belonging to the positive/negative class; thereby calculating, for every sample of L2, the probabilities pj+, pj- that its j-th feature belongs to the positive/negative class, and then the adaptation rate pj_match of each feature dimension, where j is the index of the feature dimension, j takes the values 1 to m, and m is the dimension of the sample feature vector;
(5) obtaining the classification accuracy P(j,k) of the learning set L2 by the control-variable method and the corresponding SVM model, and from it calculating the sensitivity of each feature dimension of L2 to the adaptability, where k is the value assigned to every element of the j-th feature vector and k varies in turn over 0, 0.1, 0.2, ..., 1.0;
(6) calculating the adaptation rate of each L2 sample from the per-dimension adaptation rates of L2 obtained in step (4) and the per-dimension sensitivities of L2 obtained in step (5); the adaptation rate of an L2 sample together with its feature information of every dimension forms the new L2 sample information;
(7) fitting a regression on the new L2 sample information obtained in step (6) to obtain the image adaptability prediction function model;
(8) for the SAR image to be evaluated, extracting its corresponding features and preprocessing the data according to the methods of steps (1) and (2); the processed data are passed through the adaptability prediction model of step (7) to predict the adaptation rate of the SAR image to be evaluated.
2. The method of claim 1, wherein removing the coupling between every pair of feature dimensions of the feature data in the learning set in step (2) is specifically:
let the feature matrix Z = {Z1, Z2, ..., Zm} denote the m-dimensional features of the learning set, and let the reference vector be b0 = (1, 0, ..., 0)T; define
b1 = Z1 - ((Z1, b0)/(b0, b0)) b0
b2 = Z2 - ((Z2, b1)/(b1, b1)) b1 - ((Z2, b0)/(b0, b0)) b0
...
bm = Zm - ((Zm, b0)/(b0, b0)) b0 - ((Zm, b1)/(b1, b1)) b1 - ((Zm, b2)/(b2, b2)) b2 - ... - ((Zm, bm-1)/(bm-1, bm-1)) bm-1
then b1, b2, ..., bm are pairwise orthogonal and form an orthogonal vector group; unitizing them yields the matrix X, where X = {X1, X2, ..., Xm} and n is the total number of samples;
X = [x11 x12 ... x1m; x21 x22 ... x2m; ...; xn1 xn2 ... xnm]

X1 = b1/||b1||,  X2 = b2/||b2||,  ...,  Xm = bm/||bm||.
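The decoupling of claim 2 can be sketched in a few lines of NumPy (the feature matrix below is hypothetical; the reference vector b0 is the first standard basis vector, as in the claim):

```python
import numpy as np

def decouple(Z):
    """Gram-Schmidt decoupling of the columns Z1..Zm of the feature matrix,
    starting from the reference vector b0 = (1, 0, ..., 0)^T, followed by
    unitization of b1..bm into X1..Xm."""
    n, m = Z.shape
    b0 = np.zeros(n)
    b0[0] = 1.0
    basis = [b0]
    cols = []
    for j in range(m):
        b = Z[:, j].astype(float).copy()
        for q in basis:                       # subtract projections (b, q)/(q, q) q
            b = b - (b @ q) / (q @ q) * q
        basis.append(b)
        cols.append(b / np.linalg.norm(b))    # X_j = b_j / ||b_j||
    return np.column_stack(cols)

rng = np.random.default_rng(0)
Z = rng.random((6, 3))                        # n = 6 samples, m = 3 features
X = decouple(Z)
print(np.round(X.T @ X, 6))                   # identity: columns pairwise orthonormal
```

This is the modified (numerically stabler) form of Gram-Schmidt; the resulting columns are pairwise orthonormal, which is exactly the decoupling property the claim requires.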
3. The method according to claim 1 or 2, wherein normalizing the decoupled features dimension by dimension in step (2) is specifically:
let xij be the j-th feature of the i-th sample and yij its normalized value; the mapping x -> y is:
yij = (ymax - ymin) (xij - xmin)/(xmax - xmin) + ymin
where [ymin, ymax] is the set normalization interval, xmax = max(x1j, x2j, ..., xnj) and xmin = min(x1j, x2j, ..., xnj);
The normalized sample set feature matrix is
Y = [y11 y12 ... y1m; y21 y22 ... y2m; ...; yn1 yn2 ... ynm]
where ynm represents the normalized m-th feature value of the n-th sample.
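A minimal NumPy sketch of the per-dimension min-max normalization above, on a hypothetical 3x2 feature matrix with the set interval [ymin, ymax] = [0, 1]:

```python
import numpy as np

def normalize_columns(X, y_min=0.0, y_max=1.0):
    """Per-dimension min-max normalization of claim 3: each column j is mapped
    linearly from [x_min, x_max] of that column onto [y_min, y_max]."""
    X = np.asarray(X, dtype=float)
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    return (y_max - y_min) * (X - x_min) / (x_max - x_min) + y_min

X = [[1.0, 10.0], [2.0, 30.0], [3.0, 20.0]]   # hypothetical decoupled features
Y = normalize_columns(X)
print(Y)                                      # [[0, 0], [0.5, 1], [1, 0.5]]
```

Normalizing each dimension separately keeps features with different dynamic ranges comparable before the SVM is trained.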
4. The method according to claim 1 or 2, wherein the probabilities P+, P- that the class-center features of the positive/negative samples of the learning set L1 belong to the positive/negative class are calculated in step (3) as follows:
let the learning set L2 contain n1 samples with positive attribute and n2 samples with negative attribute; after classification by the SVM classifier trained on L1, k1 positive-attribute samples are divided into the negative class and k2 negative-attribute samples are divided into the positive class; define the class-center feature of the positive samples of L1 as the mean of the positive-sample features xi+, and the class-center feature of the negative samples of L1 as the mean of the negative-sample features xi-, where n represents the total number of samples and xi+, xi- are respectively the features of the positive and negative samples of L1; then the probabilities P+, P- that the class-center features of the positive/negative samples of L1 belong to the positive/negative class are:
P+ = (n1 - k1)/n1
P- = (n2 - k2)/n2.
5. The method according to claim 1 or 2, wherein the adaptation rate of each feature dimension of the learning-set L2 samples is obtained in step (4) as follows:
(4.1) probability pj+ that the j-th feature of an L2 sample belongs to the positive class:
let the class-center (mean) feature of the positive-attribute samples of L1 correspond to the probability P+ of belonging to the positive class, where xi+ = (xi1, xi2, ... xij); the positive-sample features obey a Gaussian distribution x+ ~ N(mu+, sigma+^2); assuming a mapping between a learning-set sample feature and its probability of belonging to the positive class, the Gaussian distribution of the positive samples gives the probability pj+ that the j-th feature of an L2 sample belongs to the positive class:
pj+ = C1 * (1/(sqrt(2*pi) * sigma_j+)) Int[-inf, x] exp(-(t - mu_j+)^2/(2*sigma_j+^2)) dt + C2
where mu_j+ is the mean of the j-th feature of the positive samples, sigma_j+^2 its variance, and C1, C2 are linear coefficients;
when x = mu_j+, pj+ corresponds to the probability P+ that the positive class center belongs to the positive class;
when x -> +inf, the probability of belonging to the positive class is highest, and is taken as 1;
when x -> -inf, the probability of belonging to the positive class is lowest, and is taken as 0;
writing the function piecewise about x = mu_j+ gives
pj+ = 2*P+ * (1/(sqrt(2*pi) * sigma_j+)) Int[-inf, x] exp(-(t - mu_j+)^2/(2*sigma_j+^2)) dt,  x <= mu_j+
pj+ = 2*(1 - P+) * (1/(sqrt(2*pi) * sigma_j+)) Int[-inf, x] exp(-(t - mu_j+)^2/(2*sigma_j+^2)) dt + 2*P+ - 1,  x >= mu_j+
(4.2) probability pj- that the j-th feature of an L2 sample belongs to the negative class:
let the class-center feature of the negative-attribute samples of L1 correspond to the probability P- of belonging to the negative class, where xi- = (xi1, xi2, ... xij); the negative-sample features obey a Gaussian distribution x- ~ N(mu-, sigma-^2); assuming a mapping between a learning-set sample feature and its probability of belonging to the negative class, the Gaussian distribution of the negative samples gives the probability pj- that the j-th feature of an L2 sample belongs to the negative class:
pj- = C1' * (1 - (1/(sqrt(2*pi) * sigma_j-)) Int[-inf, x] exp(-(t - mu_j-)^2/(2*sigma_j-^2)) dt) + C2'
where mu_j- is the mean of the j-th feature of the negative samples, sigma_j-^2 its variance, and C1', C2' are linear coefficients;
when x = mu_j-, pj- corresponds to the probability P- that the negative class center belongs to the negative class;
when x -> -inf, the probability of belonging to the negative class is highest, and is taken as 1;
when x -> +inf, the probability of belonging to the negative class is lowest, and is taken as 0;
writing the function piecewise about x = mu_j- gives
pj- = 2*(1 - P-) * (1 - (1/(sqrt(2*pi) * sigma_j-)) Int[-inf, x] exp(-(t - mu_j-)^2/(2*sigma_j-^2)) dt) + 2*P- - 1,  x <= mu_j-
pj- = 2*P- * (1 - (1/(sqrt(2*pi) * sigma_j-)) Int[-inf, x] exp(-(t - mu_j-)^2/(2*sigma_j-^2)) dt),  x >= mu_j-
(4.3) adaptation rate pj_match of each feature dimension of an L2 sample:
pj_match = pj+ - pj-
this step yields the probabilities Pj+, Pj- that the j-th feature of an L2 sample belongs to the positive/negative class and the adaptation rate Pj_match of each feature dimension of the L2 samples.
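The piecewise probability of (4.1) can be sketched with the Gaussian CDF via `math.erf`; the values of mu, sigma, and P+ below are hypothetical:

```python
import math

def p_positive(x, mu, sigma, P_plus):
    """Piecewise mapping of claim 5 (4.1) from a feature value x to the probability
    of belonging to the positive class, anchored so that p = P_plus at x = mu,
    p -> 1 as x -> +inf, and p -> 0 as x -> -inf (Gaussian CDF segments)."""
    Phi = 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    if x <= mu:
        return 2.0 * P_plus * Phi
    return 2.0 * (1.0 - P_plus) * Phi + 2.0 * P_plus - 1.0

print(p_positive(0.5, 0.5, 0.1, 0.8))   # at the class centre: exactly P_plus = 0.8
```

The two branches agree at x = mu (both give P_plus), so the mapping is continuous, and the negative-class mapping of (4.2) is the mirror image of this one.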
6. The method according to claim 1 or 2, wherein the sensitivity of each feature dimension of the learning-set L2 samples to the suitability is calculated in step (5) as follows:
change the j-th feature vector xj of the L2 sample matrix while holding the remaining m - 1 feature vectors fixed; with xj = (k, k, k, ... k)T and k varying in turn over 0, 0.1, 0.2, ..., 1.0, the sample matrix is
X' = [x11 x12 ... k ... x1m; x21 x22 ... k ... x2m; ...; xn1 xn2 ... k ... xnm]
after preprocessing the sample matrix, classify X' with the previously trained SVM model; if n(j,k) of the n samples are classified correctly, the classification accuracy is P(j,k) = n(j,k)/n; record the maximum value Pjmax and the minimum value Pjmin of the classification accuracy:
Pjmax=max{P(j,k)|k=0,0.1,...,1.0}
Pjmin=min{P(j,k)|k=0,0.1,...,1.0}
the values normalized over the m dimensions are taken as the sensitivities of the adaptation rate to the corresponding feature dimensions:
Wj = (Pjmax - Pjmin) / sum(j=1..m) (Pjmax - Pjmin)
sum(j=1..m) Wj = 1
where Wj is the sensitivity of the adaptation rate to the j-th feature vector.
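The control-variable sweep of claim 6 can be sketched as follows, with a hypothetical stand-in `accuracy_fn` in place of the SVM classification accuracy P(j,k):

```python
import numpy as np

def sensitivities(X, accuracy_fn, ks=np.arange(0.0, 1.01, 0.1)):
    """Sweep feature j over k = 0, 0.1, ..., 1.0 while holding the other
    dimensions fixed, record accuracy P_(j,k), and normalize the per-dimension
    accuracy ranges (P_jmax - P_jmin) into weights W_j with sum_j W_j = 1."""
    n, m = X.shape
    ranges = []
    for j in range(m):
        acc = []
        for k in ks:
            Xk = X.copy()
            Xk[:, j] = k                       # x_j = (k, k, ..., k)^T
            acc.append(accuracy_fn(Xk))        # stand-in for P_(j,k)
        ranges.append(max(acc) - min(acc))     # P_jmax - P_jmin
    ranges = np.array(ranges)
    return ranges / ranges.sum()               # W_j

rng = np.random.default_rng(0)
X = rng.random((20, 3))
# Stand-in accuracy: depends strongly on feature 0, weakly on 1, not at all on 2.
acc_fn = lambda M: 0.5 + 0.4 * M[:, 0].mean() + 0.05 * M[:, 1].mean()
W = sensitivities(X, acc_fn)
print(W)
```

Dimensions whose sweep changes the accuracy a lot receive large W_j; a dimension the classifier ignores gets W_j = 0.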
7. The method according to claim 1 or 2, wherein the adaptation rate Pi of the i-th sample of the learning set L2 is obtained in step (6) as follows:
Pi = a * sum(j=1..m) Wj (Pj+ - Pj-) + b
a+b=1
where m denotes the dimension of the sample feature vector, Wj the sensitivity of the adaptation rate to the j-th feature vector, Pj+ the probability that the j-th feature of an L2 sample belongs to the positive class, and Pj- the probability that the j-th feature of an L2 sample belongs to the negative class;
when Pj+ -> 1, Pj- -> 0 and Pi -> 1;
when Pj+ -> 0, Pj- -> 1 and Pi -> 0;
taking a = 0.5 and b = 0.5 gives
Pi = 0.5 * sum(j=1..m) Wj (Pj+ - Pj-) + 0.5.
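The adaptation-rate formula of claim 7 is small enough to sketch directly; the weight and probability values below are hypothetical:

```python
import numpy as np

def adaptation_rate(W, p_pos, p_neg, a=0.5, b=0.5):
    """Claim 7's per-sample adaptation rate:
    P_i = a * sum_j W_j (P_j+ - P_j-) + b, with a + b = 1 and a = b = 0.5,
    so P_i -> 1 when every P_j+ -> 1 and P_i -> 0 when every P_j+ -> 0."""
    W, p_pos, p_neg = map(np.asarray, (W, p_pos, p_neg))
    return float(a * np.sum(W * (p_pos - p_neg)) + b)

W = [0.5, 0.3, 0.2]                                  # sensitivities, sum to 1
print(adaptation_rate(W, [1, 1, 1], [0, 0, 0]))      # all-positive limit -> 1
print(adaptation_rate(W, [0, 0, 0], [1, 1, 1]))      # all-negative limit -> 0
```

The choice a = b = 0.5 maps the weighted difference, which ranges over [-1, 1], linearly onto the adaptation-rate interval [0, 1].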
8. The method of claim 1 or 2, wherein the features extracted in step (1) are:
bright and dark target density: the proportion of bright and dark target pixels, where a bright target pixel is a point whose gray value is greater than 2/3 of the full-image gray level of the SAR image, and a dark target pixel is a point whose gray value is less than 1/3 of it;
structural saliency strength: binary edge extraction is performed on the radar image, connected-component labeling is used to remove noise components containing few pixels, and the ratio of the total number of labeled pixels to the mean of the image width and height is taken.
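One plausible reading of the bright/dark target density feature, with thresholds at 1/3 and 2/3 of the image's gray-level range (both the interpretation and the sample image are assumptions):

```python
import numpy as np

def bright_dark_density(img):
    """Fraction of pixels that are bright targets (gray value above 2/3 of the
    image's gray-level range) or dark targets (below 1/3 of it)."""
    img = np.asarray(img, dtype=float)
    lo, hi = img.min(), img.max()
    t_dark = lo + (hi - lo) / 3.0
    t_bright = lo + 2.0 * (hi - lo) / 3.0
    mask = (img > t_bright) | (img < t_dark)
    return float(mask.mean())

img = np.array([[0, 50, 100], [150, 200, 250], [10, 240, 120]])  # toy 3x3 image
print(bright_dark_density(img))   # 6 of 9 pixels are bright or dark targets
```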
CN201510075677.6A 2015-02-12 2015-02-12 A kind of SAR image suitability Forecasting Methodology based on support vector regression Active CN104636758B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510075677.6A CN104636758B (en) 2015-02-12 2015-02-12 A kind of SAR image suitability Forecasting Methodology based on support vector regression


Publications (2)

Publication Number Publication Date
CN104636758A CN104636758A (en) 2015-05-20
CN104636758B true CN104636758B (en) 2018-02-16

Family

ID=53215486



Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106054189B (en) * 2016-07-17 2018-06-05 西安电子科技大学 Radar target identification method based on dpKMMDP models
CN110246134A (en) * 2019-06-24 2019-09-17 株洲时代电子技术有限公司 A rail defect and damage classifier

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102073873A (en) * 2011-01-28 2011-05-25 华中科技大学 Method for selecting SAR (spaceborne synthetic aperture radar) scene matching area on basis of SVM (support vector machine)
CN102663436A (en) * 2012-05-03 2012-09-12 武汉大学 Self-adapting characteristic extracting method for optical texture images and synthetic aperture radar (SAR) images
CN102902979A (en) * 2012-09-13 2013-01-30 电子科技大学 Method for automatic target recognition of synthetic aperture radar (SAR)
CN103942749A (en) * 2014-02-24 2014-07-23 西安电子科技大学 Hyperspectral ground feature classification method based on modified cluster hypothesis and semi-supervised extreme learning machine

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Scene Infrared Image Simulation; Lamei Zou et al.; International Journal of Digital Content Technology and its Applications; 2012-03-31; Vol. 6, No. 4, pp. 77-86 *
Real-aperture radar scene matching and localization method based on detection and recognition; Yang Weidong et al.; Journal of Huazhong University of Science and Technology; 2005-02-28; Vol. 33, No. 2, pp. 25-27 *

Also Published As

Publication number Publication date
CN104636758A (en) 2015-05-20

Similar Documents

Publication Publication Date Title
CN111160176B (en) Fusion feature-based ground radar target classification method for one-dimensional convolutional neural network
CN106203523B (en) The hyperspectral image classification method of the semi-supervised algorithm fusion of decision tree is promoted based on gradient
CN112232371B (en) American license plate recognition method based on YOLOv3 and text recognition
CN113159048A (en) Weak supervision semantic segmentation method based on deep learning
CN106952643A (en) A Clustering Method of Recording Devices Based on Gaussian Mean Supervector and Spectral Clustering
CN112862093B (en) Graphic neural network training method and device
CN102968620B (en) Scene recognition method based on layered Gaussian hybrid model
CN108830243A (en) Hyperspectral image classification method based on capsule network
CN104252625A (en) Sample adaptive multi-feature weighted remote sensing image method
CN103593855A (en) Clustered image splitting method based on particle swarm optimization and spatial distance measurement
CN113723492A (en) Hyperspectral image semi-supervised classification method and device for improving active deep learning
CN105427313A (en) Deconvolutional network and adaptive inference network based SAR image segmentation method
CN107478418A (en) A kind of rotating machinery fault characteristic automatic extraction method
CN103177265A (en) High-definition image classification method based on kernel function and sparse coding
CN106250913B (en) A kind of combining classifiers licence plate recognition method based on local canonical correlation analysis
CN111540203B (en) Method for adjusting green light passing time based on fast-RCNN
CN111914922B (en) Hyperspectral image classification method based on local convolution and cavity convolution
CN111639688B (en) Local interpretation method of Internet of things intelligent model based on linear kernel SVM
CN104077771B (en) A kind of weighting method realizes the mixed model image partition method of space limitation
CN104636758B (en) A kind of SAR image suitability Forecasting Methodology based on support vector regression
CN109523514A (en) To the batch imaging quality assessment method of Inverse Synthetic Aperture Radar ISAR
CN111652177A (en) Signal feature extraction method based on deep learning
CN116310466A (en) Small sample image classification method based on local irrelevant area screening graph neural network
CN110689092A (en) Sole pattern image depth clustering method based on data guidance
CN111325158B (en) CNN and RFC-based integrated learning polarized SAR image classification method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant