CN113159157B - Improved low-frequency UWB SAR foliage covert target fusion change detection method - Google Patents
- Publication number: CN113159157B (application CN202110407968.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- detection
- target
- detection threshold
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/251—Fusion techniques of input or preprocessed data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
Description
Technical Field
The present invention relates to the field of radar technology and, more specifically, to an improved low-frequency UWB SAR foliage-concealed target fusion change detection method.
Background
In modern warfare, both sides increasingly emphasize concealing their own military targets while improving their ability to detect and reconnoiter the enemy's concealed ones. Research on concealed-target detection can therefore provide important theoretical and technical support for the development of new battlefield reconnaissance and guidance weapon systems, and is of significant military value. In addition, China's border terrain is complex and many regions are densely forested, which makes it convenient for neighboring countries to deploy military targets and adjust their defenses near the border. Because of the jungle canopy, conventional radar systems cannot penetrate the forest for effective detection and reconnaissance. There is thus an urgent need to develop radar systems with advanced architectures, together with matching detection systems, to improve the ability to detect concealed military targets.
Low-frequency ultra-wideband synthetic aperture radar (UWB SAR) offers good foliage-penetration performance and high-resolution imaging, and has become an important tool for detecting and reconnoitering concealed targets. Owing to the complex forest environment and the low-frequency UWB regime, the resulting UWB SAR images often contain many strong non-target scatterers (such as thick tree trunks). These non-target, speckle-like scatterers produce many false-alarm points when change detection is performed on low-frequency UWB SAR images of different epochs, which increases the difficulty of detecting changes in foliage-concealed targets.
Current low-frequency UWB SAR image target detection methods fall into three main classes. The first is pixel-level change detection: the same pixel is compared across images of different epochs to decide whether it has changed; common examples are the difference method and the ratio method. Pixel-level methods are simple to implement, but analysis of raw pixel gray values is strongly affected by noise, so actual detection performance is often poor. The second class is feature-level change detection: features are extracted from the region around each pixel and analyzed jointly to detect changed feature points; commonly used algorithms include the Edgeworth method and the gray-level co-occurrence matrix method. Feature-level methods compare image texture, edge and similar information, which gives them good stability and interference resistance, but they are more complex to implement than pixel-level methods. The third class is object-level change detection: features are first extracted from each single-epoch image and the image is classified to obtain attribute information for the target points, after which change detection is performed on the classified images. Compared with the other two classes, object-level change detection is a higher-level technique; its research and application are extremely difficult, and it is still in its infancy.
Most of the above methods perform change detection using only one kind of change information about the target points in the SAR image. A single change detection method can hardly exploit all the change information available, and detection targets are easily missed. In recent years, target detection that fuses multiple methods has attracted growing attention; improving and fusing several low-frequency UWB SAR foliage-concealed target detection methods can yield better detection results. Commonly used fusion change detection methods include the Markov model method and the Laplacian eigenmap support vector data description (LE-SVDD) method. LE-SVDD-based fusion can combine multiple algorithms and performs better than any single detector. However, because the sub-algorithms and detection thresholds of the traditional LE-SVDD fusion method are chosen poorly, its fused detection results are unsatisfactory, and efficient, high-precision target detection in low-frequency UWB SAR images is difficult to achieve.
Summary of the Invention
The present invention provides an improved low-frequency UWB SAR foliage-concealed target fusion change detection method that effectively suppresses the many strong non-target scatterers present in low-frequency UWB SAR images and reduces the occurrence of false-alarm points during change detection.
To achieve the above technical effect, the technical scheme of the present invention is as follows:
An improved low-frequency UWB SAR foliage-concealed target fusion change detection method comprises the following steps:
S1: preprocess the detection image and the reference image;
S2: compute the test-statistic image and detection threshold of the image-segmentation difference method based on an improved clutter distribution model;
S3: compute the test-statistic images and detection thresholds of the one-dimensional Edgeworth method and the generalized Laguerre polynomial method;
S4: feed the three test-statistic images and their detection thresholds into the LE-SVDD classifier for training, then discriminate targets from non-targets on the test samples; the result is the final change detection output.
Further, step S1 comprises:
preprocessing the detection image and the reference image with bidirectional relative radiometric correction, removing gray-value variations caused by non-target factors such as system response and weather conditions.
The specific process of step S1 is:
For each pixel of the observed area, the correlation coefficient between the reference image and the detection image is computed over a 15×15 pixel neighborhood centered on that pixel. When the correlation coefficient exceeds 0.5, the pixel is marked as a no-change point. The no-change points over the whole detection image and reference image are denoted x1, x2, ..., xt and y1, y2, ..., yt, respectively. Bidirectional linear estimation by least squares yields initial linear coefficients; these coefficients are then used to assign weights to the no-change points for a weighted least-squares estimate, producing updated coefficients. Weight assignment and re-estimation are repeated until the linear coefficients stabilize at ka, ba, ko and bo.
Here, x̄ and ȳ denote the means of the no-change points x1, x2, ..., xt and y1, y2, ..., yt, respectively.
All pixels of the detection image are then mapped through the resulting linear transformation to obtain the corrected detection image.
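By way of illustration, the iteratively re-weighted fit of step S1 can be sketched as follows (Python/NumPy). The exact weighting rule is not specified above, so an inverse-absolute-residual weight is assumed here, and the function names are illustrative; in the full method the fit is run in both directions over no-change points selected by the 15×15 local-correlation test.

```python
import numpy as np

def radiometric_fit(x, y, n_iter=10, eps=1e-2):
    """Iteratively re-weighted least-squares fit y ~ k*x + b over no-change
    points (x from the detection image, y from the reference image).
    Weighting by inverse absolute residual is an assumption; the text only
    says weights are re-assigned until the coefficients stabilize."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.ones_like(x)
    k, b = 0.0, 0.0
    for _ in range(n_iter):
        sw = w.sum()
        xm, ym = (w * x).sum() / sw, (w * y).sum() / sw        # weighted means
        k = (w * (x - xm) * (y - ym)).sum() / (w * (x - xm) ** 2).sum()
        b = ym - k * xm                                        # line through the weighted means
        w = 1.0 / (np.abs(y - (k * x + b)) + eps)              # down-weight large residuals
    return k, b

def correct_detection_image(det, k, b):
    """Map every pixel of the detection image onto the reference radiometry."""
    return k * det + b
```

The re-weighting progressively suppresses points whose residuals are large, so occasional contaminated "no-change" points barely affect the converged coefficients.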
Further, step S2 comprises:
The image-segmentation difference method based on an improved clutter distribution model is a pixel-level change detection method. A difference test-statistic image is obtained by subtracting, pixel by pixel, the images of different epochs. The difference image is then segmented, the probability density of each non-target region of the segmented image is estimated, and a false-alarm probability is set; the corresponding detection threshold is read off the probability density. Any value above the threshold is treated as a change-detection point. This yields the test-statistic image and detection threshold of the first sub-algorithm.
The specific process of step S2 is:
Let S1 and S2 be SAR intensity images observed at different times, where S1 is the target-free reference image and S2 is the detection image, and let Zdif = S2 - S1 be the difference test-statistic image formed from S1 and S2.
First, the reference image is segmented with the OTSU image segmentation algorithm. The equivalent number of looks of the reference image is set to 1 and the initial number of segments to N = 2. After each segmentation, if the minimum over all sub-regions of the squared error divided by the squared mean is smaller than the equivalent number of looks, segmentation stops; otherwise N = N + 1 and the image is segmented again, until the loop exits.
Next, the clutter distribution of each region is estimated. Let the clutter distribution of region i of the difference test-statistic image be Fi(zdif), i ∈ N, and set the false-alarm probability to 10^-8; the detection threshold of each region is then obtained from Fi(zdif) at this false-alarm probability. Change detection of newly appearing targets in image S2 by the image difference method declares a change point at a pixel whenever zdif = s2 - s1 > Tdif, where zdif, s1 and s2 are the gray values of the same pixel in images Zdif, S1 and S2, and Tdif is the detection threshold set according to the false-alarm rate.
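A minimal sketch of this sub-algorithm follows, under simplifying assumptions: a single Otsu split stands in for the iterative N-region segmentation, and an empirical per-region quantile stands in for the fitted clutter distribution Fi(zdif). At the 10^-8 false-alarm rate of the text a parametric clutter model would be required; a much larger rate is used here purely for demonstration.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Single Otsu threshold on a float image (maximizes between-class variance)."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    w = hist.astype(float)
    total = w.sum()
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = w[:k].sum(), w[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (w[:k] * centers[:k]).sum() / w0
        m1 = (w[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2 / total ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[k]
    return best_t

def segmented_difference_detect(ref, det, pfa=1e-3):
    """Split the reference image into two Otsu regions and threshold the
    difference image per region at the (1 - pfa) empirical quantile."""
    z = det - ref                        # difference test-statistic image Zdif
    t = otsu_threshold(ref)
    labels = (ref > t).astype(int)       # region label per pixel
    mask = np.zeros(ref.shape, dtype=bool)
    for r in (0, 1):
        sel = labels == r
        if not sel.any():
            continue
        thr = np.quantile(z[sel], 1.0 - pfa)   # per-region detection threshold
        mask[sel] = z[sel] > thr
    return mask
```

Pixels whose difference value exceeds their region's threshold are declared change-detection points.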
Further, step S3 comprises:
In the one-dimensional Edgeworth method, each pixel of the observed area is taken in turn as the center of a neighborhood, and the probability density function of the gray values in that neighborhood is estimated with an Edgeworth expansion about a Gaussian probability density model. On this basis, the difference between the probability density functions at each point of the multi-epoch images is measured with Kullback-Leibler (K-L) divergence, giving a test-statistic image of probability density differences. The generalized Laguerre polynomial method instead estimates the neighborhood gray-value probability density with a generalized Laguerre expansion about a gamma probability density model, and likewise obtains a test-statistic image through the K-L divergence. Gray-value statistics of the test-statistic images show that the statistic curve drops sharply on the left, so the inflection point of that drop is taken as the test threshold; during detection, the density curve is fitted by least squares, and the horizontal-axis value at which the fitted curve reaches zero approximates the detection threshold. This yields the test-statistic images and detection thresholds of the remaining two sub-algorithms.
The specific process of step S3 is:
Both the one-dimensional Edgeworth method and the generalized Laguerre polynomial method first estimate the probability density distribution and then use the K-L divergence to compute the test-statistic image. For each pair of corresponding points in the detection and reference images, the mean, variance, third-order cumulant and fourth-order cumulant of the point's neighborhood are computed and substituted into the formula derived from the K-L divergence to obtain the probability-distribution difference value.
For the test-statistic images produced by the one-dimensional Edgeworth method and the generalized Laguerre polynomial method, the number of non-target points in the detection image far exceeds the number of target points. Gray-level statistics of the test-statistic image show the non-target points concentrated on the left with a sharply dropping trend and the target points concentrated on the right. The inflection point of the statistic curve is used as the detection threshold: the statistic curve of the test-statistic image is fitted on the right side to obtain a line y = ax + b, and the x value at which y = 0 is the detection threshold.
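The threshold-selection step just described (fit a line y = ax + b to the decaying side of the test-statistic histogram and take its x-intercept) can be sketched as follows. This covers only the thresholding stage, not the Edgeworth/Laguerre density estimation or the K-L divergence computation, and fitting everything to the right of the histogram mode is an assumption.

```python
import numpy as np

def tail_fit_threshold(stat_img, bins=128):
    """Fit a line y = a*x + b to the decaying right side of the histogram of a
    test-statistic image and use its x-intercept (where y = 0) as the
    detection threshold."""
    hist, edges = np.histogram(stat_img.ravel(), bins=bins, density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    peak = int(np.argmax(hist))
    x, y = centers[peak:], hist[peak:]   # decaying part right of the mode
    a, b = np.polyfit(x, y, 1)           # least-squares line; a < 0 for a decaying curve
    return -b / a                        # y = a*x + b crosses zero at x = -b/a
```

Values of the test statistic above this threshold are treated as candidate change points.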
Further, step S4 comprises:
The SVDD classifier first labels the change-detection statistics of changed and unchanged regions as target-class samples and outlier samples, then trains the SVDD on a training set built from the pre-extracted target-class samples, so as to construct in the kernel feature space a minimal hypersphere enclosing all training samples; this hypersphere is then used to classify the change-detection statistics of the observed scene. Building on SVDD, LE-SVDD defines as training samples the points at which all three sub-algorithm test-statistic images exceed their detection thresholds by more than the corresponding standard deviations, and as test samples the points at which only one or two of the test-statistic images do so. The training samples are fed to the LE-SVDD classifier for training, and the test samples are then discriminated into targets and non-targets; the result is the final change detection output.
The specific process of step S4 is:
For the detection image S2 and the reference image S1, let the test-statistic image of the image-segmentation difference method based on the improved clutter distribution model be I1 with detection threshold T1, and let the test-statistic images obtained by the one-dimensional Edgeworth method and the generalized Laguerre polynomial method be I2 and I3 with detection thresholds T2 and T3, respectively.
The sample data set is A = {(i1, i2, i3) | i1 ∈ I1, i2 ∈ I2, i3 ∈ I3}, where i1, i2 and i3 are the values of a corresponding point in the test-statistic images I1, I2 and I3.
The training data set is B = {(i1, i2, i3) | i1 - T1 > σ1, i2 - T2 > σ2, i3 - T3 > σ3}, where σ1, σ2 and σ3 are the standard deviations of the test-statistic images I1, I2 and I3, respectively.
The test data set is C = {(i1, i2, i3) | |i1 - T1| ≤ σ1 or |i2 - T2| ≤ σ2 or |i3 - T3| ≤ σ3}.
The known non-target region is D = {(i1, i2, i3) | i1 - T1 < σ1 or i2 - T2 < σ2 or i3 - T3 < σ3}.
The sequential minimal optimization (SMO) algorithm is used to train on the training data set B, and the trained LE-SVDD classifier then discriminates the test data set C, yielding the target-point information of the detection image and thereby achieving efficient, high-precision fusion change detection of low-frequency UWB SAR foliage-concealed targets.
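The construction of the sets B and C and the final discrimination can be sketched as follows. Note that the kernel SVDD trained by sequential minimal optimization is replaced here by a crude mean-centred enclosing sphere, purely to show the data flow; the function name and this stand-in classifier are assumptions, not the patented method.

```python
import numpy as np

def fuse_change_detection(stats, thresholds):
    """Fusion step of S4 over three test-statistic images I1..I3 with
    thresholds T1..T3.  Training set B: all three statistics exceed their
    thresholds by more than one standard deviation.  Test set C: at least one
    statistic lies within one standard deviation of its threshold.  A
    mean-centred enclosing sphere stands in for the kernel SVDD."""
    X = np.stack([s.ravel() for s in stats], axis=1)         # (n_pixels, 3)
    T = np.asarray(thresholds, dtype=float)
    sigma = X.std(axis=0)
    margin = X - T                                           # i_k - T_k per pixel
    train = np.all(margin > sigma, axis=1)                   # set B: clear targets
    test = np.any(np.abs(margin) <= sigma, axis=1) & ~train  # set C: borderline
    detected = train.copy()
    if train.any():
        c = X[train].mean(axis=0)                            # sphere centre
        r = np.linalg.norm(X[train] - c, axis=1).max()       # sphere radius
        detected[test] = np.linalg.norm(X[test] - c, axis=1) <= r
    return detected.reshape(stats[0].shape)
```

Borderline points are accepted as targets only if they fall inside the sphere enclosing the clear-target training samples, mirroring the SVDD decision rule.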
Compared with the prior art, the technical scheme of the present invention has the following beneficial effects:
The invention adopts LE-SVDD-based target fusion change detection, which effectively suppresses the many strong non-target scatterers present in low-frequency UWB SAR images and reduces false-alarm points during change detection. While maintaining high target detection performance it also improves detection efficiency, achieving efficient, high-precision detection in low-frequency UWB SAR images and recovering the positions of concealed targets. The method is suitable for change detection of military targets hidden in forests, such as missile launchers and tanks, to obtain information on changes in the deployment of foliage-concealed targets.
Brief Description of the Drawings
Figure 1 is a flow chart of the method of the present invention;
Figure 2 shows the detection image and reference image data used in the simulation experiments of the present invention;
Figure 3 shows the detection result of the image-segmentation difference method based on the improved clutter distribution model;
Figure 4 shows the detection results and statistical probability distributions of the Edgeworth method and the generalized Laguerre polynomial method;
Figure 5 shows the detection result of the improved low-frequency UWB SAR foliage-concealed target fusion change detection method.
Detailed Description
The accompanying drawings are for illustration only and are not to be construed as limiting this patent;
for better illustration of this embodiment, some parts in the drawings may be omitted, enlarged or reduced, and do not represent the dimensions of the actual product;
those skilled in the art will understand that some well-known structures and their descriptions may be omitted from the drawings.
The technical scheme of the present invention is further described below with reference to the drawings and embodiments.
As shown in Figure 1, an improved low-frequency UWB SAR foliage-concealed target fusion change detection method comprises the following steps:
S1: preprocess the detection image and the reference image;
S2: compute the test-statistic image and detection threshold of the image-segmentation difference method based on an improved clutter distribution model;
S3: compute the test-statistic images and detection thresholds of the one-dimensional Edgeworth method and the generalized Laguerre polynomial method;
S4: feed the three test-statistic images and their detection thresholds into the LE-SVDD classifier for training, then discriminate targets from non-targets on the test samples; the result is the final change detection output.
The improved low-frequency UWB SAR foliage-concealed target fusion change detection method of the present invention was verified through simulation experiments; the theoretical analysis and simulation results demonstrate its effectiveness.
The detection image and reference image used in the simulation experiments are shown in Figure 2. The data set is drawn from images acquired by the Swedish CARABAS-II VHF SAR system in the very-high-frequency band (20-90 MHz), with an imaging resolution of 2.5 m × 2.5 m. Twenty-five vehicle target points can be observed in the detection image; the reference image contains no target points.
As shown in Figure 3, the image-segmentation difference method based on the improved clutter distribution model is a pixel-level change detection method. A difference test-statistic image is obtained by subtracting, pixel by pixel, the images of different epochs. The difference image is then segmented, the probability density of each non-target region of the segmented image is estimated, and a false-alarm probability is set; the corresponding detection threshold is read off the probability density. Any value above the threshold is treated as a change-detection point. This yields the test-statistic image and detection threshold of the first sub-algorithm.
The specific process of step S2 is:
Let S1 and S2 be SAR intensity images observed at different times, where S1 is the target-free reference image and S2 is the detection image, and let Zdif = S2 - S1 be the difference test-statistic image formed from S1 and S2.
First, the reference image is segmented with the OTSU image segmentation algorithm. The equivalent number of looks of the reference image is set to 1 and the initial number of segments to N = 2. After each segmentation, if the minimum over all sub-regions of the squared error divided by the squared mean is smaller than the equivalent number of looks, segmentation stops; otherwise N = N + 1 and the image is segmented again, until the loop exits.
Next, the clutter distribution of each region is estimated. Let the clutter distribution of region i of the difference test-statistic image be Fi(zdif), i ∈ N, and set the false-alarm probability to 10^-8; the detection threshold of each region is then obtained from Fi(zdif) at this false-alarm probability. Change detection of newly appearing targets in image S2 by the image difference method declares a change point at a pixel whenever zdif = s2 - s1 > Tdif.
Here, zdif, s1 and s2 are the gray values of the same pixel in images Zdif, S1 and S2, and Tdif is the detection threshold set according to the false-alarm rate. As Figure 3 shows, the image-segmentation difference method based on the improved clutter distribution model has a good detection effect on the image and identifies most of the target points, but some target points are still missed.
As shown in Figure 4, in the one-dimensional Edgeworth method, each pixel of the observed area is taken in turn as the center of a neighborhood, and the probability density function of the gray values in that neighborhood is estimated with an Edgeworth expansion about a Gaussian probability density model. On this basis, the difference between the probability density functions at each point of the multi-epoch images is measured with K-L divergence, giving a test-statistic image of probability density differences. The generalized Laguerre polynomial method instead estimates the neighborhood gray-value probability density with a generalized Laguerre expansion about a gamma probability density model, and likewise obtains a test-statistic image through the K-L divergence. Gray-value statistics of the test-statistic images show that the statistic curve drops sharply on the left, so the inflection point of that drop is taken as the test threshold; during detection, the density curve is fitted by least squares, and the horizontal-axis value at which the fitted curve reaches zero approximates the detection threshold. This yields the test-statistic images and detection thresholds of the remaining two sub-algorithms.
The specific process of step S3 is:
Both the one-dimensional Edgeworth method and the generalized Laguerre polynomial method first estimate the probability density distribution and then use the K-L divergence to compute the test-statistic image. For each pair of corresponding points in the detection and reference images, the mean, variance, third-order cumulant and fourth-order cumulant of the point's neighborhood are computed and substituted into the formula derived from the K-L divergence to obtain the probability-distribution difference value.
For the test-statistic images produced by the one-dimensional Edgeworth method and the generalized Laguerre polynomial method, the number of non-target points in the detection image far exceeds the number of target points. Gray-level statistics of the test-statistic image show the non-target points concentrated on the left with a sharply dropping trend and the target points concentrated on the right. The inflection point of the statistic curve is used as the detection threshold: the statistic curve of the test-statistic image is fitted on the right side to obtain a line y = ax + b, and the x value at which y = 0 is the detection threshold. As Figure 4 shows, fitting the probability distribution curve and using its approximate inflection point as the detection threshold detects almost all targets and gives good detection performance, but the results contain some false-alarm points. Unlike the image-segmentation difference method based on the improved clutter distribution model, the Edgeworth and generalized Laguerre polynomial methods analyze differences in the statistical distribution characteristics of the images; however, change detection with a single detection statistic can hardly balance detection probability and false-alarm rate.
As shown in Figure 5, the SVDD classifier first labels the change-detection statistics of the changed and unchanged regions as target-class samples and outlier samples, then trains the SVDD on a training set formed from the pre-extracted target-class samples, so as to construct in the kernel feature space a minimum hypersphere enclosing all training samples; the change-detection statistics of the observed scene are then classified with respect to this hypersphere. Building on SVDD, LE-SVDD takes as training samples the points at which all three sub-algorithm test-statistic images exceed their respective detection thresholds by more than the corresponding standard deviations, and as test samples the points at which only one or two of the three test-statistic images exceed the detection threshold plus standard deviation. The training samples are fed into the LE-SVDD classifier for training, and the classifier then discriminates the test samples into target and non-target, which yields the final change-detection result.
The specific process of step S4 is as follows:
For the detection image S1 and the reference image S2, let the test-statistic image of the image segmentation difference method based on the improved clutter distribution model be I1 with detection threshold T1, and let the test-statistic images obtained by the one-dimensional Edgeworth method and the generalized Laguerre polynomial method be I2 and I3 with detection thresholds T2 and T3, respectively.
Construct the sample data set A={(i1,i2,i3) | i1∈I1, i2∈I2, i3∈I3}, where i1, i2 and i3 are corresponding points in the test-statistic images I1, I2 and I3.
Construct the training data set B={(i1,i2,i3) | i1−T1>σ1, i2−T2>σ2, i3−T3>σ3}, where σ1, σ2 and σ3 are the standard deviations of the test-statistic images I1, I2 and I3, respectively.
The test data set is C={(i1,i2,i3) | |i1−T1|≤σ1 or |i2−T2|≤σ2 or |i3−T3|≤σ3}.
The known non-target region is D={(i1,i2,i3) | i1−T1<σ1 or i2−T2<σ2 or i3−T3<σ3}.
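The three sets above can be written as boolean masks over the statistic images. This sketch makes one assumption for disjointness: the known non-target set D is taken as the complement of B ∪ C, which tightens the literal "or" condition given for D.

```python
import numpy as np

def partition_samples(I1, I2, I3, T1, T2, T3):
    """Boolean masks for the training set B, test set C and non-target set D.

    sigma_k is the standard deviation of statistic image I_k. D is taken here
    as the complement of B and C (an assumption, to keep the sets disjoint).
    """
    s1, s2, s3 = I1.std(), I2.std(), I3.std()
    d1, d2, d3 = I1 - T1, I2 - T2, I3 - T3
    B = (d1 > s1) & (d2 > s2) & (d3 > s3)   # all three exceed T + sigma
    C = (np.abs(d1) <= s1) | (np.abs(d2) <= s2) | (np.abs(d3) <= s3)  # ambiguous
    D = ~B & ~C                              # remaining pixels: non-target
    return B, C, D
```

B and C are disjoint by construction: a point with all three differences strictly above sigma cannot have any absolute difference at or below sigma.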
The sequential minimal optimization (SMO) algorithm is used to train on the training data set B, and the trained LE-SVDD classifier is then used to discriminate the test data set C, thereby obtaining the target-point information in the detection image and realizing efficient, high-precision fusion change detection of foliage-concealed targets in low-frequency UWB SAR. As can be seen from Figure 5, the present invention has a good detection effect on low-frequency UWB SAR images and can detect the change information between SAR images of different phases; while maintaining a given detection probability, it reduces the false-alarm rate, and its detection speed is faster than that of the traditional LE-SVDD-based fusion change detection method for foliage-concealed targets. The method of the present invention is therefore an efficient, high-precision change detection method for low-frequency UWB SAR images.
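The patent solves the LE-SVDD dual with sequential minimal optimization. As an illustration only, the hypersphere idea can be approximated without any optimizer by weighting every training sample equally (alpha_i = 1/n), which places the sphere centre at the kernel-space centroid; the kernel width and the radius rule below are assumptions, not the patent's solver.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """Gaussian RBF kernel matrix between row-vector sets a (n,d) and b (m,d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def svdd_predict(train, test, gamma=0.5):
    """Crude SVDD sketch: instead of solving the dual with SMO, use uniform
    weights alpha_i = 1/n, i.e. the hypersphere centre is the kernel-space
    centroid. A sample is labelled target (+1) if its squared distance to the
    centre does not exceed the largest training-sample distance."""
    n = len(train)
    k_tr = rbf(train, train, gamma)
    const = k_tr.sum() / n**2                     # ||centre||^2 term
    # squared kernel-space distance of each training point to the centre
    d_tr = 1.0 - 2.0 * k_tr.mean(axis=1) + const  # k(x,x) = 1 for the RBF
    radius2 = d_tr.max()                          # radius from the training set
    d_te = 1.0 - 2.0 * rbf(test, train, gamma).mean(axis=1) + const
    return np.where(d_te <= radius2, 1, -1)       # +1 target, -1 non-target
```

In the patent's pipeline, `train` would hold the (i1,i2,i3) triples of set B and `test` those of set C; the +1 labels then mark the detected change points.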
The same or similar reference numerals correspond to the same or similar components.
The positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting this patent.
Apparently, the above embodiments of the present invention are merely examples given to illustrate the present invention clearly, and are not intended to limit the embodiments of the present invention. Those of ordinary skill in the art may make changes or variations in other forms on the basis of the above description. It is neither necessary nor possible to exhaustively list all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the claims of the present invention.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110407968.6A CN113159157B (en) | 2021-04-15 | 2021-04-15 | Improved low-frequency UWB SAR foliage covert target fusion change detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113159157A CN113159157A (en) | 2021-07-23 |
CN113159157B true CN113159157B (en) | 2023-07-25 |
Family
ID=76868045
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110407968.6A Expired - Fee Related CN113159157B (en) | 2021-04-15 | 2021-04-15 | Improved low-frequency UWB SAR foliage covert target fusion change detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113159157B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113870193A (en) * | 2021-09-06 | 2021-12-31 | 中山大学 | Low-frequency UWB SAR image target change detection method based on deep learning |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102005049A (en) * | 2010-11-16 | 2011-04-06 | 西安电子科技大学 | Unilateral generalized gaussian model-based threshold method for SAR (Source Address Register) image change detection |
CN102968790A (en) * | 2012-10-25 | 2013-03-13 | 西安电子科技大学 | Remote sensing image change detection method based on image fusion |
CN105005983A (en) * | 2015-04-13 | 2015-10-28 | 西南科技大学 | SAR image background clutter modeling and target detection method |
CN106023229A (en) * | 2016-06-02 | 2016-10-12 | 中国矿业大学 | SAR image change detection method by combining half Gauss model and Gauss model |
CN109919910A (en) * | 2019-01-25 | 2019-06-21 | 合肥工业大学 | SAR image change detection method based on difference map fusion and improved level set |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20230725 |