CN109002848A - Weak and small target detection method based on a feature mapping neural network - Google Patents

A weak and small target detection method based on a feature mapping neural network

Info

Publication number
CN109002848A
Authority
CN
China
Prior art keywords
neural network
target
detection
weak
spindle
Prior art date
Legal status
Granted
Application number
CN201810729648.0A
Other languages
Chinese (zh)
Other versions
CN109002848B (en)
Inventor
谢春芝
高志升
Current Assignee
Jiangxi Chengan Technology Co ltd
Shenzhen Wanzhida Technology Co ltd
Original Assignee
Xihua University
Priority date
Filing date
Publication date
Application filed by Xihua University filed Critical Xihua University
Priority to CN201810729648.0A
Publication of CN109002848A
Application granted
Publication of CN109002848B
Current legal status: Active


Classifications

    • G (Physics) > G06 (Computing; Calculating or Counting) > G06F (Electric Digital Data Processing) > G06F18/00 Pattern recognition > G06F18/21 Design or setup of recognition systems or techniques > G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G (Physics) > G06 > G06F > G06F18/00 Pattern recognition > G06F18/21 > G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G (Physics) > G06 > G06V (Image or Video Recognition or Understanding) > G06V10/00 Arrangements for image or video recognition or understanding > G06V10/20 Image preprocessing > G06V10/30 Noise filtering
    • G (Physics) > G06 > G06V > G06V2201/00 Indexing scheme relating to image or video recognition or understanding > G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a weak and small target detection method based on a feature mapping neural network, relating to the field of weak and small target detection. The method comprises: Step 1, constructing and training a spindle-type deep neural network; Step 2, inputting an acquired weak-target image into the trained spindle-type deep neural network to obtain an amplitude map with the target enhanced and the background suppressed; Step 3, applying a constant false alarm rate (CFAR) method to the amplitude map to complete weak and small target detection. By adopting a spindle-shaped network structure, the invention strengthens the representational power of the network and solves the problem that noise and interference degrade the detection accuracy of existing weak and small target detectors, thereby improving detection accuracy for weak and small targets in high-noise environments.

Description

A weak and small target detection method based on a feature mapping neural network

Technical Field

The present invention relates to the field of weak and small target detection, and in particular to a weak and small target detection method based on a feature mapping neural network.

Background Art

Passive millimeter-wave (PMMW) and infrared imaging are radiation-free and highly penetrating, and their military applications have attracted growing attention; studying weak and small target detection under millimeter-wave and infrared imaging is therefore of great significance. Weak and small target detection technology has developed rapidly in recent years. A weak and small target is a target with a diameter of 3-5 pixels, yet high-precision detection of such targets under millimeter-wave and infrared imaging still faces great difficulties. First, the imaging distance is generally long, so the detected target area is small, the signal-to-noise ratio is low, and no texture features can be extracted. Second, target imaging is usually disturbed by complex backgrounds: heavy clutter and noise, together with edge structures such as cloud edges, the sea-sky baseline, and building edges, cause the target to be submerged in the background.

For weak and small target detection, academia has proposed a series of methods in recent years. Background suppression is the most common approach: it estimates the background of the image to be detected and performs target detection on that basis. It can be divided into three classes. The first class is filtering-based methods, which estimate the background by image filtering and thereby enhance the target; they suppress the background well when the background is simple, but when the background is complex and the signal-to-noise ratio is low, the false alarm probability rises and the detection accuracy drops. The second class is regression-based methods, which split into linear and nonlinear regression. Classical linear regression relies on a specific background clutter model and on estimating the parameters of an assumed model, whereas nonlinear regression relies only on the data itself to estimate the regression function; the kernel regression algorithm NRRKR is a typical nonlinear method. In practice, because prior knowledge of background clutter is lacking, nonlinear regression is better suited to detecting weak and small targets against complex backgrounds, but it has an obvious drawback: every local region requires multiple regression iterations, so the overall algorithm is extremely inefficient. The third class suppresses the background and enhances the target according to local contrast differences; such methods detect well when the background is simple, but under complex backgrounds they tend to produce more false alarms and are easily affected by noise.

Besides background suppression, there are detection methods based on machine learning, which treat target detection as a pattern classification problem: the target and the background are modeled by separate training, and a decision rule then determines whether a sub-block of the test image contains a target; examples include NLPCA, SPCA, and FLD. Later, the emergence of sparse representation theory brought new approaches to weak and small target detection. The infrared weak and small target detection algorithm SR, based on sparse image representation, uses a binary Gaussian model to generate a target dictionary and then judges the target position from the difference between the sparse coefficients of background sub-blocks and target sub-blocks in that dictionary. As a typical structured over-complete dictionary, the Gaussian dictionary is only suitable for weak targets with a Gaussian distribution; for unstructured targets, the sparse representation coefficients are not sufficient to separate targets from background clutter. Wang et al. later proposed a multi-scale, adaptive-morphology sparse dictionary that describes different image components with atoms of different sizes, capturing finer local features and improving detection accuracy. Chen et al. then proposed a sparsity-based method that manually constructs a discriminative double dictionary by offline learning to enlarge the difference in sparse representations. A further method, the space-time joint sparse reconstruction algorithm STCSR for weak and small moving targets, first builds an adaptive over-complete space-time dictionary by learning the content of an image sequence, then extracts a target space-time dictionary and a background space-time dictionary from it with a multivariate Gaussian model, sparsely reconstructs multiple frames in both dictionaries, and uses the reconstruction difference to distinguish target from background. To address the shortcomings of this method in dictionary learning, an improved ISR method was proposed with a salient background-and-target double-dictionary construction that models target and background better. All of these methods improve detection accuracy to some extent, but they share two drawbacks: they are easily disturbed by noise, and the sparse features of target and background differ too little and mix easily, which increases the difficulty of detection.

Therefore, a weak and small target detection method is needed that can overcome the noise affecting weak and small targets while improving detection accuracy.

Summary of the Invention

The object of the present invention is to provide a weak and small target detection method based on a feature mapping neural network, which solves the problem that noise and interference cause low detection accuracy for existing weak and small targets.

The technical scheme adopted by the present invention is as follows:

A weak and small target detection method based on a feature mapping neural network comprises the following steps:

Step 1: construct and train a spindle-type deep neural network;

Step 2: input the acquired weak-target image into the trained spindle-type deep neural network to obtain an amplitude map with the target enhanced and the background suppressed;

Step 3: apply the constant false alarm rate (CFAR) method to the amplitude map to complete weak and small target detection.

Preferably, step 1 comprises the following steps:

Step 1.1: construct the structure of the spindle-type deep neural network, comprising an input layer, a decoding layer, an encoding layer, and a softmax output layer;

Step 1.2: determine the hyperparameters of the spindle-type deep neural network by cross-validation to obtain the spindle-type deep neural network;

Step 1.3: construct the training data set;

Step 1.4: feed the training data set into the spindle-type deep neural network and train it in an unsupervised manner to obtain the initialization network weights, completing training.

Preferably, the decoding layer and the encoding layer are trained with the following computation:

h_k = σ(W_k·X + b_k)

where W_k denotes the weight matrix, b_k the bias vector, σ the activation function, X = {x_1, x_2, ..., x_m} the input of the current layer, and h_k the output of the current layer.
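For illustration, the following NumPy sketch runs this layer-by-layer computation through a spindle-shaped stack of fully connected layers. The layer sizes [81, 512, 256, 121, 81, 1] are those reported in Embodiment 1; the sigmoid activation, the random initialization, and the function names are assumptions, not the patented implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_spindle(sizes=(81, 512, 256, 121, 81, 1), seed=0):
    """Randomly initialize weight matrices W_k and bias vectors b_k
    for a spindle-shaped (expand-then-contract) fully connected network."""
    rng = np.random.default_rng(seed)
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_out, n_in))
        b = np.zeros(n_out)
        params.append((W, b))
    return params

def forward(params, x):
    """Apply h_k = sigma(W_k x + b_k) layer by layer; the final scalar
    plays the role of the window's target-probability response."""
    h = x
    for W, b in params:
        h = sigmoid(W @ h + b)
    return h

# Example: one 9x9 detection window flattened to an 81-dimensional vector.
params = init_spindle()
window = np.random.rand(81)
print(forward(params, window))  # array with one value in (0, 1)
```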

Preferably, step 2 comprises the following steps:

Step 2.1: after the acquired weak-target image is input into the trained spindle-type deep neural network, set the label of weak-target samples to 1 and the label of background samples to 0;

Step 2.2: discriminate the weak-target image with a sliding window to obtain the probability of a weak target at each position in the image;

Step 2.3: use the output-layer logistic regression results, i.e. the probability values, as the response amplitudes of the window coordinate points to obtain the amplitude map of target enhancement and background suppression for the weak targets.
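As an illustration of this sliding-window scoring, the sketch below builds the amplitude map by running every window of the image through the network of the previous sketch (it reuses forward and params from that block); the window size and unit stride are assumptions, not specified by the claim.

```python
import numpy as np

def amplitude_map(image, params, win=9):
    """Slide a win x win window over the image and store the network's
    target probability at each window's top-left coordinate."""
    H, W = image.shape
    out = np.zeros((H - win + 1, W - win + 1))
    for i in range(H - win + 1):
        for j in range(W - win + 1):
            patch = image[i:i + win, j:j + win].reshape(-1)
            out[i, j] = forward(params, patch)  # probability in (0, 1)
    return out
```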

Preferably, step 3 comprises the following steps:

Step 3.1: for each value in the amplitude map, extract a sub-block with a sliding window for detection, and input it into the spindle-type deep neural network to obtain its amplitude;

Step 3.2: apply the constant false alarm rate criterion to the statistics of each detected amplitude to obtain the false alarm probability;

where T denotes the threshold of the likelihood-ratio test, p denotes the number of points in the sliding-window sub-block whose mean is used in the statistic, P_fa denotes the false alarm probability set for CFAR detection, τ_CFAR denotes the detection threshold, and F_{1,p-1}(τ_CFAR) denotes the cumulative distribution function of the central F random variable;

Step 3.3: compute the total number of candidate targets from the false alarm probability, sort the false alarm probabilities from high to low, and detect and locate the targets among the amplitudes.
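The sketch below shows one common way such a CFAR threshold could be derived and applied to the amplitude map. The relation P_fa = 1 - F_{1,p-1}(tau_CFAR), the simple normalized test statistic, the sorting criterion, and the function names are assumptions for illustration, since the patent's own formulas are given only as figures.

```python
import numpy as np
from scipy.stats import f

def cfar_detect(amp_map, p=81, pfa=1e-4):
    """Illustrative CFAR on the amplitude map: choose tau so that a central
    F(1, p-1) statistic exceeds it with probability pfa (p = 81 points for
    a 9x9 window), then keep positions whose normalized response exceeds it."""
    tau_cfar = f.ppf(1.0 - pfa, 1, p - 1)        # assumed Pfa = 1 - F_{1,p-1}(tau)
    mu, sigma = amp_map.mean(), amp_map.std() + 1e-12
    stat = ((amp_map - mu) / sigma) ** 2         # simple squared-deviation statistic
    ys, xs = np.nonzero(stat > tau_cfar)
    # Sort candidate positions by response strength, strongest first.
    order = np.argsort(-amp_map[ys, xs])
    return list(zip(ys[order], xs[order]))
```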

Preferably, constructing the training data set in step 1.3 comprises the following steps:

Step 1.3.1: randomly generate coordinate points in images containing no weak targets as simulated target locations, and extract N*N window regions as background samples;

Step 1.3.2: in a background sample, add a simulated target produced by a two-dimensional Gaussian intensity model to form a target sample; the two-dimensional Gaussian model is as follows:

where (x_0, y_0) denotes the center position of the target image, s(i,j) the pixel value of the target image at position (i,j), s_E the intensity of the generated target, a random number in (0,1], and σ_x and σ_y the horizontal and vertical spread parameters, with values in [0,2];
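The two-dimensional Gaussian model itself is given only as a figure in the original; a standard form consistent with the parameters just listed (a reconstruction, not the patent's typeset equation) is:

```latex
s(i, j) = s_E \cdot \exp\!\left( -\frac{1}{2}\left[ \frac{(i - x_0)^2}{\sigma_x^{2}} + \frac{(j - y_0)^2}{\sigma_y^{2}} \right] \right)
```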

Step 1.3.3: adjust the parameters of the target samples to generate weak targets with different signal-to-noise ratios, completing construction of the training data set.
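A minimal sketch of this sample-generation step follows, assuming the reconstructed Gaussian above, a 9*9 window as in Embodiment 1, and NumPy; the function names and random draws are illustrative, not taken from the patent.

```python
import numpy as np

def make_target_patch(win=9, rng=np.random.default_rng()):
    """Simulate a weak target with a 2-D Gaussian intensity profile."""
    x0 = y0 = win // 2                       # place the target at the window center
    s_e = 1.0 - rng.random()                 # target intensity s_E in (0, 1]
    sx, sy = rng.uniform(0.0, 2.0, size=2)   # spread parameters in [0, 2]
    i, j = np.meshgrid(np.arange(win), np.arange(win), indexing="ij")
    return s_e * np.exp(-0.5 * (((i - x0) ** 2) / (sx ** 2 + 1e-12)
                                + ((j - y0) ** 2) / (sy ** 2 + 1e-12)))

def make_samples(background_image, n, win=9, rng=np.random.default_rng()):
    """Cut random background windows (label 0) and inject simulated
    targets into copies of them (label 1)."""
    H, W = background_image.shape
    backgrounds, targets = [], []
    for _ in range(n):
        y, x = rng.integers(0, H - win), rng.integers(0, W - win)
        bg = background_image[y:y + win, x:x + win].astype(float)
        backgrounds.append(bg)                              # background sample
        targets.append(bg + make_target_patch(win, rng))    # target sample
    return backgrounds, targets
```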

In summary, owing to the adoption of the above technical scheme, the beneficial effects of the present invention are:

1. The present invention adopts a spindle-shaped network structure: it first maps the low-dimensional features of a weak-target block into a high-dimensional space, then extracts high-resolution features with an encoding neural network to discriminate between background and target, obtains a background-suppressed, target-enhanced image from the strength of the network's discrimination output, and finally completes weak and small target detection with a CFAR-based detection method. This solves the problem that noise and interference degrade the detection accuracy of existing weak and small targets, strengthens the representational power of the network, and improves detection accuracy for weak and small targets in high-noise environments;

2. The present invention detects millimeter-wave and infrared images in different scenes and performs a decoding operation at the front end of the network, giving the network stronger representational power: the pixel features of a weak-target region are first mapped into a high-dimensional feature space and then encoded into a low-dimensional feature space that is easy to discriminate, achieving a lower false alarm rate, higher detection accuracy, and stronger robustness;

3. The network of the present invention is pre-trained in an unsupervised manner, which allows a deeper structure to be built; the initialization weights obtained by unsupervised learning improve network stability and avoid the local-minimum problem of training a deep neural network directly. At the same time, unsupervised learning captures the internal features of a series of related data sets and removes redundant components of the input data, which helps obtain highly discriminable features and further improves the discrimination accuracy of the network;

4. The CFAR method of the present invention uses a sliding block for detection: assuming, as in practice, that two targets are separated by a certain distance, performing CFAR detection with a sliding block both speeds up detection and helps improve detection accuracy.

Brief Description of the Drawings

To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present invention and should therefore not be regarded as limiting its scope; those of ordinary skill in the art can obtain other related drawings from these drawings without creative effort.

Fig. 1 is a schematic diagram of the spindle-type deep neural network structure of the present invention;

Fig. 2 is a schematic diagram of a specific implementation of the spindle-type deep neural network of the present invention;

Fig. 3 is a two-dimensional plot of the training data of the present invention;

Fig. 4 is a three-dimensional mapping of the training data of the present invention;

Fig. 5 is a two-dimensional mapping of the training data of the present invention;

Fig. 6 is a schematic diagram of the training data of the present invention;

Fig. 7 is a schematic diagram of the simulated test data set of the present invention;

Fig. 8 is a schematic diagram of detection results on the simulated images after background suppression;

Fig. 9 is a plot of the relationship between detection probability P_d and false alarm rate P_fa for SNR < 10;

Fig. 10 is a plot of the relationship between detection probability P_d and false alarm rate P_fa for 10 < SNR < 20;

Fig. 11 is a plot of the relationship between detection probability P_d and false alarm rate P_fa for 20 < SNR < 30;

Fig. 12 is a plot of the relationship between detection probability P_d and false alarm rate P_fa for 30 < SNR < 40;

Fig. 13 is a plot of the variation of P_d with SNR at P_fa = 1e-4;

Fig. 14 is a plot of the variation of P_d with SNR at P_fa = 10e-4;

Fig. 15 is a plot of the variation of P_d with SNR at P_fa = 20e-4;

Fig. 16 is a plot of the variation of P_d with SNR at P_fa = 30e-4;

Fig. 17 is an enlarged schematic view of the detection results of the DL algorithm of the present invention.

Detailed Description of the Embodiments

To make the purpose, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention and not to limit it; that is, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the drawings here may be arranged and designed in a variety of different configurations.

Therefore, the following detailed description of the embodiments of the present invention provided in the drawings is not intended to limit the scope of the claimed invention but merely represents selected embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

It should be noted that relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include", or any variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device comprising that element.

Technical problem: to solve the problem that noise and interference cause low detection accuracy for existing weak and small targets.

Technical means:

A weak and small target detection method based on a feature mapping neural network comprises the following steps:

Step 1: construct and train a spindle-type deep neural network;

Step 2: input the acquired weak-target image into the trained spindle-type deep neural network to obtain an amplitude map with the target enhanced and the background suppressed;

Step 3: apply the constant false alarm rate (CFAR) method to the amplitude map to complete weak and small target detection.

Step 1 comprises the following steps:

Step 1.1: construct the structure of the spindle-type deep neural network, comprising an input layer, a decoding layer, an encoding layer, and a softmax output layer;

Step 1.2: determine the hyperparameters of the spindle-type deep neural network by cross-validation to obtain the spindle-type deep neural network;

Step 1.3: construct the training data set;

Step 1.4: feed the training data set into the spindle-type deep neural network and train it in an unsupervised manner to obtain the initialization network weights, completing training.

The decoding layer and the encoding layer are trained with the following computation:

h_k = σ(W_k·X + b_k)

where W_k denotes the weight matrix, b_k the bias vector, σ the activation function, X = {x_1, x_2, ..., x_m} the input of the current layer, and h_k the output of the current layer.

Step 2 comprises the following steps:

Step 2.1: after the acquired weak-target image is input into the trained spindle-type deep neural network, set the label of weak-target samples to 1 and the label of background samples to 0;

Step 2.2: discriminate the weak-target image with a sliding window to obtain the probability of a weak target at each position in the image;

Step 2.3: use the output-layer logistic regression results, i.e. the probability values, as the response amplitudes of the window coordinate points to obtain the amplitude map of target enhancement and background suppression for the weak targets.

Step 3 comprises the following steps:

Step 3.1: for each value in the amplitude map, extract a sub-block with a sliding window for detection, and input it into the spindle-type deep neural network to obtain its amplitude;

Step 3.2: apply the constant false alarm rate criterion to the statistics of each detected amplitude to obtain the false alarm probability;

where T denotes the threshold of the likelihood-ratio test, p denotes the number of points in the sliding-window sub-block whose mean is used in the statistic, P_fa denotes the false alarm probability set for CFAR detection, τ_CFAR denotes the detection threshold, and F_{1,p-1}(τ_CFAR) denotes the cumulative distribution function of the central F random variable;

Step 3.3: compute the total number of candidate targets from the false alarm probability, sort the false alarm probabilities from high to low, and detect and locate the targets among the amplitudes.

Constructing the training data set in step 1.3 comprises the following steps:

Step 1.3.1: randomly generate coordinate points in images containing no weak targets as simulated target locations, and extract N*N window regions as background samples;

Step 1.3.2: in a background sample, add a simulated target produced by a two-dimensional Gaussian intensity model to form a target sample; the two-dimensional Gaussian model is as follows:

where (x_0, y_0) denotes the center position of the target image, s(i,j) the pixel value of the target image at position (i,j), s_E the intensity of the generated target, a random number in (0,1], and σ_x and σ_y the horizontal and vertical spread parameters, with values in [0,2];

Step 1.3.3: adjust the parameters of the target samples to generate weak targets with different signal-to-noise ratios, completing construction of the training data set.

Technical effects:

The present invention adopts a spindle-shaped network structure: it first maps the low-dimensional features of weak-target blocks into a high-dimensional space, then extracts high-resolution features with an encoding neural network to discriminate between background and target, obtains a background-suppressed, target-enhanced image from the strength of the network's discrimination output, and finally completes weak and small target detection with a CFAR-based detection method. This solves the problem that noise and interference degrade the detection accuracy of existing weak and small targets, strengthens the representational power of the network, and improves the detection accuracy of weak and small targets in high-noise environments.

The characteristics and performance of the present invention are described in further detail below in conjunction with the embodiments.

Embodiment 1

The training data of the network are shown in Fig. 3: the small circles represent one class of samples and the x points represent another class. The training samples are not linearly separable and are difficult to discriminate. After the deep network is trained on these samples, the features output by the second and third layers are as shown in Fig. 4: once the linearly inseparable two-dimensional features are mapped into three dimensions by the neural network, they become linearly separable in the three-dimensional feature space. After the three-dimensional features are re-encoded into two dimensions, as shown in Fig. 5, the originally linearly inseparable low-dimensional features have been mapped into a linearly separable feature space, and the blue x samples are distributed within a compact region. As shown in Fig. 2, the whole network contains six layers and can be divided into two parts: from the input layer to the second layer is the decoding layer, which raises the feature dimension and maps low-dimensional features to high-dimensional ones; layers two to five form a typical sparse autoencoder that extracts abstract high-level features by encoding; and the last layer is the softmax output layer. The main difference between the network model proposed here and a typical deep neural network model is the decoding layer from the first to the second layer: a traditional deep neural network mainly processes high-dimensional data and extracts abstract, dimension-reduced features, whereas, given that the weak-target detection window is small and of low dimension, a decoding operation is performed first at the front end of the network, giving the network stronger representational power and higher detection accuracy for weak and small targets in high-noise environments. The spindle-type deep neural network is constructed and trained as follows. Step 1.1: construct the structure of the spindle-type deep neural network, comprising an input layer, a decoding layer, an encoding layer, and a softmax output layer. Step 1.2: determine the hyperparameters of the spindle-type deep neural network by cross-validation. Step 1.3: construct the training data set. Step 1.4: feed the training data set into the spindle-type deep neural network and train it in an unsupervised manner to obtain the initialization network weights, completing training. The layer sizes of the trained network model are [81, 512, 256, 121, 81, 1], where the input layer {I1, I2, ..., IN} is the linear arrangement of the pixels of the weak-target detection window in the image. For the feature transformation we use a sparse autoencoder, computed as follows:

where x^(i) denotes the activation of hidden neuron j of the autoencoder network for a given input x and is averaged to give the mean activation of hidden neuron j; ρ denotes the sparsity parameter and β a hyperparameter.
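The sparsity penalty itself appears only as a figure in the original; the standard sparse-autoencoder formulation consistent with these symbols is reconstructed below, where ρ̂_j denotes the average activation of hidden neuron j over the m training inputs, a_j(x^(i)) its activation for input x^(i), and J the ordinary reconstruction cost (these auxiliary symbols are assumptions):

```latex
\hat{\rho}_j = \frac{1}{m}\sum_{i=1}^{m} a_j\!\left(x^{(i)}\right), \qquad
J_{\mathrm{sparse}} = J + \beta \sum_{j} \left[\, \rho \log\frac{\rho}{\hat{\rho}_j} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j} \,\right]
```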

ρ is set to 0.5 and β to 3; the network parameters are learned by gradient descent with a learning rate of 0.01. During training, weak-target samples are labeled 1 and background samples 0. All regions of the image are fed into the network for discrimination with a sliding window, and the output-layer logistic regression result is taken as the response amplitude of the window coordinate point; the larger this value, the higher the probability that the detection window contains a weak target. The amplitude map of target enhancement and background suppression obtained with the spindle-type deep neural network is denoted I_o, and weak and small target detection is completed on the amplitude map with the CFAR method. The training data set is obtained by simulation: weak targets are artificially added to 220 images that contain no weak targets to construct the training set. In each image, coordinate points are generated at random and 9*9 regions are extracted as background samples; in each background sample a simulated target produced by a two-dimensional Gaussian intensity model is added to form a target sample. The two-dimensional Gaussian model is as follows:

where (x_0, y_0) denotes the center position of the target image, s(i,j) the pixel value of the target image at position (i,j), s_E the intensity of the generated target, a random number in (0,1], and σ_x and σ_y the horizontal and vertical spread parameters, with values in [0,2].

Weak targets with different signal-to-noise ratios are generated by adjusting the parameters. The SNRs of the weak targets generated here lie in [0-120], with the samples distributed roughly uniformly over this range; in total there are 26,400 positive samples and 26,400 negative samples. Some of the generated training samples are shown in Fig. 6, and part of the simulated test data set is shown in Fig. 7.

The test sets comprise a simulated test set and a real-data test set; the image data include millimeter-wave images and infrared images, the infrared images coming from several data sets. The simulated test set uses the same weak-target simulation method as the training data: weak targets with different signal-to-noise ratios are added at random to background images. In total, 1,920 weak targets are added to 32 images, with SNRs again nearly uniformly distributed over [0-120] dB. The test data are fed into the trained spindle-shaped neural network: the network first maps the low-dimensional features of weak-target blocks into a high-dimensional space, then extracts high-resolution features with the encoding network to discriminate between background and target; a background-suppressed, target-enhanced image is obtained from the strength of the network's discrimination output, and finally a CFAR-based detection method completes weak and small target detection, effectively improving the detection accuracy of weak and small targets.

Embodiment 2

Step 1 comprises the following steps:

Step 1.1: construct the structure of the spindle-type deep neural network, comprising an input layer, a decoding layer, an encoding layer, and a softmax output layer;

Step 1.2: determine the hyperparameters of the spindle-type deep neural network by cross-validation to obtain the spindle-type deep neural network;

Step 1.3: construct the training data set;

Step 1.4: feed the training data set into the spindle-type deep neural network and train it in an unsupervised manner to obtain the initialization network weights, completing training.

The decoding layer and the encoding layer are trained with the following computation:

h_k = σ(W_k·X + b_k)

where W_k denotes the weight matrix, b_k the bias vector, σ the activation function, X = {x_1, x_2, ..., x_m} the input of the current layer, and h_k the output of the current layer.

Step 2 comprises the following steps:

Step 2.1: after the acquired weak-target image is input into the trained spindle-type deep neural network, set the label of weak-target samples to 1 and the label of background samples to 0; the image is modeled as follows:

where s denotes the image acquired by the sensor, s_t the target signal, s_b the background signal, and n the noise.
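The image model itself appears only as a figure in the original; the additive form consistent with these symbols would be (a reconstruction):

```latex
s = s_t + s_b + n
```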

Step 2.2: discriminate the weak-target image with a sliding window to obtain the probability of a weak target at each position in the image;

Step 2.3: use the output-layer logistic regression results, i.e. the probability values, as the response amplitudes of the window coordinate points to obtain the amplitude map of target enhancement and background suppression for the weak targets.

Step 3 comprises the following steps:

Step 3.1: for each value in the amplitude map, extract a sub-block with a sliding window for detection, and input it into the network to obtain its amplitude;

Step 3.2: apply the constant false alarm rate criterion to the statistics of each detected amplitude to obtain the false alarm probability;

where T denotes the threshold of the likelihood-ratio test, p denotes the number of points in the sliding-window sub-block whose mean is used in the statistic, P_fa denotes the false alarm probability set for CFAR detection, τ_CFAR denotes the detection threshold, and F_{1,p-1}(τ_CFAR) denotes the cumulative distribution function of the central F random variable;

Step 3.3: compute the total number of candidate targets from the false alarm probability, sort the false alarm probabilities from high to low, and detect and locate the targets among the amplitudes.

Constructing the training data set in step 1.3 comprises the following steps:

Step 1.3.1: randomly generate coordinate points in images containing no weak targets as simulated target locations, and extract N*N window regions as background samples;

Step 1.3.2: in a background sample, add a simulated target produced by a two-dimensional Gaussian intensity model to form a target sample; the two-dimensional Gaussian model is as follows:

where (x_0, y_0) denotes the center position of the target image, s(i,j) the pixel value of the target image at position (i,j), s_E the intensity of the generated target, a random number in (0,1], and σ_x and σ_y the horizontal and vertical spread parameters, with values in [0,2];

Step 1.3.3: adjust the parameters of the target samples to generate weak targets with different signal-to-noise ratios, completing construction of the training data set.

The accuracy of the present application is demonstrated by comparing its detection results with those of several other mainstream algorithms. Two kinds of curves are used as evaluation indices. The first is the ROC curve, which in target detection reflects the relationship between the detection probability P_d and the false alarm rate P_fa; the larger the area under the ROC curve, the better the detection performance. P_d and P_fa are computed as follows:

where N_t denotes the number of correctly detected targets, N_a the total number of targets, N_f the number of falsely detected targets, and N the number of all pixels in the image.
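The formulas for P_d and P_fa are shown only as figures in the original; definitions consistent with the variables just listed (a reconstruction, not the patent's typeset equations) are:

```latex
P_d = \frac{N_t}{N_a}, \qquad P_{fa} = \frac{N_f}{N}
```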

The second kind of curve shows how the detection probability P_d varies with the signal-to-noise ratio SNR: as SNR increases, P_d gradually grows and finally approaches 1. The SNR is computed as:

where g_t denotes the mean of the pixels in the local target region, and g_b and σ_b denote the mean and standard deviation of the pixels in the local background region, respectively.
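The SNR formula likewise appears only as a figure; a commonly used local-contrast definition consistent with these symbols (an assumption, not the patent's typeset equation) is:

```latex
\mathrm{SNR} = \frac{\lvert g_t - g_b \rvert}{\sigma_b}
```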

The mainstream algorithms compared are ACSDM, CSCD, SR, ISTCR, and ISTCSR-CSCD; the method of the present application is denoted DL. In Fig. 8 the panels from left to right correspond to ACSDM, CSCD, SR, ISTCR, ISTCSR-CSCD, and DL. The solid green boxes mark the true target positions and the red boxes mark the detector outputs; when an output coincides with the true position, the red box covers the green one and only the red box is visible, whereas a target showing only a green box has been missed by the detector at the specified false alarm rate. The figure shows that, compared with the other algorithms, the DL algorithm detects the most true targets and misses the fewest;

The quantitative results of the six algorithms are shown in Figs. 9-12, which plot the relationship between detection probability P_d and false alarm rate P_fa for different SNR ranges (SNR<10, 10<SNR<20, 20<SNR<30, 30<SNR<40). The solid line marked with stars is the result of the DL algorithm proposed here; over the four SNR ranges it is, on the whole, better than the other five algorithms. For SNR<10 and P_fa = 1×10^-4, the ISTCR algorithm is better than our method and gives the best result; in all other cases our method DL has better detection accuracy. For 10<SNR<20, the detection rate of our algorithm is far higher than that of the other methods at every false alarm rate, about 20% above the second-ranked method CSCD; for 20<SNR<30 it is about 12% above the best comparable method; and for 30<SNR<40 it is also clearly better than comparable methods, by roughly 8%;

Figs. 13-16 show how P_d varies with SNR at the same P_fa; the solid line marked with stars represents the DL algorithm proposed in this application. Under all four constant false alarm rates the DL algorithm obtains the best results; in particular, for P_fa > 10×10^-4 our method is on average 20% above the best comparable method. At P_fa = 1×10^-4 the DL, ISTCR, and ISTCR-CSCD detectors perform similarly but are clearly better than the other three algorithms. Fig. 17 is an enlarged schematic view of the DL detection results; because the detection uses color to distinguish the true-position boxes from the detection-output boxes, the effect may not be obvious after conversion to black and white as required by patent law, and color figures can be provided if necessary.

By constructing the spindle neural network structure, the features of weak and small targets against complex backgrounds are learned; the deep neural network discriminates image blocks and outputs a target probability, which is higher in target regions and lower in background regions. A target intensity map is built from these probabilities, and the constant false alarm rate (CFAR) method is then applied for target detection and localization. Compared with mainstream algorithms, the detection accuracy of the DL algorithm is about 15% higher on average; on real test images in particular, DL far outperforms comparable methods. The learning-based method has better detection capability against complex backgrounds, showing that, alongside its high detection accuracy, the DL method also generalizes far better than comparable methods.

The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within its protection scope.

Claims (6)

1. A weak and small target detection method based on a feature mapping neural network, characterized by comprising the following steps:
Step 1: construct and train a spindle-type deep neural network;
Step 2: input the acquired weak-target image into the trained spindle-type deep neural network to obtain an amplitude map with the target enhanced and the background suppressed;
Step 3: apply the constant false alarm rate method to the amplitude map to complete weak and small target detection.
2. The weak and small target detection method based on a feature mapping neural network according to claim 1, characterized in that step 1 comprises the following steps:
Step 1.1: construct the structure of the spindle-type deep neural network, comprising an input layer, a decoding layer, an encoding layer, and a softmax output layer;
Step 1.2: determine the hyperparameters of the spindle-type deep neural network by cross-validation to obtain the spindle-type deep neural network;
Step 1.3: construct the training data set;
Step 1.4: feed the training data set into the spindle-type deep neural network and train it in an unsupervised manner to obtain the initialization network weights, completing training.
3. The weak and small target detection method based on a feature mapping neural network according to claim 2, characterized in that the decoding layer and the encoding layer are trained with the following computation:
h_k = σ(W_k·X + b_k)
where W_k denotes the weight matrix, b_k the bias vector, σ the activation function, X = {x_1, x_2, ..., x_m} the input of the current layer, and h_k the output of the current layer.
4. The weak and small target detection method based on a feature mapping neural network according to claim 1, characterized in that step 2 comprises the following steps:
Step 2.1: after the acquired weak-target image is input into the trained spindle-type deep neural network, set the label of weak-target samples to 1 and the label of background samples to 0;
Step 2.2: discriminate the weak-target image with a sliding window to obtain the probability of a weak target at each position in the image;
Step 2.3: use the output-layer logistic regression results, i.e. the probability values, as the response amplitudes of the window coordinate points to obtain the amplitude map of target enhancement and background suppression for the weak targets.
5. The weak and small target detection method based on a feature mapping neural network according to claim 1, characterized in that step 3 comprises the following steps:
Step 3.1: for each value in the amplitude map, extract a sub-block with a sliding window for detection, and input it into the spindle-type deep neural network to obtain its amplitude;
Step 3.2: apply the constant false alarm rate criterion to the statistics of each detected amplitude to obtain the false alarm probability;
where T denotes the threshold of the likelihood-ratio test, p denotes the number of points in the sliding-window sub-block whose mean is used in the statistic, P_fa denotes the false alarm probability set for CFAR detection, τ_CFAR denotes the detection threshold, and F_{1,p-1}(τ_CFAR) denotes the cumulative distribution function of the central F random variable;
Step 3.3: compute the total number of candidate targets from the false alarm probability, sort the false alarm probabilities from high to low, and detect and locate the targets among the amplitudes.
6. The weak and small target detection method based on a feature mapping neural network according to claim 2, characterized in that constructing the training data set in step 1.3 comprises the following steps:
Step 1.3.1: randomly generate coordinate points in images containing no weak targets as simulated target locations, and extract N*N window regions as background samples;
Step 1.3.2: in a background sample, add a simulated target produced by a two-dimensional Gaussian intensity model to form a target sample; the two-dimensional Gaussian model is as follows:
where (x_0, y_0) denotes the center position of the target image, s(i,j) the pixel value of the target image at position (i,j), s_E the intensity of the generated target, a random number in (0,1], and σ_x and σ_y the horizontal and vertical spread parameters, with values in [0,2];
Step 1.3.3: adjust the parameters of the target samples to generate weak targets with different signal-to-noise ratios, completing construction of the training data set.
CN201810729648.0A 2018-07-05 2018-07-05 A weak and small target detection method based on feature mapping neural network Active CN109002848B (en)

Priority Applications (1)

Application Number: CN201810729648.0A; Priority Date: 2018-07-05; Filing Date: 2018-07-05; Granted as: CN109002848B (en); Title: A weak and small target detection method based on feature mapping neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810729648.0A CN109002848B (en) 2018-07-05 2018-07-05 A weak and small target detection method based on feature mapping neural network

Publications (2)

Publication Number Publication Date
CN109002848A true CN109002848A (en) 2018-12-14
CN109002848B CN109002848B (en) 2021-11-05

Family

ID=64598687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810729648.0A Active CN109002848B (en) 2018-07-05 2018-07-05 A weak and small target detection method based on feature mapping neural network

Country Status (1)

Country Link
CN (1) CN109002848B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109767434A (en) * 2019-01-07 2019-05-17 西安电子科技大学 Time-domain weak and small target detection method based on neural network
CN109902715A (en) * 2019-01-18 2019-06-18 南京理工大学 A Context Aggregation Network Based Infrared Small and Small Target Detection Method
CN110210518A (en) * 2019-05-08 2019-09-06 北京互金新融科技有限公司 The method and apparatus for extracting dimensionality reduction feature
CN112288778A (en) * 2020-10-29 2021-01-29 电子科技大学 Infrared small target detection method based on multi-frame regression depth network
CN113327253A (en) * 2021-05-24 2021-08-31 北京市遥感信息研究所 Weak and small target detection method based on satellite-borne infrared remote sensing image
CN109002848B (en) * 2018-07-05 2021-11-05 西华大学 A weak and small target detection method based on feature mapping neural network
CN115222775A (en) * 2022-09-15 2022-10-21 中国科学院长春光学精密机械与物理研究所 Weak and small target detection and tracking device and detection and tracking method thereof


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109002848B (en) * 2018-07-05 2021-11-05 西华大学 A weak and small target detection method based on feature mapping neural network

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005103929A1 (en) * 2004-04-20 2005-11-03 Pluck Corporation Method, system, and computer program product for sharing information within a global computer network
CN103812577A (en) * 2012-11-06 2014-05-21 西南交通大学 Method for automatically identifying and learning abnormal radio signal type
CN105844627A (en) * 2016-03-21 2016-08-10 华中科技大学 Sea surface object image background inhibition method based on convolution nerve network
CN106056097A (en) * 2016-08-17 2016-10-26 西华大学 Millimeter wave weak small target detection method
CN107067413A (en) * 2016-12-27 2017-08-18 南京理工大学 A kind of moving target detecting method of time-space domain statistical match local feature
CN108122003A (en) * 2017-12-19 2018-06-05 西北工业大学 A kind of Weak target recognition methods based on deep neural network
CN112507840A (en) * 2020-12-02 2021-03-16 中国船舶重工集团公司第七一六研究所 Man-machine hybrid enhanced small target detection and tracking method and system
CN113065558A (en) * 2021-04-21 2021-07-02 浙江工业大学 Lightweight small target detection method combined with attention mechanism

Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
GAO, ZHISHENG et al.: "Dim and small target detection based on feature mapping neural networks", 《JOURNAL OF VISUAL COMMUNICATION & IMAGE REPRESENTATION》 *
LONG GENG et al.: "Dim small target detection in single frame complex background", 《2016 2ND IEEE INTERNATIONAL CONFERENCE ON COMPUTER AND COMMUNICATIONS (ICCC)》 *
MIAO ZHANG et al.: "Space debris detection methods utilizing hyperspectral sequence analysis based on Hilbert-Huang transformation", 《2012 IEEE INTERNATIONAL INSTRUMENTATION AND MEASUREMENT TECHNOLOGY CONFERENCE PROCEEDINGS》 *
YU ZHOUJI: "Infrared dim and small target detection algorithm based on convolutional neural networks", 《OPTICS & OPTOELECTRONIC TECHNOLOGY》 *
LIU JUNMING et al.: "Infrared small target detection fusing fully convolutional neural networks and visual saliency", 《ACTA PHOTONICA SINICA》 *
LOU KANG et al.: "Infrared dim and small target detection method based on convolutional neural networks and Gaussian mixture modeling", 《PROCEEDINGS OF THE 38TH CHINESE CONTROL CONFERENCE (7)》 *
流浪的MOOK: "A survey of infrared dim and small target detection algorithms", 《HTTPS://BLOG.CSDN.NET/M0_51625082/ARTICLE/DETAILS/118963429》 *
CAI ZHIFU: "Complex infrared background suppression technology based on adaptive background estimation", 《CHINA MASTER'S THESES FULL-TEXT DATABASE (INFORMATION SCIENCE AND TECHNOLOGY SERIES)》 *
GAO ZHISHENG et al.: "Feature description operator with Gaussian noise invariance", 《COMPUTER ENGINEERING AND APPLICATIONS》 *
GAO ZHISHENG et al.: "Millimeter-wave dim and small target detection using target and background modeling", 《OPTICS AND PRECISION ENGINEERING》 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109002848B (en) * 2018-07-05 2021-11-05 西华大学 A weak and small target detection method based on feature mapping neural network
CN109767434A (en) * 2019-01-07 2019-05-17 西安电子科技大学 Time-domain weak and small target detection method based on neural network
CN109767434B (en) * 2019-01-07 2023-04-07 西安电子科技大学 Time domain weak and small target detection method based on neural network
CN109902715A (en) * 2019-01-18 2019-06-18 南京理工大学 A Context Aggregation Network Based Infrared Small and Small Target Detection Method
CN110210518A (en) * 2019-05-08 2019-09-06 北京互金新融科技有限公司 The method and apparatus for extracting dimensionality reduction feature
CN110210518B (en) * 2019-05-08 2021-05-28 北京互金新融科技有限公司 Method and device for extracting dimension reduction features
CN112288778A (en) * 2020-10-29 2021-01-29 电子科技大学 Infrared small target detection method based on multi-frame regression depth network
CN113327253A (en) * 2021-05-24 2021-08-31 北京市遥感信息研究所 Weak and small target detection method based on satellite-borne infrared remote sensing image
CN113327253B (en) * 2021-05-24 2024-05-24 北京市遥感信息研究所 Weak and small target detection method based on satellite-borne infrared remote sensing image
CN115222775A (en) * 2022-09-15 2022-10-21 中国科学院长春光学精密机械与物理研究所 Weak and small target detection and tracking device and detection and tracking method thereof

Also Published As

Publication number Publication date
CN109002848B (en) 2021-11-05

Similar Documents

Publication Publication Date Title
CN109002848A (en) A kind of detection method of small target based on Feature Mapping neural network
Zeng et al. Underwater target detection based on Faster R-CNN and adversarial occlusion network
He et al. Application of deep convolutional neural network on feature extraction and detection of wood defects
Qi et al. FTC-Net: Fusion of transformer and CNN features for infrared small target detection
Gao et al. Dim and small target detection based on feature mapping neural networks
Tang et al. Compressed-domain ship detection on spaceborne optical image using deep neural network and extreme learning machine
CN102945378B (en) Method for detecting potential target regions of remote sensing image on basis of monitoring method
CN110147812A (en) Recognition Method of Radar Emitters and device based on expansion residual error network
CN104732215A (en) Remote-sensing image coastline extracting method based on information vector machine
CN103927511B (en) image identification method based on difference feature description
CN106056097B (en) Millimeter wave weak and small target detection method
CN109740639A (en) A method, system and electronic device for detecting cloud in remote sensing image of Fengyun satellite
CN106846322B (en) The SAR image segmentation method learnt based on curve wave filter and convolutional coding structure
CN108681737A (en) A kind of complex illumination hypograph feature extracting method
CN105117736B (en) Classification of Polarimetric SAR Image method based on sparse depth heap stack network
CN108921215A (en) A kind of Smoke Detection based on local extremum Symbiotic Model and energy spectrometer
CN109766823A (en) A high-resolution remote sensing ship detection method based on deep convolutional neural network
CN101980298A (en) Image Segmentation Method Based on Multi-agent Genetic Clustering Algorithm
CN106611421A (en) SAR image segmentation method based on feature learning and sketch line constraint
CN107341813A (en) SAR image segmentation method based on structure learning and sketch characteristic inference network
CN117455868A (en) SAR image change detection method based on significant fusion difference map and deep learning
Li et al. Detection and monitoring of oil spills using moderate/high-resolution remote sensing images
CN110766696A (en) Satellite image segmentation method based on improved rough set clustering algorithm
Zhao et al. Infrared small target detection based on adjustable sensitivity strategy and multi-scale fusion
CN105205807A (en) Remote sensing image change detection method based on sparse automatic code machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230413

Address after: 1002, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Henglang Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518110

Patentee after: Shenzhen Wanzhida Technology Co.,Ltd.

Address before: 610039, No. 999, Jin Zhou road, Jinniu District, Sichuan, Chengdu

Patentee before: XIHUA University

Effective date of registration: 20230413

Address after: Room 3303-3306, Building 2, Binjiang Business Center, No. 1888 Ganjiang South Avenue, Honggutan District, Nanchang City, Jiangxi Province, 330000

Patentee after: Jiangxi Chengan Technology Co.,Ltd.

Address before: 1002, Building A, Zhiyun Industrial Park, No. 13 Huaxing Road, Henglang Community, Dalang Street, Longhua District, Shenzhen City, Guangdong Province, 518110

Patentee before: Shenzhen Wanzhida Technology Co.,Ltd.