CN114296067A - Recognition method of low, slow and small target based on LSTM model of pulse Doppler radar - Google Patents

Recognition method of low, slow and small target based on LSTM model of pulse Doppler radar

Info

Publication number
CN114296067A
Authority
CN
China
Prior art keywords
coefficient
value
track
training
loss function
Prior art date
Legal status
Pending
Application number
CN202210002980.3A
Other languages
Chinese (zh)
Inventor
鲁瑞莲
金敏
费德介
汪宗福
郑婷
Current Assignee
Chengdu Huirong Guoke Microsystem Technology Co ltd
Original Assignee
Chengdu Huirong Guoke Microsystem Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Huirong Guoke Microsystem Technology Co ltd filed Critical Chengdu Huirong Guoke Microsystem Technology Co ltd
Priority to CN202210002980.3A
Publication of CN114296067A

Landscapes

  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention provides a method for identifying low, slow and small targets of a pulse Doppler radar based on an LSTM model, comprising the following steps: receiving a set of collected low-slow-small target tracks of the pulse Doppler radar, and splitting and normalizing the track set information; performing LSTM forward propagation on the data set based on the initialization parameters and the current number of single-training targets, and calculating the loss function value corresponding to the current coefficients; if the loss function value is greater than a preset threshold, updating the input gate coefficients, output gate coefficients and forget gate state coefficients; if the loss function value is smaller than the threshold, training the next batch of single-training targets based on the current neural network weight coefficients; after all batches of the current period have been trained, comparing the loss function value of the current period with the preset threshold for stopping iteration; and verifying and outputting the recognition accuracy on the verification set data based on the final-state neural network parameters.

Description

Recognition method for low, slow and small targets of pulse Doppler radar based on an LSTM model

Technical Field

The invention belongs to the technical field of pulse Doppler radar target recognition, and in particular relates to a method for recognizing low, slow and small targets of a pulse Doppler radar based on an LSTM model.

Background

Radar target recognition technology refers to the use of radar to detect a target and analyze the acquired echo information in order to determine the target's attributes and type; that is, the target type is identified from the features in the echo. Its essence is the electromagnetic inverse scattering problem of inverting target characteristics from known incident and scattered waves. The targets that radars need to identify cover all targets on the ground, at sea, in the air and in space, and even include terrain, weather, interference and radiation sources. The degree of target recognition also has multi-level definitions: beyond classification, recognition, discrimination and identification, it can be extended to friend-or-foe identification, threat assessment and so on. Using radar to classify and identify targets therefore has great military and civilian value.

Artificial intelligence is the science of studying and developing theories, methods, technologies and application systems for simulating, extending and expanding human intelligence. Radar target recognition is an important application of artificial intelligence in the equipment field. With the development of artificial intelligence, radar recognition has advanced continuously, and there are many research results in radar recognition ranging from pattern recognition and machine learning to the neural networks and transfer learning that have developed rapidly in recent years. Although radar target recognition has a wide range of applications and has been applied successfully at some levels, it has not yet formed a complete theoretical system, and some radar target recognition systems still have functional limitations. The main reasons are the diversity of target types and radar regimes and the extreme complexity of the operating environment.

Traditional radar recognition technology usually adopts statistical pattern recognition theory. Pattern recognition is a discipline that mainly uses tools such as statistics, probability theory, computational geometry, machine learning, signal processing and algorithm design to reason from perceptible data; its central task is to find the essential attributes of a class of things. For radar target recognition, stable and distinctive features of the target are first extracted from the motion, echo and other information of the target tracked by the radar; these are called "recognition feature templates", and the patterns to be recognized are then assigned to their respective pattern classes. Recognition or classification of a given pattern faces two types of tasks: supervised classification, which assigns patterns to existing classes, and unsupervised classification, which assigns patterns to unknown classes.

Feature extraction is an important part of traditional radar recognition. Radar recognition features depend strongly on human prior knowledge and professional skill, and designing a radar target recognition algorithm requires a deep research background in target characteristics and feature extraction. Traditional radar target recognition usually receives fixed information from the radar sensor, performs digital signal processing to extract the features of the target to be identified, classifies the extracted features against existing feature templates, and identifies the target according to the degree of membership. The main problem of traditional target recognition is that it works according to a preset recognition mode and cannot automatically change that mode as the target and environment change; when the environment changes, passive feature extraction and classification alone can hardly achieve the desired effect, and the adaptability to the target and environment is insufficient.

Facing the challenges of an increasingly complex battlefield environment, dense clutter and multi-target backgrounds, and in order to meet current and especially future operational needs, recognition technology must be further innovated and developed to continuously improve recognition modes and recognition performance, so as to adapt to an increasingly complex combat environment.

Summary of the Invention

To solve the above technical problems, the present invention proposes a method for identifying low, slow and small targets of a pulse Doppler radar based on an LSTM model, the method comprising the following steps:

Step 1: receive a set of collected low-slow-small target tracks of the pulse Doppler radar, and split and normalize the track set information; divide the normalized tracks into a training set and a verification set according to a predetermined proportion;

Step 2: initialize the number of input nodes, the number of neural network layers, the number of training periods, the number of single-training targets, the single-iteration weight adjustment ratio, the stop-iteration loss function threshold and the output target types; initialize the input gate, output gate and forget gate state coefficients, cell coefficients and bias values at the starting moment; initialize the hidden-layer cell state value and cell hidden-layer value at the starting moment;

Step 3: perform LSTM forward propagation based on the initialization parameters of step 2 and the data set of the current number of single-training targets, and calculate the loss function value corresponding to the current coefficients;

Step 4: compare the loss function value of step 3 with the preset threshold for stopping iteration;

if the loss function value is greater than the preset threshold, update the input gate coefficients, output gate coefficients and forget gate state coefficients; if the loss function value is smaller than the threshold, train the next batch of single-training targets based on the current neural network weight coefficients;

Step 5: after the training of all batches of single-training targets in the current period has been completed through steps 3 and 4, compare the loss function value of the current period with the preset threshold for stopping iteration;

if the loss function value of the current period is greater than the preset threshold, perform the training of the next period;

if the loss function value of the current period is smaller than the preset threshold, stop training and output the parameters at the current moment as the final neural network parameters;

Step 6: verify the recognition accuracy on the verification set data based on the final-state neural network parameters of step 5 and output the result.
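For illustration only, the following Python sketch outlines this six-step pipeline, using PyTorch's built-in LSTM and a cross-entropy loss as stand-ins for the manually derived gate equations, focal loss and coefficient updates described below. The class and function names, the assumption that all tracks are padded to equal length and stacked in one tensor, and the hyperparameter values taken from the example that follows are choices of this sketch, not part of the original disclosure.

```python
import torch
import torch.nn as nn

class TrackClassifier(nn.Module):
    """LSTM followed by a linear layer scoring the two classes (1 = UAV, 0 = non-UAV)."""
    def __init__(self, n_features=4, hidden_size=256):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)

    def forward(self, x):                  # x: (batch, plots_per_track, 4 features)
        _, (h, _) = self.lstm(x)           # final hidden state, shape (1, batch, hidden)
        return self.head(h[-1])            # class logits, shape (batch, 2)

def train(tracks, labels, epochs=1000, minibatch=500, loss_threshold=1e-6):
    """tracks: float tensor (N, L, 4); labels: long tensor of 0/1 class indices."""
    # Step 1: split the normalized tracks 70% / 30% into training and verification sets.
    split = int(0.7 * len(tracks))
    train_x, val_x = tracks[:split], tracks[split:]
    train_y, val_y = labels[:split], labels[split:]

    model = TrackClassifier()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # 1% single-iteration adjustment
    criterion = nn.CrossEntropyLoss()                          # focal loss could be substituted

    for epoch in range(epochs):                                # step 5: training periods
        epoch_loss = 0.0
        for start in range(0, len(train_x), minibatch):        # steps 3-4: per-batch training
            xb = train_x[start:start + minibatch]
            yb = train_y[start:start + minibatch]
            loss = criterion(model(xb), yb)
            epoch_loss += loss.item()
            if loss.item() > loss_threshold:                   # update coefficients only if needed
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        if epoch_loss / max(1, len(train_x) // minibatch) < loss_threshold:
            break                                              # step 5: stop iterating

    # Step 6: verification accuracy with the final parameters.
    with torch.no_grad():
        acc = (model(val_x).argmax(dim=1) == val_y).float().mean().item()
    return model, acc
```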

Further, step 1 includes the following sub-steps:

Step 1.1: the collected set of low-slow-small target tracks of the pulse Doppler radar is expressed, for n = 1, ..., N and l_n = 1, ..., L_n, in terms of the range, azimuth, elevation (pitch) angle and radar cross section (RCS) of the l_n-th plot of the n-th track in the track set (the set notation is given only as an equation image in the original publication), where N denotes the number of tracks and L_n denotes the number of plots in the n-th track;
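As a concrete illustration of this data layout, each track can be stored as an array with one row per plot and four columns (range, azimuth, elevation, RCS); the numeric values and units below are hypothetical and only show the shape of the data.

```python
import numpy as np

# Hypothetical example: one track with L_n = 3 plots,
# columns = [range_m, azimuth_deg, elevation_deg, rcs_m2]
track_n = np.array([
    [1520.0, 35.2, 4.1, 0.012],
    [1498.5, 35.6, 4.3, 0.015],
    [1477.0, 36.1, 4.4, 0.011],
])

# The full data set is then a list of N such arrays (tracks may have different lengths L_n).
track_set = [track_n]
print(track_set[0].shape)   # (3, 4): L_n plots, 4 features per plot
```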

Step 1.2: for the target track set, add track labels according to the prior information about the type of each collected track;

Step 1.3: normalize the track information of the target track set for n = 1, ..., N and l_n = 1, ..., L_n according to a normalization formula (reproduced only as an equation image in the original publication), in which Σ(·) denotes the summation operation;

Step 1.4: divide the normalized tracks into a training set T_n and a verification set V_n according to a predetermined proportion.

Further, N = 12338; in the track labels, UAV tracks are marked as 1 and non-UAV tracks as 0; the training set proportion is 70% and the verification set proportion is 30%.
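A minimal preprocessing sketch under these settings (labels 1/0, 70%/30% split) might look as follows; the per-track column-sum normalization is an assumption standing in for the formula of step 1.3, which is reproduced only as an image in the original publication, and the function name is hypothetical.

```python
import numpy as np

def preprocess(track_set, is_uav):
    """track_set: list of (L_n, 4) arrays; is_uav: list of bools (prior type information)."""
    labels = np.array([1 if flag else 0 for flag in is_uav])      # step 1.2: 1 = UAV, 0 = non-UAV

    normalized = []
    for track in track_set:                                       # step 1.3 (assumed form):
        col_sums = track.sum(axis=0)                              # scale each feature column by
        normalized.append(track / np.where(col_sums == 0, 1, col_sums))  # its sum over the track

    # Step 1.4: 70% training set T_n, 30% verification set V_n.
    split = int(0.7 * len(normalized))
    return (normalized[:split], labels[:split]), (normalized[split:], labels[split:])
```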

Further, in step 2, the number of input nodes is 256, the initial number of training periods is 1000, the number of single-training targets is 500, the single-iteration weight ratio is ρ = 1%, and the stop-iteration loss function threshold is Tr = 10^-6.

Further, step 3 includes the following sub-steps:

Step 3.1: according to the normalized training set obtained in step 1 and the single-training target number (minibatch size) of step 2, divide the data set into N_b = N/minibatch batches, where a batch is the basic operation unit of the following steps;
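Taking N = 12338 tracks and minibatch = 500 purely to illustrate the arithmetic of this split (in the method, N here refers to the tracks of the training set), the batching can be sketched as follows; this is plain index arithmetic, not the patented code.

```python
N = 12338          # number of tracks, used only for this arithmetic illustration
minibatch = 500    # single-training target count from step 2

batches = [list(range(start, min(start + minibatch, N)))   # track indices in each batch
           for start in range(0, N, minibatch)]
N_b = len(batches)
print(N_b, len(batches[0]), len(batches[-1]))   # 25 batches: 24 of 500 tracks and a final one of 338
```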

Step 3.2: taking one batch as the operation unit, perform the following operations: compute the outputs of the input gate, forget gate and output gate (denoted I_g, F_g and O_g) from the initialization coefficients of step 2 according to the gate formulas (reproduced only as equation images in the original publication), where σ(x) denotes the sigmoid activation function:

σ(x) = 1 / (1 + e^(-x))

Step 3.3: according to the result of step 3.2, update the cell state x and the hidden-layer value h as follows (the formula for the candidate value G is reproduced only as an equation image in the original publication):

x = x_0 ⊙ F_g + I_g ⊙ G

h = O_g ⊙ tanh(x)

where ⊙ denotes element-wise multiplication and tanh(x) denotes the activation function:

tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))

Step 3.4: according to the result of step 3.3, calculate the classification output value corresponding to the current weight coefficients (the formula is reproduced only as an equation image in the original publication); the two resulting values are the probabilities that the track belongs to class "0" and class "1" respectively, and the classification output value corresponding to the current weight coefficients is the class associated with the larger of the two probabilities;

Step 3.5: calculate the loss function L_s corresponding to the coefficients according to the classification result of step 3.4.

The meaning of each variable is as follows: at the initialization starting moment, the input coefficient W_I, input hidden-layer coefficient W_h and input bias value B; the input gate state coefficient W_Ig, cell coefficient W_Ic and bias value B_I; the output gate state coefficient W_Og, cell coefficient W_Oc and bias value B_O; the forget gate state coefficient W_Fg, cell coefficient W_Fc and bias value B_F; the initial cell state value x_0, cell hidden-layer value h_0, hidden-layer output coefficient W_O and bias value B_O. At the starting moment, all gate coefficients, bias values, cell states and hidden-layer values are initialized to random values in the interval (0, 1).
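The forward computation of steps 3.2-3.4 can be sketched in NumPy as follows. Because the gate and candidate-value formulas are reproduced only as images, the pairing of each coefficient with its operand (state coefficients multiplying the input, cell coefficients multiplying the previous hidden value) follows the standard LSTM cell and is an assumption of this sketch, as is the softmax classification head; W_out and B_out stand for the hidden-layer output coefficient W_O and bias B_O of the text, renamed here only to avoid the clash with the output-gate bias B_O.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forward_step(s, h0, x0, p):
    """One forward step for input feature vector s, previous hidden value h0 and cell state x0.

    p is a dict of coefficients named as in step 3.5 (W_Ig, W_Ic, B_I, W_Fg, ..., W_I, W_h, B);
    the assignment of coefficients to operands is an assumption, not taken from the original images.
    """
    I_g = sigmoid(p["W_Ig"] @ s + p["W_Ic"] @ h0 + p["B_I"])   # input gate (step 3.2)
    F_g = sigmoid(p["W_Fg"] @ s + p["W_Fc"] @ h0 + p["B_F"])   # forget gate
    O_g = sigmoid(p["W_Og"] @ s + p["W_Oc"] @ h0 + p["B_O"])   # output gate

    G = np.tanh(p["W_I"] @ s + p["W_h"] @ h0 + p["B"])         # candidate cell value
    x = x0 * F_g + I_g * G                                     # step 3.3: new cell state
    h = O_g * np.tanh(x)                                       # step 3.3: new hidden value

    logits = p["W_out"] @ h + p["B_out"]                       # step 3.4: scores for classes "0", "1"
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                       # softmax over the two classes
    return x, h, probs                                         # predicted class = probs.argmax()
```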

Further, step 4 includes updating the coefficients, and specifically comprises the following sub-steps:

Step 4.1: update the input coefficient W_I and the output coefficient W_O (the update formulas are given only as equation images in the original publication), where I denotes the all-ones vector;

Step 4.2: update the gate state coefficients (the update formulas are given only as equation images in the original publication); the three formulas yield the updated values of the input gate state coefficient, the forget gate state coefficient and the output gate state coefficient, respectively;

Step 4.3: update the cell coefficients (the update formulas are given only as equation images in the original publication); the three formulas yield the updated values of the input gate cell coefficient, the forget gate cell coefficient and the output gate cell coefficient, respectively;

Step 4.4: update the input hidden-layer coefficient (the update formula is given only as an equation image in the original publication), where ⊙ denotes element-wise multiplication and tanh(x) denotes the activation function.
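Because the update formulas of steps 4.1-4.4 are reproduced only as images, the following sketch shows a generic gradient-descent update that plays the same role: every coefficient is adjusted against the minibatch loss with the single-iteration weight ratio ρ as the step size. The finite-difference gradient and the helper name update_coefficients are purely illustrative and are not the analytic updates of the original disclosure.

```python
import numpy as np

def update_coefficients(p, loss_fn, rho=0.01, eps=1e-5):
    """Illustrative coefficient update (step 4): move every coefficient in p against the loss.

    loss_fn(p) -> scalar loss L_s for the current minibatch; rho is the single-iteration
    weight adjustment ratio (1% in the example). Finite differences stand in for the
    analytic gate/cell/hidden-layer update formulas.
    """
    updated = {k: v.copy() for k, v in p.items()}
    base = loss_fn(p)
    for name, value in p.items():
        grad = np.zeros_like(value)
        it = np.nditer(value, flags=["multi_index"])
        for _ in it:
            idx = it.multi_index
            perturbed = {k: (v.copy() if k == name else v) for k, v in p.items()}
            perturbed[name][idx] += eps
            grad[idx] = (loss_fn(perturbed) - base) / eps      # numerical dL_s / dW
        updated[name] = value - rho * grad                     # step in the descent direction
    return updated

# Tiny usage example with a stand-in quadratic loss over one coefficient matrix:
params = {"W": np.array([[0.3, 0.7], [0.1, 0.4]])}
print(update_coefficients(params, lambda p: float(np.sum(p["W"] ** 2)))["W"])
```

In practice the analytic gradients of the LSTM (backpropagation through time) would replace the finite differences, which are used here only to keep the sketch self-contained.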

Further, the parameter set at the current moment is taken as the final neural network parameters; this parameter set, denoted W_opt, collects the coefficients and bias values listed in step 3.5 (its explicit expression is given only as an equation image in the original publication).

Further, step 6 includes the following sub-steps:

Step 6.1: classify the verification set data based on the final-state neural network parameters W_opt, the input gate output, forget gate output, output gate output, cell state, hidden-layer value and the classification output value corresponding to the current weight coefficients, and output the classification results;

Step 6.2: compare the output classification results with the track labels, and count the recognition rate as the percentage of verification tracks whose classification result matches the label (the formula is given only as an equation image in the original publication).

With the method of the present invention, the track features of low-slow-small radar targets are learned and trained on the basis of a long short-term memory (LSTM) network model, the corresponding network parameters are output, and a classification function is established on the basis of these network parameters, so as to achieve real-time recognition and classification of low-slow-small targets for a radar system. To achieve the above technical purpose, the present invention adopts the technical solution described above.

Brief Description of the Drawings

Fig. 1 is the overall flow chart of the low-slow-small target recognition technique of the present invention;

Fig. 2 is the LSTM parameter-passing iteration diagram;

Fig. 3 is the training process diagram;

Fig. 4 is the real-time recognition effect diagram.

Detailed Description of Embodiments

The invention discloses a method for recognizing low, slow and small targets of a pulse Doppler radar based on an LSTM model, which is suitable for pulse Doppler radars. Based on a long short-term memory (LSTM) network model, the track features of low-slow-small radar targets are learned and trained, the corresponding network parameters are output, and a classification function is established on the basis of these network parameters, so as to achieve real-time recognition and classification of low-slow-small targets for a radar system.

A method for recognizing low, slow and small targets of a pulse Doppler radar based on an LSTM model includes the following steps:

1) Data preprocessing: a set of low-slow-small target tracks of the pulse Doppler radar has been collected; the track set information is split and normalized, and the normalized tracks are divided into a training set and a verification set according to a certain proportion;

2) Initialization of the LSTM model training parameters: initialize the number of input nodes, the number of neural network layers, the number of training periods, the number of single-training targets (minibatch), the single-iteration weight adjustment ratio, the stop-iteration loss function threshold and the output target types; initialize the input gate, output gate and forget gate state coefficients, cell coefficients and bias values at the starting moment; initialize the hidden-layer cell state value and cell hidden-layer value at the starting moment;

3) Perform LSTM forward propagation based on the initialization parameters of 2) and the current minibatch data set, and calculate the loss function value corresponding to the current coefficients;

4) Compare the loss function value of 3) with the stop-iteration loss function threshold; if the loss function value is greater than the threshold, update the coefficients of each gate; if the loss function value is smaller than the threshold, perform the next minibatch training based on the current neural network weight coefficients;

5) After all minibatch training of the current period has been completed on the basis of 3) and 4), compare the loss function value of the current period with the stop-iteration loss function threshold; if the loss function value is greater than the threshold, perform the training of the next period; if the loss function value is smaller than the threshold, stop training and output the parameters at the current moment as the final neural network parameters;

6) Verify the recognition accuracy on the verification set data based on the final-state neural network parameters of 5) and output the result.

Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.

In view of the problems existing in the above prior art, the purpose of the present invention is to propose a method for recognizing low, slow and small targets of a pulse Doppler radar based on an LSTM model. This method learns and trains the track features of low-slow-small radar targets on the basis of a long short-term memory (LSTM) network model, outputs the corresponding network parameters, and establishes a classification function on the basis of these network parameters, so as to achieve real-time recognition and classification of low-slow-small targets for a radar system. To achieve the above technical purpose, the present invention adopts the following technical solution.

A method for recognizing low, slow and small targets of a pulse Doppler radar based on an LSTM model includes the following steps:

Step 1, data preprocessing: a set of low-slow-small target tracks of the pulse Doppler radar has been collected; the track set information is split and normalized, and the normalized tracks are divided into a training set and a verification set according to a certain proportion;

Step 2, initialization of the LSTM model training parameters: initialize the number of input nodes, the number of neural network layers, the number of training periods, the number of single-training targets (minibatch), the single-iteration weight adjustment ratio, the stop-iteration loss function threshold and the output target types; initialize the input gate, output gate and forget gate state coefficients, cell coefficients and bias values at the starting moment; initialize the hidden-layer cell state value and cell hidden-layer value at the starting moment;

Step 3, perform LSTM forward propagation based on the initialization parameters of step 2 and the current minibatch data set, and calculate the loss function value corresponding to the current coefficients;

Step 4, compare the loss function value of step 3 with the stop-iteration loss function threshold; if the loss function value is greater than the threshold, update the coefficients of each gate; if the loss function value is smaller than the threshold, perform the next minibatch training based on the current neural network weight coefficients;

Step 5, after all minibatch training of the current period has been completed through steps 3 and 4, compare the loss function value of the current period with the stop-iteration loss function threshold; if the loss function value is greater than the threshold, perform the training of the next period; if the loss function value is smaller than the threshold, stop training and output the parameters at the current moment as the final neural network parameters;

Step 6, verify the recognition accuracy on the verification set data based on the final-state neural network parameters of step 5 and output the result.

Referring to Fig. 1, which is the overall flow chart of the LSTM-model-based method for recognizing low, slow and small targets of a pulse Doppler radar according to the present invention, the method includes the following steps:

Step 1, data preprocessing: a set of low-slow-small target tracks of the pulse Doppler radar has been collected; the track set information is labeled and normalized, and the normalized tracks are divided into a training set and a verification set according to a certain proportion;

1a) The collected set of low-slow-small target tracks of the pulse Doppler radar is expressed, for n = 1, ..., N and l_n = 1, ..., L_n, in terms of the range, azimuth, elevation (pitch) angle and radar cross section (RCS) of the l_n-th plot of the n-th track in the track set (the set notation is given only as an equation image in the original publication), where N denotes the number of tracks and L_n denotes the number of plots in the n-th track;

In this example, N = 12338 is selected, but the method is not limited to this value.

1b) Based on the track set of step 1a), track labels are added according to the prior information about the type of each collected track;

In this example, the track labels are (but are not limited to) UAV tracks (marked as 1) and non-UAV tracks (marked as 0);

1c) According to step 1b), the track information of the track set is normalized (the normalization formula is given only as an equation image in the original publication), where Σ(·) denotes the summation operation;

1d) According to step 1c), the normalized tracks are divided into a training set T_n and a verification set V_n according to a certain proportion;

In this example, the training set proportion is selected as (but not limited to) 70%, and the verification set proportion is 30%.

Step 2, initialization of the LSTM model training parameters: initialize the number of neural network layers, the number of training periods, the number of single-training targets (minibatch), the single-iteration weight adjustment ratio ρ, the stop-iteration loss function threshold Tr and the output target types; initialize the input coefficient, hidden-layer coefficient and bias value, the input gate, output gate and forget gate state coefficients, cell coefficients and bias values at the starting moment; initialize the cell state value and cell hidden-layer value at the starting moment.

In this example, the number of input nodes is selected as (but not limited to) 256, the initial number of training periods is 1000, the single-training target number (minibatch) is 500, the single-iteration weight ratio is ρ = 1%, the stop-iteration loss function threshold is Tr = 10^-6, and there are 2 output target types, where 0 denotes non-UAV and 1 denotes UAV. At the initialization starting moment, the following are initialized: the input coefficient W_I, input hidden-layer coefficient W_h and input bias value B; the input gate state coefficient W_Ig, cell coefficient W_Ic and bias value B_I; the output gate state coefficient W_Og, cell coefficient W_Oc and bias value B_O; the forget gate state coefficient W_Fg, cell coefficient W_Fc and bias value B_F; the initial cell state value x_0, cell hidden-layer value h_0, hidden-layer output coefficient W_O and bias value B_O. At the starting moment, all gate coefficients, bias values, cell states and hidden-layer values are initialized to random values in the interval (0, 1).

Step 3, perform LSTM forward propagation based on the initialization parameters of step 2 and the current minibatch data set, and calculate the loss function value corresponding to the current coefficients;

3a) According to the normalized training set obtained in step 1 and the minibatch size of step 2, divide the data set into N_b = N/minibatch batches, where a batch is the basic operation unit of the following steps;

3b) Taking one batch of step 3a) as the operation unit, perform the following operations: compute the outputs of the input gate, forget gate and output gate (denoted I_g, F_g and O_g) from the initialization coefficients of step 2 according to the gate formulas (reproduced only as an equation image in the original publication), where σ(x) denotes the sigmoid activation function:

σ(x) = 1 / (1 + e^(-x))

3c) According to the result of step 3b), update the cell state x and the hidden-layer value h (the formula for the candidate value G is reproduced only as an equation image in the original publication):

x = x_0 ⊙ F_g + I_g ⊙ G

h = O_g ⊙ tanh(x)

where ⊙ denotes element-wise multiplication and tanh(x) denotes the activation function:

tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))

3d) According to the result of step 3c), calculate the classification output value corresponding to the current weight coefficients (the formula is reproduced only as an equation image in the original publication); the two resulting values are the probabilities that the track belongs to class "0" and class "1" respectively, and the classification output value corresponding to the current weight coefficients is the class associated with the larger of the two probabilities;

3e) Calculate the loss function L_s according to the classification result of 3d);

The loss function may be the cross entropy, the focal loss, or another suitable loss. In this example, because the sample proportions are severely unbalanced, the focal loss is selected to calculate the loss function (its expression is given only as an equation image in the original publication), where log(·) denotes the logarithm operation, α is the balance factor and γ is the factor that controls how strongly the contribution of easy samples is reduced; in this example α = 0.25 and γ = 2.
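A NumPy sketch of the focal loss with α = 0.25 and γ = 2 is given below, using the standard binary form with a class-dependent balance factor; since the exact expression in the original is reproduced only as an image, this form and the function name are assumptions of the sketch.

```python
import numpy as np

def focal_loss(p1, y, alpha=0.25, gamma=2.0, eps=1e-12):
    """Binary focal loss.

    p1: predicted probability of class "1" (UAV) for each track, shape (batch,)
    y : true labels, 0 or 1, shape (batch,)
    """
    p_t = np.where(y == 1, p1, 1.0 - p1)               # probability assigned to the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)     # balance factor for the rare class
    loss = -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + eps)
    return loss.mean()

# Example: confident correct predictions contribute almost nothing to the loss.
print(focal_loss(np.array([0.95, 0.10, 0.60]), np.array([1, 0, 1])))
```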

Step 4, compare the loss function value of step 3 with the stop-iteration loss function threshold; if the loss function value is greater than the threshold, update the coefficients of each gate; if the loss function value is smaller than the threshold, perform the next minibatch training based on the current neural network coefficients;

4a) Update the coefficients as follows (the update formulas are given only as equation images in the original publication).

Input coefficient W_I and output coefficient W_O update, where I denotes the all-ones vector.

Gate state coefficient update: the three formulas yield the updated values of the input gate state coefficient, the forget gate state coefficient and the output gate state coefficient, respectively.

Cell coefficient update: the three formulas yield the updated values of the input gate cell coefficient, the forget gate cell coefficient and the output gate cell coefficient, respectively.

Input hidden-layer coefficient update.

Step 5, after all minibatch training of the current period has been completed through steps 3 and 4, compare the loss function value of the current period with the stop-iteration loss function threshold; if the loss function value is greater than the threshold, perform the training of the next period; if the loss function value is smaller than the threshold, stop training and output the parameter set at the current moment as the final neural network parameters;

The parameter set, denoted W_opt, collects the coefficients and bias values initialized in step 2 (its explicit expression is given only as an equation image in the original publication).

Step 6, verify the recognition accuracy on the verification set data based on the final-state neural network parameters of step 5 and output the result.

6a) Classify the verification set data based on the final-state neural network parameters W_opt of step 5 together with the forward-propagation formulas of steps 3b), 3c) and 3d), and output the classification results;

6b) Compare the output classification results with the track labels of step 1b), and count the recognition rate as the percentage of verification tracks whose classification result matches the label (the formula is given only as an equation image in the original publication).
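A short sketch of step 6b): compare the predicted classes with the track labels and report the percentage that match (the recognition-rate formula itself appears only as an image in the original publication); the example values and the function name are hypothetical.

```python
import numpy as np

def recognition_rate(predicted, labels):
    """predicted, labels: integer arrays of class indices (0 = non-UAV, 1 = UAV)."""
    predicted = np.asarray(predicted)
    labels = np.asarray(labels)
    return 100.0 * np.mean(predicted == labels)   # percentage of correctly recognized tracks

# Hypothetical example with five verification tracks:
print(recognition_rate([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))   # 80.0
```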

The effect of the present invention is further illustrated by the following simulation comparison test.

1. Experimental scenario:

In a ground-clutter environment, radar equipment was used to collect UAV and clutter track data, a total of N = 12338 tracks. With the method of the present invention, network training was performed on 0.7 × N training set tracks to obtain the optimal coefficients W_opt, and the track recognition accuracy on the remaining verification set tracks was then verified on the basis of these coefficients.

2. Analysis of experimental results:

Table 1 shows the recognition accuracy for different numbers of neural network layers and different minibatch sizes. For the parameters of this example, the recognition accuracy is highest, reaching 93.25%, when the number of neural network layers is 100 and the minibatch size is 400.

Table 1

(Table 1 is reproduced only as an image in the original publication; it lists the recognition accuracy for the tested combinations of neural network layer counts and minibatch sizes.)

Finally, it should be noted that the above embodiments are only used to illustrate, and not to limit, the technical solutions of the embodiments of the present invention. Although the embodiments of the present invention have been described in detail with reference to the above preferred embodiments, those of ordinary skill in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the embodiments of the present invention without departing from the spirit and scope of those technical solutions.

Claims (8)

1. A method for identifying low, slow and small targets of a pulse Doppler radar based on an LSTM model, characterized by comprising the following steps:
step 1, receiving a set of collected low-slow-small target tracks of the pulse Doppler radar, and splitting and normalizing the track set information; dividing the normalized tracks into a training set and a verification set according to a preset proportion;
step 2, initializing the number of input nodes, the number of neural network layers, the number of training periods, the number of single-training targets, the single-iteration weight adjustment ratio, the stop-iteration loss function threshold and the output target types; initializing the input gate, output gate and forget gate state coefficients, cell coefficients and bias values at the starting moment; initializing the hidden-layer cell state value and cell hidden-layer value at the starting moment;
step 3, performing LSTM forward propagation based on the initialization parameters of step 2 and the data set of the current number of single-training targets, and calculating the loss function value corresponding to the current coefficients;
step 4, comparing the loss function value of step 3 with a preset threshold for stopping the iteration of the loss function,
if the loss function value is greater than the preset threshold, updating the input gate coefficients, output gate coefficients and forget gate state coefficients; if the loss function value is smaller than the threshold, performing the next single-training-target-number training based on the current neural network weight coefficients;
step 5, after the training of all single-training-target numbers in the current period has been completed through steps 3 and 4, comparing the loss function value of the current period with the preset threshold for stopping the iteration of the loss function;
if the loss function value of the current period is greater than the preset threshold, performing the training of the next period;
if the loss function value of the current period is smaller than the preset threshold, stopping training and outputting all parameters at the current moment as the final neural network parameters;
and step 6, verifying the recognition accuracy on the verification set data based on the final-state neural network parameters of step 5 and outputting the result.
2. The identification method according to claim 1, characterized in that step 1 comprises the following sub-steps:
step 1.1, expressing the collected set of low-slow-small target tracks of the pulse Doppler radar, for n = 1, ..., N and l_n = 1, ..., L_n, in terms of the range, azimuth, elevation angle and radar cross section RCS of the l_n-th plot of the n-th track in the track set (the set notation is given only as an equation image in the original publication), wherein N represents the number of tracks and L_n represents the number of plots in the n-th track;
step 1.2, adding track labels to the target track set according to the prior information about the type of each collected track;
step 1.3, normalizing the track information of the target track set for n = 1, ..., N and l_n = 1, ..., L_n according to a normalization formula (given only as an equation image in the original publication), where Σ(·) denotes a summation operation;
step 1.4, dividing the normalized tracks into a training set T_n and a verification set V_n according to a preset proportion.
3. The identification method according to claim 1, wherein N = 12338; in the track labels, UAV tracks are marked as 1 and non-UAV tracks are marked as 0; the training set proportion is 70%, and the verification set proportion is 30%.
4. The identification method according to claim 1, wherein in step 2 the number of input nodes is 256, the initial number of training periods is 1000, the number of single-training targets is 500, the single-iteration weight ratio is ρ = 1%, and the stop-iteration loss function threshold is Tr = 10^-6.
5. The identification method according to claim 1, characterized in that step 3 comprises the following sub-steps:
step 3.1, dividing the data set into N_b = N/minibatch batches according to the normalized training set obtained in step 1 and the single-training target number (minibatch size) of step 2, wherein each batch is used as the basic operation unit in the following steps;
step 3.2, taking one batch as the operation unit, performing the following operations: calculating the outputs of the input gate, the forget gate and the output gate (denoted I_g, F_g and O_g) from the initialization coefficients of step 2 according to the gate formulas (given only as equation images in the original publication), where σ(x) denotes the sigmoid activation function:
σ(x) = 1 / (1 + e^(-x));
step 3.3, updating the cell state x and the hidden-layer value h according to the result of step 3.2 (the formula for the candidate value G is given only as an equation image in the original publication):
x = x_0 ⊙ F_g + I_g ⊙ G,
h = O_g ⊙ tanh(x),
where ⊙ represents element-wise multiplication and tanh(x) represents the activation function:
tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x));
step 3.4, calculating the classification output value corresponding to the current weight coefficients according to the result of step 3.3 (the formula is given only as an equation image in the original publication), wherein the two resulting values are the probabilities that the track belongs to class "0" and class "1" respectively, and the classification output value corresponding to the current weight coefficients is the class associated with the larger of the two probabilities;
step 3.5, calculating the loss function L_s corresponding to the coefficients according to the classification result of step 3.4;
wherein the meaning of each variable is as follows: at the initialization starting moment, the input coefficient W_I, input hidden-layer coefficient W_h and input bias value B; the input gate state coefficient W_Ig, cell coefficient W_Ic and bias value B_I; the output gate state coefficient W_Og, cell coefficient W_Oc and bias value B_O; the forget gate state coefficient W_Fg, cell coefficient W_Fc and bias value B_F; the initial cell state value x_0, cell hidden-layer value h_0, hidden-layer output coefficient W_O and bias value B_O; at the starting moment, the gate coefficients, bias values, cell states and hidden-layer values are all initialized to random values in the interval (0, 1).
6. The identification method according to claim 1, characterized in that step 4 comprises updating the coefficients, and specifically comprises the following sub-steps:
step 4.1, updating the input coefficient W_I and the output coefficient W_O (the update formulas are given only as equation images in the original publication), wherein I represents the all-ones vector;
step 4.2, updating the gate state coefficients (the update formulas are given only as equation images in the original publication), the three formulas yielding the updated values of the input gate state coefficient, the forget gate state coefficient and the output gate state coefficient, respectively;
step 4.3, updating the cell coefficients (the update formulas are given only as equation images in the original publication), the three formulas yielding the updated values of the input gate cell coefficient, the forget gate cell coefficient and the output gate cell coefficient, respectively;
step 4.4, updating the input hidden-layer coefficient (the update formula is given only as an equation image in the original publication), wherein ⊙ represents element-wise multiplication and tanh(x) represents the activation function.
7. The identification method according to claim 6, wherein the parameter set at the current moment is used as the final neural network parameters; the parameter set, denoted W_opt, collects the coefficients and bias values defined in step 3.5 (its explicit expression is given only as an equation image in the original publication).
8. The identification method according to claim 1, characterized in that step 6 comprises the following sub-steps:
step 6.1, classifying the verification set data based on the final-state neural network parameters W_opt, the input gate output, the forget gate output, the output gate output, the cell state, the hidden-layer value and the classification output value corresponding to the current weight coefficients, and outputting the classification results;
step 6.2, comparing the output classification results with the track labels, and counting the recognition rate as the percentage of verification tracks whose classification result matches the label (the formula is given only as an equation image in the original publication).
CN202210002980.3A 2022-01-04 2022-01-04 Recognition method of low, slow and small target based on LSTM model of pulse Doppler radar Pending CN114296067A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210002980.3A CN114296067A (en) 2022-01-04 2022-01-04 Recognition method of low, slow and small target based on LSTM model of pulse Doppler radar

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210002980.3A CN114296067A (en) 2022-01-04 2022-01-04 Recognition method of low, slow and small target based on LSTM model of pulse Doppler radar

Publications (1)

Publication Number Publication Date
CN114296067A true CN114296067A (en) 2022-04-08

Family

ID=80974946

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210002980.3A Pending CN114296067A (en) 2022-01-04 2022-01-04 Recognition method of low, slow and small target based on LSTM model of pulse Doppler radar

Country Status (1)

Country Link
CN (1) CN114296067A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116520252A (en) * 2023-04-03 2023-08-01 中国人民解放军93209部队 Intelligent recognition method and system for aerial targets

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110488278A (en) * 2019-08-20 2019-11-22 深圳锐越微技术有限公司 Doppler radar signal kind identification method
CN111638488A (en) * 2020-04-10 2020-09-08 西安电子科技大学 Radar interference signal identification method based on LSTM network
CN112434643A (en) * 2020-12-06 2021-03-02 零八一电子集团有限公司 Classification and identification method for low-slow small targets
US20210270959A1 (en) * 2020-02-28 2021-09-02 The Boeing Company Target recognition from sar data using range profiles and a long short-term memory (lstm) network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110488278A (en) * 2019-08-20 2019-11-22 深圳锐越微技术有限公司 Doppler radar signal kind identification method
US20210270959A1 (en) * 2020-02-28 2021-09-02 The Boeing Company Target recognition from sar data using range profiles and a long short-term memory (lstm) network
CN111638488A (en) * 2020-04-10 2020-09-08 西安电子科技大学 Radar interference signal identification method based on LSTM network
CN112434643A (en) * 2020-12-06 2021-03-02 零八一电子集团有限公司 Classification and identification method for low-slow small targets

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
曹展家 等: "基于堆叠双向LSTM的雷达目标识别方法", 《计算机测量与控制》, vol. 29, no. 12, 21 December 2021 (2021-12-21), pages 126 - 131 *
王智文: "《人脸检测与识别研究》", 30 November 2020, 西南交通大学出版社, pages: 108 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116520252A (en) * 2023-04-03 2023-08-01 中国人民解放军93209部队 Intelligent recognition method and system for aerial targets
CN116520252B (en) * 2023-04-03 2024-03-15 中国人民解放军93209部队 Intelligent recognition method and system for aerial targets

Similar Documents

Publication Publication Date Title
CN112132042B (en) SAR image target detection method based on contrast domain adaptation
CN110133599B (en) Classification method of intelligent radar emitter signal based on long short-term memory model
CN111913156A (en) Individual identification method of radar radiation source based on deep learning model and feature combination
CN104931960A (en) Trend message and radar target state information whole-track data correlation method
CN110188647A (en) A Feature Extraction and Classification Method of Radar Emitter Based on Variational Mode Decomposition
CN113486917B (en) Radar HRRP small sample target recognition method based on metric learning
Xiao et al. Specific emitter identification of radar based on one dimensional convolution neural network
Xie et al. Dual-channel and bidirectional neural network for hypersonic glide vehicle trajectory prediction
CN114594440A (en) Radar high-resolution one-dimensional range image target recognition method and system based on dual parallel network
CN105913081A (en) Improved PCAnet-based SAR image classification method
CN111368653B (en) Low-altitude small target detection method based on R-D graph and deep neural network
CN104239901A (en) Polarized SAR image classification method based on fuzzy particle swarm and target decomposition
CN111401168A (en) Multi-layer radar feature extraction and selection method for unmanned aerial vehicle
CN113159218A (en) Radar HRRP multi-target identification method and system based on improved CNN
CN113064133A (en) Sea surface small target feature detection method based on time-frequency domain depth network
Tian et al. Performance evaluation of deception against synthetic aperture radar based on multifeature fusion
CN113743180A (en) CNNKD-based radar HRRP small sample target identification method
CN114973019B (en) A method and system for detecting and classifying geospatial information changes based on deep learning
CN114296067A (en) Recognition method of low, slow and small target based on LSTM model of pulse Doppler radar
CN116304966A (en) Track association method based on multi-source data fusion
CN115015908A (en) Radar target data association method based on graph neural network
CN119830149A (en) Radiation source open set identification method and device based on dynamic group constant-change network and generation countermeasure network
CN117572355A (en) Intelligent deception method and device for target recognition network model
CN114445456B (en) Data-driven intelligent maneuvering target tracking method and device based on partial model
CN117665807A (en) Face recognition method based on millimeter wave multi-person zero sample

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination