CN112986941B - Radar target micro-motion feature extraction method - Google Patents

Radar target micro-motion feature extraction method Download PDF

Info

Publication number
CN112986941B
CN112986941B (application CN202110187205.5A)
Authority
CN
China
Prior art keywords
micro
time sequence
sequence
model
radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110187205.5A
Other languages
Chinese (zh)
Other versions
CN112986941A (en)
Inventor
杨嘉琛
杨悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202110187205.5A priority Critical patent/CN112986941B/en
Publication of CN112986941A publication Critical patent/CN112986941A/en
Application granted granted Critical
Publication of CN112986941B publication Critical patent/CN112986941B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01S — RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 — Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 — Details of systems according to group G01S13/00
    • G01S7/41 — using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/415 — Identification of targets based on measurements of movement associated with the target
    • G01S7/417 — involving the use of neural networks
    • G01S7/418 — Theoretical aspects

Abstract

The invention provides a radar target micro-motion feature extraction method comprising the following steps. First, a data set is prepared: the data used are the radar cross-section (RCS) time series and the high-resolution range profile (HRRP) time series of an object; the micro-motion characteristics are abstracted as a precession period and a nutation angle, whose values are annotated as the data-set labels, and a training set and a test set are determined. Second, a micro-motion feature extraction model is built for training; the model is formed by connecting a multi-layer Transformer encoder and bidirectional LSTMs in series. By training the micro-motion feature extraction model and adjusting the network weight parameters, micro-motion feature extraction models applied to the radar cross-section time series and the high-resolution range profile time series are obtained respectively.

Description

Radar target micro-motion feature extraction method
Technical Field
The invention belongs to the field of automatic radar target recognition, and proposes a method for extracting radar target micro-motion features based on a multi-layer Transformer encoder connected in series with bidirectional LSTMs.
Background
Space ballistic target detection and recognition technology is mainly used to identify the specific type and authenticity of a target. It relies on the radar's perception of the target's geometry, such as shape and size, and of target characteristics such as infrared and micro-motion signatures. With the development of space-target feature-control technology and true/false target camouflage technology, space-target detection and recognition methods based on radar and other signals face serious challenges.
During its midcourse motion in space, a target inevitably undergoes small motions such as vibration, spin, precession and rolling, which are called the micro-motion of the object. Micro-motion modulates the radar signal in the time domain, so the radar-signal time series of a target contains the object's motion information in addition to the geometric-structure information specific to the target. Because a decoy generally does not employ attitude-control technology and therefore exhibits rolling motion, accurate extraction of target micro-motion features has important research value in the field of true/false target discrimination.
V. C. Chen [1,2] was the earliest to propose applying micro-motion features to radar target recognition. Lei P. [3] proposed time-frequency analysis of target micro-Doppler signals to obtain the target micro-motion characteristics. Li W. [4] performed reliable micro-motion feature extraction by extracting target scattering points from HRRP and ISAR images.
While accomplishing the micro-motion feature extraction task, the conventional methods above still have the following problems:
1. a priori algorithm design must be completed in advance;
2. some algorithms need to reconstruct the micro-motion trajectory before quantization processing.
To address these problems, the invention adopts deep learning theory [5] and improves upon it to extract micro-motion features from radar cross-section time series and high-resolution range profile time series.
[1] Chen V C, Li F, Ho S S, et al. Micro-Doppler effect in radar: Phenomenon, model, and simulation study[J]. IEEE Transactions on Aerospace & Electronic Systems, 2006, 42(1): 2-21.
[2] Chen V C, Li F, Ho S S, et al. Analysis of micro-Doppler signatures[J]. IEE Proceedings - Radar, Sonar and Navigation, 2003, 150(4): 271-276.
[3] Lei P, Sun J, Wang J, et al. Micromotion Parameter Estimation of Free Rigid Targets Based on Radar Micro-Doppler[J]. IEEE Transactions on Geoscience & Remote Sensing, 2012, 50(10): 3776-3786.
[4] Li W, Fan H, Ren L, et al. Micromotion Feature Extraction Based on Phase-Derived Range and Velocity Measurement[J]. IEEE Access, 2019, PP(99): 1-1.
[5] LeCun Y, Bengio Y, Hinton G. Deep learning[J]. Nature, 2015, 521: 436-444. doi: 10.1038/nature14539.
Disclosure of Invention
The invention aims to provide an end-to-end micro-motion feature extraction method. The technical scheme is as follows:
a radar target micro-motion feature extraction method comprises the following steps:
first, a data set is prepared
The data are the radar cross-section time series and the high-resolution range profile time series of the object, and the micro-motion characteristics are abstracted as a precession period and a nutation angle; the method comprises the following steps:
(1) generating a radar cross-section angle sequence and a high-resolution range profile angle sequence;
(2) generating a target attitude-angle time series to obtain a precession period and a nutation angle, which serve as the micro-motion features;
(3) generating a radar cross-section time series and a high-resolution range profile time series by interpolation, annotating the precession period and nutation angle information as the data-set labels, and determining a training set and a test set;
secondly, building a micro-motion feature extraction model for training
The training-set data are input into the micro-motion feature extraction model to train the model and adjust its parameters. Each sequence sample is characterized as X ∈ R^(c×T), where c denotes the dimension of each frame of data, T is the length of the time series, and x_t ∈ R^c denotes the input vector of each frame in the sequence; the frame dimension of the radar cross-section time series is 1, and that of the high-resolution range profile time series is 512;
the adopted micro-motion characteristic extraction model is formed by connecting a plurality of layers of transform encoders and bidirectional LSTMs in series;
(1) Multi-layer Transformer encoder
This deep learning sequence model based on the attention mechanism uses the multi-head attention mechanism MultiHead to ensure the diversity of depth information, while introducing residual connections to ensure that the gradient of the model does not vanish as the depth increases. It is specifically implemented as follows:
the multi-head attention mechanism MultiHead is used, in which the outputs of several single-head attention mechanisms are concatenated, giving the network more weight parameters than a single-head attention mechanism so as to ensure the diversity of depth information;
residual connections are introduced on the basis of the multi-head attention mechanism, the result of a residual connection being the sum of the input x and its multi-head-attention mapping;
two linear transformations are added on the basis of the residual connection to obtain the final encoder output;
(2) bidirectional LSTMs
A multi-layer Transformer encoder and two bidirectional long short-term memory (LSTM) neural network models are connected in series to form the micro-motion feature extraction model; each bidirectional LSTM model is composed of a chain of modules, and each module contains three structures from input to output: an input gate, a forget gate, and an output gate;
and respectively obtaining the micro characteristic extraction models applied to the radar scattering sectional area time sequence and the high-resolution range profile time sequence by training the micro characteristic extraction model and adjusting the network weight parameters.
According to the temporal characteristics of radar signal time series, classical sequence models are first used to extract features from the different radar-signal time series respectively; a sequence-model structure is then designed by combining the characteristics of deep neural networks with the structure of the sequence data, and this structure effectively realizes end-to-end micro-motion feature extraction. In the end-to-end experiment, the preprocessed radar cross-section time series and high-resolution range profile time series are input into the deep learning neural network model respectively to obtain the trained micro-motion feature models.
Drawings
FIG. 1: data-set generation results
FIG. 2: architecture of the deep learning model with a multi-layer Transformer encoder and bidirectional LSTMs connected in series
FIG. 3: structure of the multi-layer Transformer encoder
FIG. 4: experimental results
Detailed Description
To make the technical scheme of the invention clearer, the invention is further explained with reference to the accompanying drawings.
The invention provides a sequence-model structure that effectively realizes end-to-end radar target micro-motion feature extraction, implemented according to the following steps:
first, a data set is prepared
The experimental data used in the method are the radar cross-section time series and the high-resolution range profile time series of an object, which can be generated by simulation in the following three steps. First, establish the mappings from angle to radar cross-section and from angle to high-resolution range profile; second, establish the mapping between time and angle; third, using the angle value as an intermediary, obtain the mappings from time to radar cross-section and from time to high-resolution range profile. The implementation is as follows:
(1) generating a radar scattering sectional area angle sequence and a high-resolution range image angle sequence
The radar cross-section angle sequence and the high-resolution range profile angle sequence of the object are obtained by simulation with CST (Computer Simulation Technology). The invention takes the following four types of targets as examples: sphere, cone, cone-cylinder, and cylinder. Fig. 1(a) is an example of a radar cross-section angle sequence, and Fig. 1(b) is an example of a high-resolution range profile angle sequence.
(2) Generating a target pose angular time series
The attitude-angle time series is computed with MATLAB. The positional relation between the object's motion trajectory and the radar in a real environment is simulated using 19 trajectories and 3 radar positions, generating 57 (19 × 3) target attitude-angle time series. The micro-motion features are added in this step: they are abstracted as a precession period and a nutation angle, with the precession period ranging from 3 to 8 seconds in steps of 1 second and the nutation angle ranging from 3 to 10 degrees in steps of 1 degree, and the data set is generated with these values uniformly distributed. Fig. 1(c) shows attitude-angle time series of one trajectory and radar under different nutation angles and precession periods.
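The patent's attitude-angle series come from MATLAB trajectory simulation; as a rough illustration of how a precession period and nutation angle parameterize such a series, the following Python sketch (the sinusoidal model, sampling rate, and all names are assumptions, not the patent's simulation) sweeps the same label grid of periods 3–8 s and nutation angles 3–10 degrees:

```python
import numpy as np

def attitude_angle_series(precession_period, nutation_angle, mean_angle=20.0,
                          fs=50.0, duration=10.0):
    """Toy attitude-angle time series: the aspect angle oscillates around a
    mean value with amplitude = nutation angle (deg) and period = precession
    period (s). Illustrative model only, not the patent's MATLAB simulation."""
    t = np.arange(0.0, duration, 1.0 / fs)
    theta = mean_angle + nutation_angle * np.sin(2.0 * np.pi * t / precession_period)
    return t, theta

# The label grid used in the patent: periods 3-8 s, nutation angles 3-10 deg.
labels = [(p, a) for p in range(3, 9) for a in range(3, 11)]

# One example series: period 5 s, nutation angle 6 deg.
t, theta = attitude_angle_series(precession_period=5, nutation_angle=6)
```

With `fs=50.0` and `duration=10.0` the series has exactly 500 samples, matching the truncation length used later for training.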
(3) Generating a radar scattering cross-sectional area time series and a high resolution range profile time series
The radar cross-section time series and the high-resolution range profile time series are generated by interpolation: taking the time values as the sequence index, the radar cross-section value or high-resolution range profile value corresponding to the target attitude angle at a given moment is inserted at the position corresponding to that moment, forming the radar cross-section time series and the high-resolution range profile time series respectively. Taking the cone target as an example, Fig. 1(d) shows examples of radar cross-section time series under different micro-motions, and Fig. 1(e) shows examples of high-resolution range profile time series under different micro-motions.
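The interpolation step above, using the attitude angle as the intermediary between time and radar cross-section, can be sketched in Python. The RCS curve and attitude series below are placeholders (the real ones come from CST and MATLAB), and `np.interp` stands in for whatever interpolation scheme the authors used:

```python
import numpy as np

# Hypothetical angle-to-RCS mapping (from CST in the patent; a smooth
# placeholder curve here).
angles = np.linspace(0.0, 90.0, 91)             # aspect-angle grid, degrees
rcs_of_angle = np.cos(np.radians(angles)) ** 2  # placeholder RCS values

# Attitude-angle time series from step (2); toy sinusoid here.
t = np.arange(0.0, 10.0, 0.02)
theta_t = 20.0 + 6.0 * np.sin(2.0 * np.pi * t / 5.0)

# Interpolation: for each time step, look up the RCS value at that attitude
# angle, giving the RCS *time* series with time as the sequence index.
rcs_t = np.interp(theta_t, angles, rcs_of_angle)
```

The same lookup applied column-wise to an angle-indexed HRRP matrix would yield the high-resolution range profile time series.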
When the data are stored, the precession period and nutation angle of each sample are annotated, i.e., the data-set labels are added. The data set is then randomly partitioned into a training set and a test set at a ratio of 8:2.
Secondly, building a micro-motion feature extraction model
The training-set data in the data set are input into the micro-motion feature extraction model to train the model and adjust its parameters. Each sequence sample is characterized as X ∈ R^(c×T), where c denotes the dimension of each frame of data, T is the length of the time series, and x_t ∈ R^c denotes the input vector of each frame in the sequence. For our data, the frame dimension of the radar cross-section time series is 1 and that of the high-resolution range profile time series is 512. The lengths of the time series generated from different trajectories and radar positions differ somewhat, but because the micro-motion signature is present throughout both the radar cross-section time series and the high-resolution range profile time series, the first 500 frames of each training sample are truncated and used as the training data.
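A minimal sketch of the data preparation described above — truncation to the first 500 frames and the random 8:2 split — using random placeholder arrays in place of the simulated HRRP sequences (sample count and lengths are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical HRRP data set: 10 samples with c = 512 range cells and
# variable length T >= 500 (random placeholders for the simulated series).
samples = [rng.normal(size=(512, int(rng.integers(500, 600)))) for _ in range(10)]
labels = rng.integers(0, 48, size=10)  # one (period, angle) class index each

# Truncate every sequence to its first 500 frames, as in the patent.
X = np.stack([s[:, :500] for s in samples])   # shape (10, 512, 500), i.e. (N, c, T)

# Random 8:2 train/test split.
idx = rng.permutation(len(X))
n_train = int(0.8 * len(X))
X_train, X_test = X[idx[:n_train]], X[idx[n_train:]]
y_train, y_test = labels[idx[:n_train]], labels[idx[n_train:]]
```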
The micro-motion feature extraction model adopted by the invention is formed by connecting a multi-layer Transformer encoder and bidirectional LSTMs in series; its structure is shown in Fig. 2, and the specific structure is as follows:
(1) Multi-layer Transformer encoder
The Transformer encoder is a deep learning sequence model based on the attention mechanism; the specific structure of the multi-layer Transformer encoder is shown in Fig. 3. The attention mechanism is analogous to human attention: a person rapidly scans the whole text to locate the region requiring focus (the focus of attention), then devotes more attention resources to that region to obtain more detailed information about the target of interest while suppressing other useless information.
The multi-layer Transformer encoder is implemented as follows:
first, a multi-head attention mechanism (MultiHead) is used. The single-head attention mechanism can be understood from the coding point of view as dynamically multiplying the numerical values of different time sequence positions by different weights and finally summing. The multi-head attention mechanism can be understood that a plurality of single-head attention mechanisms are connected with the output, and more weight parameters are set for the network on the basis of the single-head attention mechanism to ensure the diversity of depth information.
Second, residual connections are introduced on the basis of the multi-head attention mechanism. A residual connection sums the input x with its mapping (here the multi-head attention output), ensuring that the gradient of the model does not vanish as the depth increases. The residual connection is expressed as:
shortcut(f(·), x) = f(x) + x
Third, two linear transformations are added on the basis of the residual connection. W_1, W_2, b_1 and b_2 are the weight and bias parameters of the two linear transformations, respectively, and max(0, ·) takes the larger of 0 and xW_1 + b_1 (the ReLU activation). The two linear transformations are expressed as:
FFNN(x) = max(0, xW_1 + b_1)W_2 + b_2
adding the residual concatenation of the two linear transformations ensures that the model does not vanish in gradient due to the increase in depth,
the final output of the encoding can be expressed as:
Encoder(x)=shortcut(FFNN(·),shortcut(MultiHead(·),x))
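The encoder computation summarized by these formulas — multi-head self-attention and the two-layer feed-forward network, each wrapped in a residual shortcut — can be sketched directly in numpy. Layer normalization and other details of a full Transformer encoder are omitted, and the weight shapes and initialization below are illustrative assumptions:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads):
    """Self-attention over a sequence x of shape (T, d); weights are (d, d)."""
    T, d = x.shape
    dh = d // n_heads
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    heads = []
    for h in range(n_heads):
        qh, kh, vh = (m[:, h * dh:(h + 1) * dh] for m in (q, k, v))
        att = softmax(qh @ kh.T / np.sqrt(dh))   # (T, T) attention weights
        heads.append(att @ vh)                   # weighted sum of the values
    return np.concatenate(heads, axis=-1) @ Wo   # concatenate heads, project

def ffnn(x, W1, b1, W2, b2):
    # FFNN(x) = max(0, x W1 + b1) W2 + b2
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

def encoder(x, params, n_heads=4):
    # Encoder(x) = shortcut(FFNN, shortcut(MultiHead, x))
    h = x + multi_head_attention(x, *params["attn"], n_heads)  # residual 1
    return h + ffnn(h, *params["ffn"])                         # residual 2

rng = np.random.default_rng(0)
d, T = 16, 10  # toy frame dimension and sequence length
params = {
    "attn": [rng.normal(scale=0.1, size=(d, d)) for _ in range(4)],
    "ffn": [rng.normal(scale=0.1, size=(d, 4 * d)), np.zeros(4 * d),
            rng.normal(scale=0.1, size=(4 * d, d)), np.zeros(d)],
}
y = encoder(rng.normal(size=(T, d)), params)
```

Stacking several such `encoder` calls gives the multi-layer encoder; the residual form keeps the output shape equal to the input shape at every layer.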
(2) bidirectional LSTMs
Deep learning models based on RNNs (recurrent neural networks) are difficult to fit because of their parameter constraints, and they extract deep features less effectively than CNN (convolutional neural network) models, but the performance of RNN models along the time dimension is considerable.
Therefore, the invention explores the network model by increasing the number of hidden-layer nodes, increasing the number of network layers, and adding network modules, and designs a deep learning model with bidirectional LSTMs connected in series, i.e., two LSTM (long short-term memory) neural network models in series. An LSTM consists of a chain of modules, and each module contains three structures from input to output: an input gate g_i (Input Gate), a forget gate g_f (Forget Gate), and an output gate g_o (Output Gate).
In an LSTM module, the operations at time step t can be expressed by the following formulas, where x_t is the input vector of the cell at time t, h_t is the hidden-layer output, W_f, W_i, W_o and W_C are the weight parameters of the forget gate, input gate, output gate, and cell state within the module, I_f, I_i, I_o and I_C are the projection-matrix weights of the input information, and σ(·) is the sigmoid function:
g_f = σ(W_f · h_{t-1} + I_f · x_t)
g_i = σ(W_i · h_{t-1} + I_i · x_t)
g_o = σ(W_o · h_{t-1} + I_o · x_t)
The activations above yield the gate values g_f, g_i and g_o of the forget gate, input gate, and output gate, respectively. Then, by matrix multiplication and tanh activation, the candidate cell state C̃_t and the final cell state C_t of each step are obtained step by step, and from them the hidden-layer output h_t at time t, where ⊙ denotes the Hadamard (element-wise) product of matrices:
C̃_t = tanh(W_C · h_{t-1} + I_C · x_t)
C_t = g_f ⊙ C_{t-1} + g_i ⊙ C̃_t
h_t = g_o ⊙ tanh(C_t)
The above steps can be understood simply as follows: the input gate controls whether the current input is added to the memory cell, the forget gate controls whether the memory cell is reset to zero, and the output gate controls whether the value in the memory cell is output.
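The gate equations above can be checked with a direct numpy implementation of one LSTM cell step. Following the formulas in the text, no bias terms are used; the weight shapes and toy dimensions are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, I):
    """One LSTM step following the gate equations in the text (no biases).
    W maps the previous hidden state, I projects the input; both are dicts
    keyed by gate: 'f' (forget), 'i' (input), 'o' (output), 'C' (cell)."""
    g_f = sigmoid(W["f"] @ h_prev + I["f"] @ x_t)      # forget gate
    g_i = sigmoid(W["i"] @ h_prev + I["i"] @ x_t)      # input gate
    g_o = sigmoid(W["o"] @ h_prev + I["o"] @ x_t)      # output gate
    C_tilde = np.tanh(W["C"] @ h_prev + I["C"] @ x_t)  # candidate cell state
    C_t = g_f * C_prev + g_i * C_tilde                 # Hadamard combination
    h_t = g_o * np.tanh(C_t)                           # hidden-layer output
    return h_t, C_t

rng = np.random.default_rng(0)
c, d = 4, 8  # toy input and hidden dimensions
W = {k: rng.normal(scale=0.1, size=(d, d)) for k in "fioC"}
I = {k: rng.normal(scale=0.1, size=(d, c)) for k in "fioC"}
h, C = np.zeros(d), np.zeros(d)
for t in range(5):  # run a short sequence through the cell
    h, C = lstm_step(rng.normal(size=c), h, C, W, I)
```

A bidirectional LSTM runs such a cell over the sequence in both directions and combines the two hidden-state streams.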
By training the micro-motion feature extraction model with the training-set data, micro-motion feature extraction models applied to the radar cross-section time series and the high-resolution range profile time series can be obtained respectively.
Thirdly, testing the extraction performance of the micro-motion feature extraction model
During testing, the radar cross-section time series and the high-resolution range profile time series in the test set are input into their corresponding micro-motion feature extraction models to obtain predicted precession periods and nutation angles, which are compared with the true precession periods and nutation angles in the test-set labels to compute the models' prediction accuracy. For the radar cross-section time series, the precession-period extraction accuracy is 99.4% and the nutation-angle extraction accuracy is 93.27%; for the high-resolution range profile time series, the precession-period extraction accuracy is 99.07% and the nutation-angle extraction accuracy is 99.58%.
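The accuracy figures above are simply the fraction of test samples whose predicted micro-motion label matches the ground truth. A sketch with hypothetical predictions (the values below are illustrative, not the patent's results):

```python
import numpy as np

def extraction_accuracy(pred, true):
    """Fraction of samples whose predicted label equals the true label."""
    pred, true = np.asarray(pred), np.asarray(true)
    return float((pred == true).mean())

# Hypothetical predicted vs. true precession periods (seconds) for 10 samples.
pred_period = [5, 3, 8, 4, 6, 7, 3, 5, 4, 6]
true_period = [5, 3, 8, 4, 6, 7, 3, 5, 4, 8]
acc = extraction_accuracy(pred_period, true_period)  # 9 of 10 correct
```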
The micro-motion feature extraction model constructed by the method is compared with 3 other models on the micro-motion feature extraction task; the experimental results are shown in Fig. 4. In terms of the extraction accuracy of the precession period and nutation angle, the proposed model improves on both the radar cross-section time series and the high-resolution range profile time series compared with a series bidirectional-LSTMs model and ATLSTM (Attention-based LSTM); compared with ResNet (Residual Neural Network), it performs slightly worse on the radar cross-section time series but better on the high-resolution range profile time series.

Claims (1)

1. A radar target micro-motion feature extraction method comprises the following steps:
first, a data set is prepared
The data are the radar cross-section time series and the high-resolution range profile time series of the object, and the micro-motion characteristics are abstracted as a precession period and a nutation angle; the method comprises the following steps:
(1) generating a radar cross-section angle sequence and a high-resolution range profile angle sequence;
(2) generating a target attitude-angle time series to obtain a precession period and a nutation angle, which serve as the micro-motion features;
(3) generating a radar cross-section time series and a high-resolution range profile time series by interpolation, annotating the precession period and nutation angle information as the data-set labels, and determining a training set and a test set;
secondly, building a micro-motion feature extraction model for training
The training-set data are input into the micro-motion feature extraction model to train the model and adjust its parameters. Each sequence sample is characterized as X ∈ R^(c×T), where c denotes the dimension of each frame of data, T is the length of the time series, and x_t ∈ R^c denotes the input vector of each frame in the sequence; the frame dimension of the radar cross-section time series is 1, and that of the high-resolution range profile time series is 512;
the adopted micro-motion characteristic extraction model is formed by connecting a plurality of layers of transform encoders and bidirectional LSTMs in series;
(1) Multi-layer Transformer encoder
This deep learning sequence model based on the attention mechanism uses the multi-head attention mechanism MultiHead to ensure the diversity of depth information, while introducing residual connections to ensure that the gradient of the model does not vanish as the depth increases. It is specifically implemented as follows:
the multi-head attention mechanism MultiHead is used, in which the outputs of several single-head attention mechanisms are concatenated, giving the network more weight parameters than a single-head attention mechanism so as to ensure the diversity of depth information;
residual connections are introduced on the basis of the multi-head attention mechanism, the result of a residual connection being the sum of the input x and its multi-head-attention mapping;
two linear transformations are added on the basis of the residual connection to obtain the final encoder output;
(2) bidirectional LSTMs
A multi-layer Transformer encoder and two bidirectional long short-term memory (LSTM) neural network models are connected in series to form the micro-motion feature extraction model; each bidirectional LSTM model is composed of a chain of modules, and each module contains three structures from input to output: an input gate, a forget gate, and an output gate;
and respectively obtaining the micro characteristic extraction models applied to the radar scattering sectional area time sequence and the high-resolution range profile time sequence by training the micro characteristic extraction model and adjusting the network weight parameters.
CN202110187205.5A 2021-02-08 2021-02-08 Radar target micro-motion feature extraction method Active CN112986941B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110187205.5A CN112986941B (en) 2021-02-08 2021-02-08 Radar target micro-motion feature extraction method

Publications (2)

Publication Number Publication Date
CN112986941A CN112986941A (en) 2021-06-18
CN112986941B true CN112986941B (en) 2022-03-04

Family

ID=76393517

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110187205.5A Active CN112986941B (en) 2021-02-08 2021-02-08 Radar target micro-motion feature extraction method

Country Status (1)

Country Link
CN (1) CN112986941B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114052740B (en) * 2021-11-29 2022-12-30 中国科学技术大学 Non-contact electrocardiogram monitoring method based on millimeter wave radar
CN115834310B (en) * 2023-02-15 2023-05-09 四川轻化工大学 LGTransformer-based communication signal modulation identification method

Citations (7)

Publication number Priority date Publication date Assignee Title
CN110245581A (en) * 2019-05-25 2019-09-17 天津大学 A kind of Human bodys' response method based on deep learning and distance-Doppler sequence
CN110363151A (en) * 2019-07-16 2019-10-22 中国人民解放军海军航空大学 Based on the controllable radar target detection method of binary channels convolutional neural networks false-alarm
CN111596276A (en) * 2020-04-02 2020-08-28 杭州电子科技大学 Radar HRRP target identification method based on spectrogram transformation and attention mechanism recurrent neural network
CN111736125A (en) * 2020-04-02 2020-10-02 杭州电子科技大学 Radar target identification method based on attention mechanism and bidirectional stacked cyclic neural network
CN111859784A (en) * 2020-06-24 2020-10-30 天津大学 RCS time series feature extraction method based on deep learning neural network
CN111914400A (en) * 2020-07-03 2020-11-10 天津大学 HRRP (high resolution regression) feature extraction method based on multitask learning
CN111968629A (en) * 2020-07-08 2020-11-20 重庆邮电大学 Chinese speech recognition method combining Transformer and CNN-DFSMN-CTC

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US10410113B2 (en) * 2016-01-14 2019-09-10 Preferred Networks, Inc. Time series data adaptation and sensor fusion systems, methods, and apparatus
CN110472627B (en) * 2019-07-02 2022-11-08 五邑大学 End-to-end SAR image recognition method, device and storage medium

Non-Patent Citations (5)

Title
Classification of Space Targets with Micro-motion Based on Deep CNN; Yizhe Wang et al.; 2019 IEEE 2nd International Conference on Electronic Information and Communication Technology (ICEICT); 2019-11-04; full text *
Micro-motion extraction technique for ballistic midcourse targets based on RCS sequences; Chen Ao; Modern Radar; June 2012; Vol. 34, No. 06; full text *
Research on radar discrimination of true and false targets based on micro-Doppler features; Gao Hongwei et al.; Chinese Journal of Radio Science; August 2008; Vol. 23, No. 04; full text *
Classification of micro-motion forms of space cone targets based on time-frequency distributions; Han Xun et al.; Systems Engineering and Electronics; April 2013; Vol. 35, No. 04; full text *
Imaging feature extraction and analysis of micro-motion targets with stepped-frequency wideband radar; Hu Guang et al.; Journal of Projectiles, Rockets, Missiles and Guidance; August 2013; Vol. 33, No. 04; full text *

Also Published As

Publication number Publication date
CN112986941A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN109086700B (en) Radar one-dimensional range profile target identification method based on deep convolutional neural network
Seyfioglu et al. DNN transfer learning from diversified micro-Doppler for motion classification
US11402494B2 (en) Method and apparatus for end-to-end SAR image recognition, and storage medium
Choi et al. Short-range radar based real-time hand gesture recognition using LSTM encoder
CN110472483B (en) SAR image-oriented small sample semantic feature enhancement method and device
CN112986941B (en) Radar target micro-motion feature extraction method
CN106951923B (en) Robot three-dimensional shape recognition method based on multi-view information fusion
Long et al. Lira-YOLO: A lightweight model for ship detection in radar images
CN113313123B (en) Glance path prediction method based on semantic inference
CN110956154A (en) Vibration information terrain classification and identification method based on CNN-LSTM
Liu et al. Background classification method based on deep learning for intelligent automotive radar target detection
CN111027627A (en) Vibration information terrain classification and identification method based on multilayer perceptron
Kim et al. Human detection based on time-varying signature on range-Doppler diagram using deep neural networks
Pan et al. A novel approach for marine small target detection based on deep learning
Kreutz et al. Applied spiking neural networks for radar-based gesture recognition
CN112364689A (en) Human body action and identity multi-task identification method based on CNN and radar image
CN111401180A (en) Neural network recognition model training method and device, server and storage medium
CN108830172A (en) Aircraft remote sensing images detection method based on depth residual error network and SV coding
Zhu et al. Ground target recognition using carrier-free UWB radar sensor with a semi-supervised stacked convolutional denoising autoencoder
Xie et al. Neural network normal estimation and bathymetry reconstruction from sidescan sonar
Liu et al. Dim and small target detection in multi-frame sequence using Bi-Conv-LSTM and 3D-Conv structure
Decourt et al. A recurrent CNN for online object detection on raw radar frames
de Oliveira et al. Generating synthetic short-range fmcw range-doppler maps using generative adversarial networks and deep convolutional autoencoders
Jo et al. Mixture density-PoseNet and its application to monocular camera-based global localization
Singh et al. An enhanced YOLOv5 based on color harmony algorithm for object detection in unmanned aerial vehicle captured images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant