AU2020104006A4 - Radar target recognition method based on feature pyramid lightweight convolutional neural network - Google Patents

Radar target recognition method based on feature pyramid lightweight convolutional neural network

Info

Publication number
AU2020104006A4
Authority
AU
Australia
Prior art keywords
layer
convolution
model
feature
radar target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2020104006A
Inventor
Chen Guo
Qiang Guo
Youpeng HUANG
Shuyi JIA
Hao Liu
Xinlong PAN
Liqiang REN
Shun SUN
Tiantian Tang
Haipeng Wang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Naval Aeronautical University
Original Assignee
Naval Aeronautical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Naval Aeronautical University filed Critical Naval Aeronautical University
Priority to AU2020104006A priority Critical patent/AU2020104006A4/en
Application granted
Publication of AU2020104006A4 publication Critical patent/AU2020104006A4/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/003Bistatic radar systems; Multistatic radar systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/06Systems determining position data of a target
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/245Classification techniques relating to the decision surface
    • G06F18/2453Classification techniques relating to the decision surface non-linear, e.g. polynomial classifier
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Abstract

The present invention belongs to the technical field of radar target automatic recognition, and provides a radar target recognition method based on a feature pyramid lightweight convolutional neural network (CNN). The present invention solves the problem of high-resolution range profile (HRRP) feature extraction, classification and recognition of a target under a low signal-to-noise ratio (SNR). This method uses a multi-scale representation of HRRP to construct a multi-channel input of a model, thereby taking into account local information and global information of the target. This method also designs a lightweight CNN based on a depthwise separable convolution layer, which effectively reduces the number of parameters and improves the generalization performance of the model. In addition, this method adds a feature pyramid fusion method to extract robust features of the target, which improves the stability of the model.

Description

RADAR TARGET RECOGNITION METHOD BASED ON FEATURE PYRAMID LIGHTWEIGHT CONVOLUTIONAL NEURAL NETWORK

TECHNICAL FIELD
The present invention belongs to the technical field of radar target automatic recognition, and provides a radar target recognition method based on a feature pyramid lightweight convolutional neural network (CNN). The present invention solves the problem of high-resolution range profile (HRRP) feature extraction, classification and recognition of a target under a low signal-to-noise ratio (SNR).
BACKGROUND
In recent years, deep learning (DL) algorithms for tasks such as target detection, classification and recognition have been widely used in the field of computer vision. These DL algorithms are data-driven, and the deep features they extract are robust and characterize the essential information of the target well. The high-resolution range profile (HRRP) includes a large amount of information, such as the structure and intensity of the target scattering points.

The DL algorithms currently applied to radar HRRP target recognition are mainly based on stacked auto-encoder models. The auto-encoder is an unsupervised feature extraction method, which cannot make good use of the target labels. Further, as a greedy algorithm is adopted to train the model layer by layer, feature extraction of the stacked auto-encoder tends to fail as the number of layers increases. In order to solve the above problems, the present invention proposes a supervised target recognition method based on a convolutional neural network (CNN).

The conventional deep CNN only uses the output feature of the deepest layer for target classification and recognition. Each layer of the deep CNN outputs a target feature: the shallow features are mostly contour and edge information, and the deep features are mostly high-level semantic information. In order to make full use of the features extracted from each layer, the present invention draws on the scale-invariant feature transform (SIFT) feature extraction method on top of the conventional deep CNN, and proposes a radar target recognition method based on a feature pyramid fusion lightweight CNN. This method uses the multi-scale representation of the HRRP to construct a multi-channel input to a model, thereby taking into account the local information and global information of a target.
This method also designs a lightweight CNN based on a depthwise separable convolution layer, which effectively reduces the number of parameters and improves the generalization performance of the model. In addition, this method adds a feature pyramid fusion method to extract robust features of the target, which improves the stability of the model.
SUMMARY OF THE INVENTION
An objective of the present invention is to provide a radar target recognition method based on a feature pyramid lightweight convolutional neural network (CNN). The present invention solves the problem of the low recognition rate of a high-resolution range profile (HRRP) under a low signal-to-noise ratio (SNR), and improves the robustness and generalization performance of the algorithm. A technical solution of the present invention includes: constructing multi-channel HRRP data based on a multi-scale space, constructing a depthwise separable convolution feature extraction block, building a feature pyramid lightweight CNN, and using training samples to train an end-to-end HRRP radar target recognition model. In order to achieve the above objective, the method of the present invention includes the following steps:

step 1: constructing multi-channel HRRP data based on a multi-scale space;

step 2: constructing a depthwise separable convolution feature extraction block, building a feature pyramid lightweight CNN, and initializing a parameter of a model;

step 3: performing a forward propagation (FP), and calculating a loss function in an iterative process;

step 4: performing a back-propagation (BP), and using a chain rule to update the parameter in the model; and

step 5: repeating steps 3 and 4 until the loss function converges, and obtaining a model that can be used for radar target recognition.

Compared with the prior art, the present invention has the following technical effects:

(1) The proposed model is a data-driven end-to-end model, and the model after training can automatically extract the deep features of a target.

(2) The proposed method uses the multi-scale representation of an HRRP to construct a multi-channel input of the model, so as to ensure that the proposed model takes into account global structure features while extracting detailed features of the target, which is helpful for extracting robust features.
(3) Compared with the conventional CNN, the lightweight CNN is designed based on a depthwise separable convolution layer, which reduces the number of parameters and improves the calculation efficiency and generalization performance of the model.
(4) The proposed model makes full use of the features of each layer, and improves the robustness and convergence speed of the proposed model by using a feature pyramid fusion method.
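The iterative training procedure of steps 3 to 5 above (forward propagation, loss calculation, back-propagation via the chain rule, parameter update, repeat until convergence) can be illustrated with a minimal sketch. This is a hypothetical single-layer softmax classifier on random data, not the patented network; all names, shapes and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 16))          # 64 training samples, 16 features
y = rng.integers(0, 3, size=64)            # 3 target classes
W = np.zeros((16, 3))                      # model parameter, initialized (step 2)

def forward(X, W):
    """Step 3: forward propagation producing softmax class probabilities."""
    logits = X @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

losses = []
for _ in range(200):                       # step 5: iterate until convergence
    p = forward(X, W)
    loss = -np.log(p[np.arange(len(y)), y]).mean()   # cross-entropy loss
    losses.append(loss)
    grad = p.copy()                        # step 4: back-propagation (chain rule)
    grad[np.arange(len(y)), y] -= 1.0
    W -= 0.1 * (X.T @ grad) / len(y)       # gradient-descent parameter update

print(losses[0], losses[-1])               # loss decreases over the iterations
```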
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 is a block diagram of a feature pyramid lightweight convolutional neural network (CNN) model.

FIG. 2 is a schematic diagram of the convolution process in a single depthwise convolution (DC) layer.

FIG. 3 is a structural diagram of a depthwise separable convolution feature extraction block.

FIG. 4 is a structural diagram of a feature pyramid fusion method.
DETAILED DESCRIPTION
The present invention is described in detail below with reference to the accompanying drawings of the specification.

The present invention first uses a Gaussian kernel function to obtain a multi-scale representation of a high-resolution range profile (HRRP) as an input to a model, and then designs and constructs a feature pyramid lightweight convolutional neural network (CNN) model to extract target features for target recognition. The proposed CNN model mainly includes four separable convolution feature extraction blocks, numbered blocks 1, 2, 3 and 4, respectively. The outputs of the first three blocks are used as the inputs of branches 1, 2 and 3, respectively, and each branch uses a depthwise convolution (DC) to down-sample its input. The outputs of branches 1, 2 and 3 are paralleled with the output of block 4 to obtain a feature vector. A pointwise convolution (PC) is used to fuse each channel feature of the feature vector, and the obtained feature vector is expanded into a one-dimensional (1D) vector to connect with the next fully connected (FC) layer. Finally, an output result is obtained from an output layer.

The overall block diagram of the proposed model is shown in FIG. 1. In the figure, DC represents a depthwise convolution layer, PC represents a pointwise convolution layer, and P represents a pooling layer with a stride of 2. In order to ensure that the length of the output vector of branches 1, 2 and 3 is the same as the length of the output vector of block 4, the strides of the DC layers of branches 1, 2 and 3 are 8, 4 and 2, respectively. There are 50 neurons in the FC layer, and the FC layer is a conventional neural network used as a classifier.

The proposed method is introduced and analyzed in detail from four aspects: 1. the multi-scale representation method of HRRP data; 2. the construction of the depthwise separable convolution feature extraction block; 3. the specific method of feature pyramid fusion; and 4. the computational complexity of the proposed model and other deep models.

1. Construction of multi-channel HRRP based on multi-scale space

Gaussian kernels with different parameters are used to convolve a signal to obtain a multi-scale representation of the signal. At a smaller scale, the signal includes more detailed information (local features); on the contrary, it includes more structural features. Therefore, compared with a single-scale method, it is easier to obtain the essential characteristics of the signal by comprehensively using the multi-scale information of the signal. Existing research usually only extracts the features of the HRRP data at the original scale, and most of these features represent the local details of the HRRP, which tends to ignore the global information of the HRRP and reduces the generalization performance. The Gaussian kernel is the only linear kernel that realizes the multi-scale representation. By performing the Gaussian kernel convolution on the signal, the high-frequency components in the signal can be filtered, and part of the detailed information of the signal can be discarded. As the global characteristics of the signal will not change due to the Gaussian convolution, the present invention uses Gaussian kernels to perform a convolution operation on the HRRP data by the following formulas to obtain the multi-scale representation of the HRRP:

L(x,σ) = G(x,σ) ⊗ I(x)    (1)

G(x,σ) = a·exp(−x²/(2σ²))    (2)

In the formulas, G(x,σ) is a 1D Gaussian kernel function; a represents the amplitude of the Gaussian kernel; σ represents the scale parameter of the Gaussian kernel; I(x) represents the input signal; ⊗ represents the convolution operation; L(x,σ) is the signal after Gaussian blur. The present invention selects three Gaussian kernels of different scales, where the amplitude a = 1, the width is 3, and the scales σ are respectively σ₀, √2σ₀ and 2σ₀; σ₀ is equal to 1. The HRRPs of the different scales are normalized and paralleled to form three-channel data as an input to the model.

2. Separable convolution feature extraction block

The model in the present invention uses a new feature extraction block, which is composed of a depthwise separable convolution layer and a P layer. The depthwise separable convolution layer is mainly responsible for feature extraction, and the P layer is mainly responsible for reducing the redundancy of features. The depthwise separable convolution decomposes a complete convolution operation into two steps, namely DC and PC. The DC is responsible for extracting the features of each input channel, and the PC is responsible for fusing the features of each channel. The DC and the PC are combined to decouple the spatial information and depth information in the features.

FIG. 2 describes the specific operation process of the DC in a single depthwise convolution layer. The input is a three-channel 1D vector, and the size of the convolution kernel of the DC layer is 3×1. The DC has a different kernel from conventional convolution: the number of channels of the convolution kernel in the DC layer is always 1, and each convolution is only performed on a single input channel. Therefore, the number of channels of the output feature of the DC layer is the same as the number of channels of the input vector.
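The multi-channel construction of part 1 above (formulas (1) and (2)) can be sketched in a few lines of numpy. The synthetic HRRP and its length are illustrative; the kernel width 3, amplitude a = 1, and the scale choices follow the text.

```python
import numpy as np

def gaussian_kernel(sigma, width=3, a=1.0):
    """1D Gaussian kernel G(x, sigma) = a * exp(-x^2 / (2 sigma^2)), width 3."""
    x = np.arange(width) - width // 2      # sample positions x in {-1, 0, 1}
    return a * np.exp(-x**2 / (2.0 * sigma**2))

def multi_scale_hrrp(hrrp, sigmas=(1.0, np.sqrt(2.0), 2.0)):
    """Blur the HRRP at each scale (L = G * I), normalize, stack as channels."""
    channels = []
    for s in sigmas:
        blurred = np.convolve(hrrp, gaussian_kernel(s), mode="same")
        blurred = blurred / np.max(np.abs(blurred))   # per-scale normalization
        channels.append(blurred)
    return np.stack(channels)              # shape: (num_scales, len(hrrp))

hrrp = np.abs(np.random.default_rng(1).standard_normal(256))  # synthetic HRRP
x = multi_scale_hrrp(hrrp)
print(x.shape)   # (3, 256): three-channel input to the model
```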
The DC only performs an independent convolution operation on each input channel, but does not fuse the feature information of different channels at the same spatial position, so the PC is required to combine the features of each channel into new features. The PC is a special case of conventional convolution, and the calculation process of the PC is the same as that of conventional convolution. The size of the convolution kernel of the PC is fixed at 1×1. The PC of a multi-channel feature vector is equivalent to a weighted summation over the channels of the features to obtain a new feature vector.

The number of parameters of the depthwise separable convolution layer is the sum of the parameters of the DC and the PC. Assuming that the size of the convolution kernel of the DC is nk×1, and the numbers of input and output feature channels are ni and no, respectively, the number of parameters of the depthwise separable convolution layer is ni·nk + ni·no.

As shown in FIG. 3, the depthwise separable convolution feature extraction block includes a DC layer, a PC layer and a P layer. The size of the DC kernel is fixed at 3×1, and the number of convolution kernels is the same as the number of corresponding input vector channels. The stride of the P layer is 2. After the feature vector is down-sampled by the P layer, the number of channels remains unchanged, but the dimension becomes one-half of the original.

3. Feature pyramid fusion method

The conventional CNN includes convolution layers, P layers and FC layers. The features extracted from each convolution layer of the CNN are target features. The shallow convolution layers mostly extract low-level information such as target contours and edges, while the deep convolution layers mostly extract high-level semantic information.
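The depthwise separable convolution of part 2 above (DC followed by PC) can be sketched in numpy, together with the parameter counts ni·nk + ni·no versus a conventional convolution's nk·ni·no. The channel counts, kernel values and vector length here are illustrative, not the patented configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_i, n_o, n_k, length = 3, 8, 3, 32        # in/out channels, kernel size, length

x = rng.standard_normal((n_i, length))     # three-channel 1D input vector
dc_kernels = rng.standard_normal((n_i, n_k))   # one 3x1 kernel per input channel
pc_kernels = rng.standard_normal((n_o, n_i))   # 1x1 kernels fusing the channels

# Depthwise convolution: each kernel convolves only its own input channel,
# so the output keeps n_i channels.
dc_out = np.stack([np.convolve(x[c], dc_kernels[c], mode="same")
                   for c in range(n_i)])

# Pointwise convolution: a 1x1 convolution is a weighted sum over channels
# at each spatial position, producing n_o fused output channels.
pc_out = pc_kernels @ dc_out               # shape: (n_o, length)

separable_params = n_i * n_k + n_i * n_o   # DC parameters + PC parameters
conventional_params = n_k * n_i * n_o      # a full convolution layer
print(dc_out.shape, pc_out.shape, separable_params, conventional_params)
```

With these illustrative sizes the separable layer needs 33 parameters against 72 for the conventional layer, which is the saving the lightweight design relies on.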
The conventional CNN only uses the features extracted by the last convolution layer for target recognition, and increases the depth of the network to obtain higher-level semantic features, thereby improving the correct recognition rate. In this process, the features of the other layers are not fully utilized. In order to make full use of the features of each layer, a feature pyramid fusion method is proposed.

FIG. 4 is a structural diagram of the feature pyramid fusion method. In FIG. 4, levels 1 to 4 represent the output feature vectors of blocks 1 to 4, respectively. The feature vector of level i+1 is obtained by convolution and down-sampling of the feature vector of level i by block i, and its vector dimension is half that of level i. Therefore, the feature vector combination of levels 1 to 4 can be called a feature pyramid. The feature pyramid fusion method is specifically as follows:
(1) Down-sample the feature vectors of feature pyramid levels 1 to 3 to make the feature dimension of levels 1 to 3 the same as the feature dimension of level 4. If a P layer is directly used to down-sample the shallow features, it is easy to cause the loss of part of the effective information. Therefore, the present invention uses DC to down-sample the feature vectors of levels 1 to 3. As shown in FIG. 1, the sizes of the DC kernels corresponding to branches 1, 2 and 3 are 9×1, 5×1 and 3×1, and the strides are 8, 4 and 2, respectively.

(2) Parallel the multi-channel features of each level, where if the channel numbers of the features at levels 1 to 4 are c1 to c4, respectively, the channel number of the feature vector after paralleling is c1 + c2 + c3 + c4.
(3) Use PC to fuse each channel of the vector after paralleling, and input the fused vector into an FC layer to obtain the final recognition result.

4. Model structure comparison and computational complexity analysis

The numbers of parameters of the proposed model and the conventional CNN are much smaller than that of an auto-encoding model. The proposed model has a smaller number of parameters than the conventional CNN due to the use of depthwise separable convolution layers.

Assuming that there are N training samples, and the input and output feature dimensions of a single layer are D and T, respectively, the single-layer computational complexity of the auto-encoding model is O(NDT), where DT is the number of hidden-layer parameters. For the conventional CNN, assuming that the size of the convolution kernel is nk and the numbers of input and output channels are ni and no, respectively, the computational complexity of the convolution layer is O(ND·nk·ni·no), nk·ni·no being the number of parameters of the convolution layer. For the proposed model, assuming that the size of the convolution kernel of the DC layer is nk, and the numbers of input and output channels are ni and no, respectively, the computational complexity of the depthwise separable convolution layer is O(ND·(nk·ni + ni·no)), nk·ni + ni·no being the number of parameters of the depthwise separable convolution layer. Generally, nk·ni·no > nk·ni + ni·no > T, and the number of layers of a convolution-kernel-based neural network is greater than that of the auto-encoder model, so the computational complexity of the conventional CNN is greater than that of the auto-encoder model. In summary, the proposed model has a smaller number of parameters and lower computational complexity; it belongs to the lightweight CNNs and has better generalization performance.
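The fusion steps (1) to (3) above can be sketched end to end. Strided average pooling stands in for the learned strided DC branches, and the channel counts c1 to c4, the level dimensions, and the 50-channel PC output are illustrative assumptions, not values fixed by the specification.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical pyramid: level i+1 has half the dimension of level i.
channels = [8, 16, 32, 64]                 # c1..c4 (illustrative)
dims = [128, 64, 32, 16]                   # level 4 dimension is the target
levels = [rng.standard_normal((c, d)) for c, d in zip(channels, dims)]

def downsample(feat, stride):
    """Stand-in for a strided DC branch: strided average pooling."""
    c, d = feat.shape
    return feat[:, :d - d % stride].reshape(c, -1, stride).mean(axis=2)

# (1) Bring levels 1-3 to the dimension of level 4 with strides 8, 4 and 2.
aligned = [downsample(f, s) for f, s in zip(levels[:3], (8, 4, 2))]
aligned.append(levels[3])

# (2) Parallel (concatenate) along the channel axis: c1+c2+c3+c4 channels.
pyramid = np.concatenate(aligned, axis=0)  # shape: (120, 16)

# (3) Pointwise convolution fuses the channels; the fused vector would then
# be flattened and fed to the fully connected classifier.
pc = rng.standard_normal((50, pyramid.shape[0]))
fused = pc @ pyramid                       # shape: (50, 16)
print(pyramid.shape, fused.shape)
```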

Claims (5)

  1. What is claimed is: A radar target recognition method based on a feature pyramid lightweight convolutional neural network (CNN), comprising the following steps:
     step 1: constructing multi-channel high-resolution range profile (HRRP) data based on a multi-scale space;
     step 2: constructing a depthwise separable convolution feature extraction block, building a feature pyramid lightweight CNN, and initializing a parameter of a model;
     step 3: performing a forward propagation (FP), and calculating a loss function in an iterative process;
     step 4: performing a back-propagation (BP), and using a chain rule to update the parameter in the model; and
     step 5: repeating steps 3 and 4 until the loss function converges, and obtaining a model that can be used for radar target recognition.
  2. The radar target recognition method according to claim 1, wherein step 1 specifically comprises: using Gaussian kernels of different scales to perform a convolution operation on the HRRP data by the following formulas to obtain a multi-scale representation of the HRRP:
     L(x,σ) = G(x,σ) ⊗ I(x)    (1)
     G(x,σ) = a·exp(−x²/(2σ²))    (2)
     wherein G(x,σ) is a one-dimensional (1D) Gaussian kernel function; a represents an amplitude of the Gaussian kernel; σ represents a scale parameter of the Gaussian kernel; I(x) represents an input signal; ⊗ represents a convolution operation; L(x,σ) is a signal after Gaussian blur; and normalizing HRRPs of different scales, and then paralleling them to form multi-channel data as an input to the model.
  3. The radar target recognition method according to claim 1, wherein in step 2, the depthwise separable convolution feature extraction block comprises a depthwise convolution (DC) layer, a pointwise convolution (PC) layer and a pooling (P) layer; wherein a size of a convolution kernel in the DC layer is fixed at 3×1, and a number of convolution kernels is the same as a number of corresponding input vector channels; and wherein a stride of the P layer is 2.
  4. The radar target recognition method according to claim 1, wherein the feature pyramid lightweight CNN built in step 2 is composed of four depthwise separable convolution feature extraction blocks, wherein outputs of the first three blocks respectively pass through a branch and are then paralleled with an output of a fourth block to form a new feature vector; a PC layer is used to fuse channels of the feature vector, and then a non-linear classifier composed of a fully connected (FC) layer and an output layer is used to obtain a classification and recognition result.
  5. The radar target recognition method according to claim 4, wherein the branch is composed of a DC layer.
     (Drawings, FIG. 1, sheet 1/2: three-channel input built from the raw HRRP by Gaussian kernels; blocks 1 to 4; branches 1, 2 and 3 with DC strides 8, 4 and 2; output.)
AU2020104006A 2020-12-10 2020-12-10 Radar target recognition method based on feature pyramid lightweight convolutional neural network Ceased AU2020104006A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2020104006A AU2020104006A4 (en) 2020-12-10 2020-12-10 Radar target recognition method based on feature pyramid lightweight convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2020104006A AU2020104006A4 (en) 2020-12-10 2020-12-10 Radar target recognition method based on feature pyramid lightweight convolutional neural network

Publications (1)

Publication Number Publication Date
AU2020104006A4 true AU2020104006A4 (en) 2021-02-18

Family

ID=74591580

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2020104006A Ceased AU2020104006A4 (en) 2020-12-10 2020-12-10 Radar target recognition method based on feature pyramid lightweight convolutional neural network

Country Status (1)

Country Link
AU (1) AU2020104006A4 (en)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033629A (en) * 2021-03-09 2021-06-25 中南大学 Radar signal sorting method and device based on improved cuckoo algorithm
CN113221709A (en) * 2021-04-30 2021-08-06 芜湖美的厨卫电器制造有限公司 Method and device for recognizing user movement and water heater
CN113239898A (en) * 2021-06-17 2021-08-10 阿波罗智联(北京)科技有限公司 Method for processing image, road side equipment and cloud control platform
CN113239959A (en) * 2021-04-09 2021-08-10 西安电子科技大学 Radar HRRP target identification method based on decoupling representation variational self-coding machine
CN113327253A (en) * 2021-05-24 2021-08-31 北京市遥感信息研究所 Weak and small target detection method based on satellite-borne infrared remote sensing image
CN113361472A (en) * 2021-07-01 2021-09-07 西安电子科技大学 Radar HRRP target identification method based on ILFACs model
CN113435246A (en) * 2021-05-18 2021-09-24 西安电子科技大学 Radiation source individual intelligent identification method, system and terminal
CN113516115A (en) * 2021-05-19 2021-10-19 西安建筑科技大学 Dense scene text detection method, device and medium based on multi-dimensional fusion
CN113567984A (en) * 2021-07-30 2021-10-29 长沙理工大学 Method and system for detecting artificial small target in SAR image
CN113743269A (en) * 2021-08-26 2021-12-03 浙江工业大学 Method for identifying video human body posture in light weight mode
CN113962298A (en) * 2021-10-14 2022-01-21 电子科技大学 Low-rank subspace true and false target one-dimensional range profile feature extraction method
CN114119582A (en) * 2021-12-01 2022-03-01 安徽大学 Synthetic aperture radar image target detection method
CN114495060A (en) * 2022-01-25 2022-05-13 青岛海信网络科技股份有限公司 Road traffic marking identification method and device
CN114768279A (en) * 2022-04-29 2022-07-22 福建德尔科技股份有限公司 Rectification control system for preparing electronic-grade difluoromethane and control method thereof
CN115236606A (en) * 2022-09-23 2022-10-25 中国人民解放军战略支援部队航天工程大学 Radar signal feature extraction method and complex number field convolution network architecture
CN115908992A (en) * 2022-10-22 2023-04-04 北京百度网讯科技有限公司 Binocular stereo matching method, device, equipment and storage medium
CN115937672A (en) * 2022-11-22 2023-04-07 南京林业大学 Remote sensing rotating target detection method based on deep neural network
CN116091854A (en) * 2022-12-14 2023-05-09 中国人民解放军空军预警学院 Method and system for classifying space targets of HRRP sequence
CN116524348A (en) * 2023-03-14 2023-08-01 中国人民解放军陆军军事交通学院镇江校区 Aviation image detection method and system based on angle period representation
CN113640764B (en) * 2021-08-09 2023-08-11 中国人民解放军海军航空大学航空作战勤务学院 Radar one-dimensional range profile identification method and device based on multi-dimension one-dimensional convolution
CN116593980A (en) * 2023-04-20 2023-08-15 中国人民解放军93209部队 Radar target recognition model training method, radar target recognition method and device
CN116908808A (en) * 2023-09-13 2023-10-20 南京国睿防务系统有限公司 RTN-based high-resolution one-dimensional image target recognition method
CN116975728A (en) * 2023-08-07 2023-10-31 山东省地质矿产勘查开发局第八地质大队(山东省第八地质矿产勘查院) Safety management method and system for coal bed methane drilling engineering
CN117196418A (en) * 2023-11-08 2023-12-08 江西师范大学 Reading teaching quality assessment method and system based on artificial intelligence
CN117274717A (en) * 2023-10-24 2023-12-22 中国人民解放军空军预警学院 Ballistic target identification method based on global and local visual feature mapping network
CN117493953A (en) * 2023-10-31 2024-02-02 国网青海省电力公司海北供电公司 Lightning arrester state evaluation method based on defect data mining
CN117572376A (en) * 2024-01-16 2024-02-20 烟台大学 Low signal-to-noise ratio weak and small target radar echo signal recognition device and training recognition method
CN114119582B (en) * 2021-12-01 2024-04-26 安徽大学 Synthetic aperture radar image target detection method

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033629B (en) * 2021-03-09 2022-08-05 中南大学 Radar signal sorting method and device based on improved cuckoo algorithm
CN113033629A (en) * 2021-03-09 2021-06-25 中南大学 Radar signal sorting method and device based on improved cuckoo algorithm
CN113239959B (en) * 2021-04-09 2024-02-20 西安电子科技大学 Radar HRRP target identification method based on decoupling characterization variation self-encoder
CN113239959A (en) * 2021-04-09 2021-08-10 西安电子科技大学 Radar HRRP target identification method based on decoupling representation variational self-coding machine
CN113221709A (en) * 2021-04-30 2021-08-06 芜湖美的厨卫电器制造有限公司 Method and device for recognizing user movement and water heater
CN113221709B (en) * 2021-04-30 2022-11-25 芜湖美的厨卫电器制造有限公司 Method and device for identifying user motion and water heater
CN113435246B (en) * 2021-05-18 2024-04-05 西安电子科技大学 Intelligent radiation source individual identification method, system and terminal
CN113435246A (en) * 2021-05-18 2021-09-24 西安电子科技大学 Radiation source individual intelligent identification method, system and terminal
CN113516115A (en) * 2021-05-19 2021-10-19 西安建筑科技大学 Dense scene text detection method, device and medium based on multi-dimensional fusion
CN113516115B (en) * 2021-05-19 2022-11-22 西安建筑科技大学 Dense scene text detection method, device and medium based on multi-dimensional fusion
CN113327253A (en) * 2021-05-24 2021-08-31 北京市遥感信息研究所 Weak and small target detection method based on satellite-borne infrared remote sensing image
CN113239898A (en) * 2021-06-17 2021-08-10 阿波罗智联(北京)科技有限公司 Method for processing image, road side equipment and cloud control platform
CN113361472A (en) * 2021-07-01 2021-09-07 西安电子科技大学 Radar HRRP target identification method based on ILFACs model
CN113567984B (en) * 2021-07-30 2023-08-22 长沙理工大学 Method and system for detecting artificial small target in SAR image
CN113567984A (en) * 2021-07-30 2021-10-29 长沙理工大学 Method and system for detecting artificial small target in SAR image
CN113640764B (en) * 2021-08-09 2023-08-11 中国人民解放军海军航空大学航空作战勤务学院 Radar one-dimensional range profile identification method and device based on multi-dimension one-dimensional convolution
CN113743269A (en) * 2021-08-26 2021-12-03 浙江工业大学 Lightweight method for recognizing human body pose in video
CN113743269B (en) * 2021-08-26 2024-03-29 浙江工业大学 Lightweight method for recognizing human body pose in video
CN113962298A (en) * 2021-10-14 2022-01-21 电子科技大学 Low-rank subspace true and false target one-dimensional range profile feature extraction method
CN113962298B (en) * 2021-10-14 2023-04-28 电子科技大学 Low-rank discrimination subspace true and false target one-dimensional range profile feature extraction method
CN114119582A (en) * 2021-12-01 2022-03-01 安徽大学 Synthetic aperture radar image target detection method
CN114119582B (en) * 2021-12-01 2024-04-26 安徽大学 Synthetic aperture radar image target detection method
CN114495060B (en) * 2022-01-25 2024-03-26 青岛海信网络科技股份有限公司 Road traffic marking recognition method and device
CN114495060A (en) * 2022-01-25 2022-05-13 青岛海信网络科技股份有限公司 Road traffic marking identification method and device
CN114768279A (en) * 2022-04-29 2022-07-22 福建德尔科技股份有限公司 Rectification control system for preparing electronic-grade difluoromethane and control method thereof
CN114768279B (en) * 2022-04-29 2022-11-11 福建德尔科技股份有限公司 Rectification control system for preparing electronic grade difluoromethane and control method thereof
CN115236606A (en) * 2022-09-23 2022-10-25 中国人民解放军战略支援部队航天工程大学 Radar signal feature extraction method and complex number field convolution network architecture
CN115908992A (en) * 2022-10-22 2023-04-04 北京百度网讯科技有限公司 Binocular stereo matching method, device, equipment and storage medium
CN115908992B (en) * 2022-10-22 2023-12-05 北京百度网讯科技有限公司 Binocular stereo matching method, device, equipment and storage medium
CN115937672A (en) * 2022-11-22 2023-04-07 南京林业大学 Remote sensing rotating target detection method based on deep neural network
CN116091854A (en) * 2022-12-14 2023-05-09 中国人民解放军空军预警学院 Method and system for classifying space targets of HRRP sequence
CN116091854B (en) * 2022-12-14 2023-09-22 中国人民解放军空军预警学院 Method and system for classifying space targets of HRRP sequence
CN116524348B (en) * 2023-03-14 2023-11-07 中国人民解放军陆军军事交通学院镇江校区 Aviation image detection method and system based on angle period representation
CN116524348A (en) * 2023-03-14 2023-08-01 中国人民解放军陆军军事交通学院镇江校区 Aviation image detection method and system based on angle period representation
CN116593980B (en) * 2023-04-20 2023-12-12 中国人民解放军93209部队 Radar target recognition model training method, radar target recognition method and device
CN116593980A (en) * 2023-04-20 2023-08-15 中国人民解放军93209部队 Radar target recognition model training method, radar target recognition method and device
CN116975728B (en) * 2023-08-07 2024-01-26 山东省地质矿产勘查开发局第八地质大队(山东省第八地质矿产勘查院) Safety management method and system for coal bed methane drilling engineering
CN116975728A (en) * 2023-08-07 2023-10-31 山东省地质矿产勘查开发局第八地质大队(山东省第八地质矿产勘查院) Safety management method and system for coal bed methane drilling engineering
CN116908808A (en) * 2023-09-13 2023-10-20 南京国睿防务系统有限公司 RTN-based high-resolution one-dimensional image target recognition method
CN116908808B (en) * 2023-09-13 2023-12-01 南京国睿防务系统有限公司 RTN-based high-resolution one-dimensional image target recognition method
CN117274717A (en) * 2023-10-24 2023-12-22 中国人民解放军空军预警学院 Ballistic target identification method based on global and local visual feature mapping network
CN117493953A (en) * 2023-10-31 2024-02-02 国网青海省电力公司海北供电公司 Lightning arrester state evaluation method based on defect data mining
CN117196418B (en) * 2023-11-08 2024-02-02 江西师范大学 Reading teaching quality assessment method and system based on artificial intelligence
CN117196418A (en) * 2023-11-08 2023-12-08 江西师范大学 Reading teaching quality assessment method and system based on artificial intelligence
CN117572376A (en) * 2024-01-16 2024-02-20 烟台大学 Low signal-to-noise ratio weak and small target radar echo signal recognition device and training recognition method
CN117572376B (en) * 2024-01-16 2024-04-19 烟台大学 Low signal-to-noise ratio weak and small target radar echo signal recognition device and training recognition method

Similar Documents

Publication Publication Date Title
AU2020104006A4 (en) Radar target recognition method based on feature pyramid lightweight convolutional neural network
CN109828251B (en) Radar target identification method based on characteristic pyramid light-weight convolution neural network
CN109949255B (en) Image reconstruction method and device
CN109345508B (en) Bone age evaluation method based on two-stage neural network
CN109993100B (en) Method for realizing facial expression recognition based on deep feature clustering
CN112861722B (en) Remote sensing land utilization semantic segmentation method based on semi-supervised depth map convolution
CN109389171B (en) Medical image classification method based on multi-granularity convolution noise reduction automatic encoder technology
CN111984817B (en) Fine-grained image retrieval method based on self-attention mechanism weighting
CN113240683B (en) Attention mechanism-based lightweight semantic segmentation model construction method
JP2011248879A (en) Method for classifying object in test image
CN110880010A (en) Visual SLAM closed loop detection algorithm based on convolutional neural network
CN110705600A (en) Cross-correlation entropy based multi-depth learning model fusion method, terminal device and readable storage medium
CN108596044B (en) Pedestrian detection method based on deep convolutional neural network
CN113344077A (en) Anti-noise solanaceae disease identification method based on convolution capsule network structure
CN114283120B (en) Domain-adaptive-based end-to-end multisource heterogeneous remote sensing image change detection method
CN115965864A (en) Lightweight attention mechanism network for crop disease identification
Zhuang et al. A handwritten Chinese character recognition based on convolutional neural network and median filtering
CN113807356B (en) End-to-end low-visibility image semantic segmentation method
CN114821149A (en) Hyperspectral remote sensing image identification method based on deep forest transfer learning
CN112036419B (en) SAR image component interpretation method based on VGG-Attention model
Zhang An image recognition algorithm based on self-encoding and convolutional neural network fusion
Si et al. Crop Disease Recognition Based on Improved Model-Agnostic Meta-Learning
Jiang A manifold constrained multi-head self-attention variational autoencoder method for hyperspectral anomaly detection
Shi et al. Self-Guided Autoencoders for Unsupervised Change Detection in Heterogeneous Remote Sensing Images
Antar et al. Robust Object Recognition with Deep Learning on a Variety of Datasets

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry