CN113030902A - Twin complex network-based few-sample radar vehicle target identification method

Twin complex network-based few-sample radar vehicle target identification method

Info

Publication number
CN113030902A
CN113030902A (application CN202110498690.8A)
Authority
CN
China
Prior art keywords
complex
sample
network
convolution
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110498690.8A
Other languages
Chinese (zh)
Other versions
CN113030902B (en)
Inventor
廖阔
何学思
彭曙鹏
田祯杰
周代英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202110498690.8A priority Critical patent/CN113030902B/en
Publication of CN113030902A publication Critical patent/CN113030902A/en
Application granted granted Critical
Publication of CN113030902B publication Critical patent/CN113030902B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/411Identification of targets based on measurements of radar reflectivity
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of artificial intelligence, and particularly relates to a few-sample radar vehicle target identification method based on a twin complex network. For two-dimensional radar vehicle target images under few-sample conditions, the method feeds the real and imaginary parts of the target image jointly into a complex neural network that outputs a real-valued image, and a skip connection preserves the amplitude information of the original image. Compared with directly using the amplitude image, this makes better use of the complex-valued information in the radar data, improves the precision of feature extraction, shortens the distance between the original image and its corresponding anchor-sample image, and to a certain extent avoids the over-fitting caused by having few samples. In addition, unlike a native twin network, the anchor samples corresponding to the two samples of each input pair are fed in at the same time, steering network convergence toward the given anchor samples, which accelerates convergence and improves identification accuracy.

Description

Twin complex network-based few-sample radar vehicle target identification method
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to a few-sample radar vehicle target identification method based on a twin complex network.
Background
Discriminating the target vehicle category from radar echo data is one of the effective ways to identify vehicles at long range. In recent years, target identification methods based on deep learning have performed well in the image field. For radar echo images of a target, however, the target data are complex-valued signals, and the conventional practice is to take the amplitude of the target radar data as the input image. Such processing discards the phase information of the radar data and ignores the fine radar scattering characteristics of the target, thereby degrading identification accuracy. Meanwhile, existing radar target identification suffers from few samples and scarce data sets. Studying complex neural networks together with few-sample learning methods is therefore expected to alleviate the problem of insufficient samples and to improve the generalization performance and recognition accuracy of the model.
Disclosure of Invention
The invention aims to provide a novel few-sample vehicle target identification method based on a twin complex neural network, for range-velocity images of vehicle targets obtained with a frequency-modulated continuous-wave (FMCW) radar, addressing the situation of scarce data with complex-valued measurements.
The technical scheme of the invention is as follows: a radar range-velocity image target identification method based on a twin complex neural network, in which complex-valued information is processed by a complex neural network and combined with the original amplitude information to generate a real-valued feature image, and the twin network structure is improved by introducing positive and negative samples combined with double anchor-sample input, which accelerates model convergence. The method mainly comprises the following steps:
S1, acquiring sample data:
acquiring two-dimensional range-velocity image data of vehicle targets in motion by using a frequency-modulated continuous-wave radar, and randomly dividing the acquired data into a training data set and a test data set, wherein the training data set is recorded as:
X^0 = {x^0_ij | i = 1,2,…,K; j = 1,2,…,N_i} ∈ C^(h×w×N)

where K is the total number of target classes, N_i is the number of training samples of the i-th class, and

N = Σ_{i=1}^{K} N_i

is the total number of samples in the training set; x^0_ij denotes the j-th two-dimensional range-velocity image sample of the i-th class, and h and w denote the height and width of the image, respectively. The sample labels of this training set are represented as:

X^1 = {x^1_ij | i = 1,2,…,K; j = 1,2,…,N_i}

where x^1_ij is the category label of sample x^0_ij.
S2, preprocessing the obtained sample:
First, in the sample set X^0 obtained in step S1, one sample is selected from each class sample set X^0_i as the anchor sample of that class, e.g. x^0_i1. Secondly, each remaining sample is paired with the other remaining samples, both within its class and across different classes, to form positive and negative sample pairs:
[Equation: definition of the positive/negative sample-pair set Pair; the original equation image is not reproduced]
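A minimal sketch of this preprocessing step is given below. Since the exact pairing rule encoded in the equation above is not reproduced in the text, the function build_pairs, its samples input (a dict mapping class index to a complex array of shape (n_i, h, w)), and the convention of returning each pair together with the two class anchors are illustrative assumptions.

```python
import itertools

def build_pairs(samples):
    """Select the first sample of each class as anchor, then form pairs."""
    anchors = {c: xs[0] for c, xs in samples.items()}   # first sample = anchor
    rest = {c: xs[1:] for c, xs in samples.items()}
    pos, neg = [], []
    for c, xs in rest.items():
        # positive pairs: two remaining samples of the same class, same anchor twice
        pos += [(a, b, anchors[c], anchors[c])
                for a, b in itertools.combinations(xs, 2)]
    for c1, c2 in itertools.combinations(rest, 2):
        # negative pairs: samples of different classes, with both class anchors
        neg += [(a, b, anchors[c1], anchors[c2])
                for a in rest[c1] for b in rest[c2]]
    return pos, neg, anchors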
S3, constructing a complex convolution network:
The complex neural network is built as a reference residual structure whose input is the real and imaginary parts of a sample, {R(x^0_ij), I(x^0_ij)}. The first layer is a batch normalization layer: it performs complex batch normalization on the input batch and outputs the normalized real and imaginary parts {R(x^1_ij), I(x^1_ij)}. The implementation of the complex batch normalization layer must account for the joint distribution of the real and imaginary parts, so a standard (0-1) distribution is realized using the mean and variance of the two-dimensional random-variable distribution:
V = Cov([R(x), I(x)]^T)   (the 2×2 covariance matrix of the real and imaginary parts)

x̃ = V^(−1/2)·(x − E[x])

BN(x) = γ·x̃ + β
where x is the complex input, V is the covariance matrix, E[x] is the mean, and γ and β are the two trainable parameters of the layer. The second layer is a complex convolution layer composed of a convolution layer, an activation-function layer and a batch normalization layer; its final output is a multi-channel complex feature image {R(x^2_ij), I(x^2_ij)} of the same size as the input. The convolution layer is built according to the complex convolution formula:
W*h=(A*x-B*y)+i(B*x+A*y)
where W is the complex sample, A is the real part of W, and B is the imaginary part of W; h is the convolution kernel, x is the real part of h, and y is the imaginary part of h. The complex convolution is thus realized by expanding the formula, i.e., by convolving the real and imaginary parts separately. The activation-function layer uses the complex ReLU function:
CReLU(z) = ReLU(R(z)) + i·ReLU(I(z))
The third layer is also a complex convolution layer, essentially similar in structure to the previous layer, except that it merges the input multi-channel images and outputs a single-channel complex feature image {R(x^3_ij), I(x^3_ij)}. The last layer is a skip-connection layer: the amplitude of the complex feature image input from the previous layer is computed and then added to the amplitude of the original input image after a single pass through the batch normalization layer, giving the real-valued output image x^out_ij of the network:
x^out_ij = |x^3_ij| + |x^1_ij|,   where |z| = √(R(z)² + I(z)²)
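For concreteness, a PyTorch sketch of this complex network follows. Kernel sizes, the 8-channel width, and the per-part batch normalization are assumptions: the patent's complex batch normalization whitens the real and imaginary parts jointly with the 2×2 covariance matrix above, which is approximated here by two independent BatchNorm2d layers for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ComplexConv2d(nn.Module):
    """W*h = (A*x - B*y) + i(B*x + A*y): convolve real/imag parts separately."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.conv_i = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, a, b):                       # a = real part, b = imaginary part
        return (self.conv_r(a) - self.conv_i(b),   # real output
                self.conv_i(a) + self.conv_r(b))   # imaginary output

class ComplexNet(nn.Module):
    def __init__(self, mid_ch=8):
        super().__init__()
        self.bn_r0, self.bn_i0 = nn.BatchNorm2d(1), nn.BatchNorm2d(1)
        self.conv1 = ComplexConv2d(1, mid_ch)
        self.bn_r1, self.bn_i1 = nn.BatchNorm2d(mid_ch), nn.BatchNorm2d(mid_ch)
        self.conv2 = ComplexConv2d(mid_ch, 1)      # merge channels back to 1

    def forward(self, a, b):
        a1, b1 = self.bn_r0(a), self.bn_i0(b)      # first BN layer (simplified)
        a2, b2 = self.conv1(a1, b1)                # second layer: complex conv
        a2, b2 = F.relu(a2), F.relu(b2)            # CReLU: ReLU on each part
        a2, b2 = self.bn_r1(a2), self.bn_i1(b2)
        a3, b3 = self.conv2(a2, b2)                # third layer: single-channel output
        # skip connection: amplitude of layer-3 output + amplitude of normalized input
        return torch.sqrt(a3**2 + b3**2) + torch.sqrt(a1**2 + b1**2)
```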
S4, constructing a real convolution network:
A real convolution network is constructed whose input is the output of the complex convolution network, x^out_ij, or the real and imaginary parts of the anchor sample corresponding to the positive/negative samples, {R(x^0_i1), I(x^0_i1)}. The first five layers are real convolution layers, each composed of a convolution layer, an activation-function layer and a batch normalization layer; the specific structure is shown in the following table:
[Table: structure of the five real convolution layers; the original image is not reproduced]
The last three layers are fully connected layers, which reduce the dimensionality of the generated multi-channel feature image; the final output is a 10-dimensional normalized feature vector out_ij or out_i1.
S5, constructing a twin complex network:
Two complex convolution networks and four real convolution networks are constructed and share parameters with one another to form the twin complex network. The inputs of the two complex convolution networks are the positive/negative sample pairs Pair; the inputs of the four real convolution networks are the outputs of the sample pairs from the complex convolution networks together with the anchor samples corresponding to those pairs. The final output is four 10-dimensional normalized feature vectors, for example:
[Equation: example of the four output feature vectors; the original image is not reproduced]
The intra-class similarity s_p and the inter-class similarity s_n can then be obtained by vector multiplication:
s_p = out_ij · out_i1   (inner product with the same-class anchor feature)

s_n = out_ij · out_k1, k ≠ i   (inner product with a different-class anchor feature)
Then, according to the sample label information, the circle loss is used to obtain the loss value of the network:
L_circle = log[ 1 + Σ_{j=1}^{L} exp(γ·α_n^j·(s_n^j − Δ_n)) · Σ_{i=1}^{K} exp(−γ·α_p^i·(s_p^i − Δ_p)) ]

where s_n^j is the j-th input inter-class similarity and L is the number of input inter-class similarities; s_p^i is the i-th input intra-class similarity and K is the number of input intra-class similarities; α_n^j and α_p^i are the non-negative weights of s_n^j and s_p^i, respectively; γ is a scale factor, and Δ_n and Δ_p are the inter-class and intra-class margins.
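A sketch of this loss computation is shown below. Circle loss is a published objective (Sun et al., CVPR 2020) and this follows its standard form; the scale factor gamma and margin m are illustrative values not given in the patent. Here s_p and s_n would come from inner products of the L2-normalized feature vectors with the same-class and different-class anchor features, respectively.

```python
import torch
import torch.nn.functional as F

def circle_loss(sp, sn, gamma=64.0, m=0.25):
    """sp: 1-D tensor of intra-class similarities; sn: inter-class similarities."""
    ap = torch.clamp_min(1 + m - sp.detach(), 0)   # weight alpha_p
    an = torch.clamp_min(sn.detach() + m, 0)       # weight alpha_n
    delta_p, delta_n = 1 - m, m                    # intra-/inter-class margins
    logit_p = -gamma * ap * (sp - delta_p)
    logit_n = gamma * an * (sn - delta_n)
    # log(1 + sum(exp(logit_n)) * sum(exp(logit_p))) = softplus(lse_n + lse_p)
    return F.softplus(torch.logsumexp(logit_n, 0) + torch.logsumexp(logit_p, 0))
```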
S6, according to the training sample set, the parameters of the two networks are respectively updated by gradient descent, iterating until the network loss converges, to obtain the final deep network model, namely the trained twin complex network model.
S7, target recognition is performed on input samples using the deep network model obtained in step S6.
The general technical scheme of the invention is as follows: for the two-dimensional range-velocity image identification problem of radar vehicles, positive and negative sample pairs of images are first formed; the real-part and imaginary-part information of each image is then input simultaneously into a complex neural network constructed from two convolution stages and a skip-connection layer, whose output is a real-valued feature image of the same size as the input. The obtained sample feature image and its corresponding anchor-sample image are then input together into a real convolutional neural network consisting of a five-layer convolution structure and three fully connected layers; each input yields one feature vector as output. The cosine similarity between the sample feature vector and all anchor-sample feature vectors determines which class of target the sample belongs to. During training, the error of the network model is computed from this information, so the BP algorithm can update the network parameters and optimize the model. During testing, the magnitude of the cosine similarity is used directly for target identification.
The invention has the beneficial effects that: for two-dimensional radar vehicle target images under few-sample conditions, the real and imaginary parts of the target image are jointly input into the complex neural network, which outputs a real-valued image, and the skip connection preserves the amplitude information of the original image. Compared with directly using the amplitude image, this makes better use of the complex-valued information in the radar data, improves the precision of feature extraction, shortens the distance between the original image and its corresponding anchor-sample image, and to a certain extent avoids the over-fitting caused by having few samples. In addition, unlike a native twin network, the anchor samples corresponding to the two samples of each input pair are fed in at the same time, steering network convergence toward the given anchor samples, which accelerates convergence and improves identification accuracy.
Drawings
FIG. 1 is a schematic diagram of a model structure of an overall twin complex network;
FIG. 2 is a flow chart of radar vehicle target identification based on the network.
Detailed Description
The technical scheme of the invention is described in detail below with reference to the accompanying drawings and embodiments:
Two-dimensional range-velocity image data generated by vehicle targets on a real road are collected by radar for training and testing. The data set contains four classes of vehicle targets, each class with 60 samples, and each sample is of size 40×10. For training, 10 samples are randomly selected from each class to form the training set, and the remaining 50 samples per class serve as the test set. The training data set is recorded as:
X^0 = {x^0_ij | i = 1,2,3,4; j = 1,2,…,10} ∈ C^(40×10)
First, the first sample of each class in the training set is selected as the anchor sample of that class:
Anchor = {x^0_i1 | i = 1,2,3,4}
The remaining samples are then combined with the other samples of the same and of different classes to form positive and negative sample pairs:
[Equation: the positive/negative sample-pair set Pair; the original image is not reproduced]
Secondly, the complex neural network is constructed, whose input is the real and imaginary parts of the samples {R(x^0_ij), I(x^0_ij)}. The first layer is a batch normalization layer: it performs complex batch normalization on the input batch and outputs the normalized real and imaginary parts {R(x^1_ij), I(x^1_ij)}. The second layer is a complex convolution layer composed of a convolution layer, an activation-function layer and a batch normalization layer; its final output is a complex feature image {R(x^2_ij), I(x^2_ij)} of the same size as the input. The third layer is also a complex convolution layer, essentially similar to the previous layer, except that the input channels become 8 and the output channels become 1, with output {R(x^3_ij), I(x^3_ij)}. The parameters of the convolution layers are set as follows:
[Tables: hyper-parameters of the two complex convolution layers; the original images are not reproduced]
The activation-function layer uses the complex ReLU function:
CReLU(z) = ReLU(R(z)) + i·ReLU(I(z))
The last layer is a skip-connection layer: the amplitude of the complex feature image output by the previous layer is computed and then added to the amplitude of the original input image after a single pass through the batch normalization layer, giving the real-valued output image x^out_ij of the network:
x^out_ij = |x^3_ij| + |x^1_ij|,   where |z| = √(R(z)² + I(z)²)
Then the real convolution network is constructed, whose input is the output x^out_ij of the complex convolution network above, or the anchor samples corresponding to the positive and negative samples. The first five layers are real convolution layers composed of convolution layers, activation-function layers and batch normalization layers, and their final output is a real feature image of the same size as the input. The specific hyper-parameters of each convolution layer are as follows:
[Table: hyper-parameters of the five real convolution layers; the original image is not reproduced]
The ReLU function is used in all activation-function layers, and the batch normalization layers are ordinary (real) batch normalization. The remaining three layers are fully connected, reducing the features from 8×40×10 to 500 and finally to 10 dimensions, so the final output of the network is a 10-dimensional feature vector out_ij or out_i1. A sketch of this network follows.
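In the sketch below, the convolution hyper-parameters were specified in the tables above (not reproduced), so the channel count and kernel size are assumptions; the text gives three fully connected layers mapping 8×40×10 features down to a 10-dimensional vector, and the intermediate 500→500 layer here is likewise an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RealNet(nn.Module):
    def __init__(self, ch=8, h=40, w=10):
        super().__init__()
        # five conv blocks: conv -> ReLU -> BatchNorm, size-preserving
        self.blocks = nn.ModuleList()
        in_ch = 1
        for _ in range(5):
            self.blocks.append(nn.Sequential(
                nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(), nn.BatchNorm2d(ch)))
            in_ch = ch
        # three fully connected layers: 8*40*10 -> 500 -> 500 -> 10
        self.fc = nn.Sequential(
            nn.Linear(ch * h * w, 500), nn.ReLU(),
            nn.Linear(500, 500), nn.ReLU(),
            nn.Linear(500, 10))

    def forward(self, x):                    # x: (N, 1, 40, 10) real-valued image
        for blk in self.blocks:
            x = blk(x)
        v = self.fc(x.flatten(1))
        return F.normalize(v, dim=1)         # 10-dimensional normalized feature vector
```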
Next, the overall twin network is constructed: the positive/negative sample-pair information and the corresponding anchor-sample information are respectively input into two parameter-sharing complex convolution networks and four parameter-sharing real convolution networks, yielding four groups of 10-dimensional feature vectors. The obtained feature vectors are combined with the label information, for example:
[Equation: example of the four output feature vectors with labels; the original image is not reproduced]
Their intra-class similarity s_p and inter-class similarity s_n are computed, and the circle loss function is calculated using the following equation:
L_circle = log[ 1 + Σ_{j=1}^{L} exp(γ·α_n^j·(s_n^j − Δ_n)) · Σ_{i=1}^{K} exp(−γ·α_p^i·(s_p^i − Δ_p)) ]
Using this L_circle, gradient-descent updates are applied to the two networks with the Adam algorithm (the complex-network learning rate is set to 0.001 and the real-network learning rate to 0.005); when the loss value converges, the final network model is obtained. At test time, the inner products between the feature vector of a test sample and the feature vectors of the four anchor samples are computed, and the test sample is judged to belong to the same class as the anchor with the largest inner product. Finally, the average correct recognition rate over the four target classes reaches 92%.
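The training and test loop might look as follows, reusing the earlier sketches (ComplexNet, RealNet, circle_loss, build_pairs). The two learning rates (0.001 and 0.005) are taken from the text; the epoch count, the per-pair update, and the choice to pass anchors through the same complex-plus-real pipeline are assumptions, as the text leaves the anchor path ambiguous.

```python
import torch

complex_net, real_net = ComplexNet(), RealNet()
opt = torch.optim.Adam([
    {"params": complex_net.parameters(), "lr": 0.001},  # complex-network lr
    {"params": real_net.parameters(), "lr": 0.005},     # real-network lr
])

def embed(sample):
    """sample: (real, imag) tensors of shape (1, 1, 40, 10)."""
    return real_net(complex_net(*sample))

# pos_pairs, neg_pairs, anchors = build_pairs(train_samples)  # tensors assumed
for epoch in range(100):                       # epoch count: assumption
    for a, b, anc_a, anc_b in neg_pairs:       # negative pairs shown; positive
        fa, fb = embed(a), embed(b)            # pairs are handled analogously
        ga, gb = embed(anc_a), embed(anc_b)    # anchor features
        sp = torch.cat([(fa * ga).sum(1), (fb * gb).sum(1)])  # intra-class
        sn = torch.cat([(fa * gb).sum(1), (fb * ga).sum(1)])  # inter-class
        loss = circle_loss(sp, sn)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Test: assign a sample to the anchor class with the largest inner product.
def classify(sample, anchor_feats):            # anchor_feats: (4, 10) tensor
    return int((embed(sample) @ anchor_feats.T).argmax())
```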

Claims (1)

1. A few-sample radar vehicle target identification method based on a twin complex network is characterized by comprising the following steps:
S1, acquiring sample data:
acquiring two-dimensional range-velocity image data of vehicle targets in motion by using a frequency-modulated continuous-wave radar, and randomly dividing the acquired data into a training data set and a test data set, wherein the training data set is recorded as:
X^0 = {x^0_ij | i = 1,2,…,K; j = 1,2,…,N_i} ∈ C^(h×w×N)
where K represents the total number of target classes, N_i is the number of training samples of the i-th class, and N = Σ_{i=1}^{K} N_i is the total number of samples in the training sample set; x^0_ij denotes the j-th two-dimensional range-velocity image sample of the i-th class, and h and w denote the height and width of the image, respectively; the sample labels of the training data set are represented as:

X^1 = {x^1_ij | i = 1,2,…,K; j = 1,2,…,N_i}

where x^1_ij represents the category label of sample x^0_ij;
S2, preprocessing the obtained training data set:
from the training data set X^0, one sample is selected from each class sample set X^0_i as the anchor sample of that class; each remaining sample is then paired with the other remaining samples, within its class and across different classes, to form positive and negative sample pairs:
[Equation: definition of the positive/negative sample-pair set Pair; the original image is not reproduced]
S3, constructing a complex convolution network:
the input of the complex convolution network is the real and imaginary parts of a sample, {R(x^0_ij), I(x^0_ij)}; the first layer of the network is a batch normalization layer: it performs complex batch normalization on the input batch and outputs the normalized real and imaginary parts {R(x^1_ij), I(x^1_ij)}; the complex batch normalization layer realizes a (0-1) distribution using the mean and covariance of the two-dimensional random-variable distribution:
V = Cov([R(x), I(x)]^T)   (the 2×2 covariance matrix of the real and imaginary parts)

x̃ = V^(−1/2)·(x − E[x])

BN(x) = γ·x̃ + β
where x is the complex input, V is the covariance matrix, E[x] is the mean, and γ and β are the two trainable parameters of the layer;
the second layer is a complex convolution layer composed of a convolution layer, an activation-function layer and a batch normalization layer; the output of the complex convolution layer is a multi-channel complex feature image {R(x^2_ij), I(x^2_ij)} of the same size as the input; the convolution layer is built according to the complex convolution formula:
W*h=(A*x-B*y)+i(B*x+A*y)
where W is the complex sample, A is the real part of W, and B is the imaginary part of W; h is the convolution kernel, x is the real part of h, and y is the imaginary part of h; the complex convolution is realized by expanding the formula, i.e., by convolving the real and imaginary parts separately; the activation-function layer employs the complex ReLU function:
CReLU(z) = ReLU(R(z)) + i·ReLU(I(z))
the third layer is also a complex convolution layer, similar in structure to the second layer, except that it merges the input multi-channel images and outputs a single-channel complex feature image {R(x^3_ij), I(x^3_ij)};
the fourth layer is a skip-connection layer: the amplitude of the complex feature image input from the third layer is computed and then added to the amplitude of the original input image after a single pass through the batch normalization layer, giving the real-valued output image x^out_ij of the network:
x^out_ij = |x^3_ij| + |x^1_ij|,   where |z| = √(R(z)² + I(z)²)
S4, constructing a real convolution network:
the input of the real convolution network is the output of the complex convolution network, x^out_ij, or the real and imaginary parts of the anchor sample corresponding to the positive/negative samples, {R(x^0_i1), I(x^0_i1)}; the first five layers of the real convolution network are real convolution layers, each composed of a convolution layer, an activation-function layer and a batch normalization layer; the last three layers of the real convolution network are fully connected layers that reduce the dimensionality of the generated multi-channel feature image; the final output is a 10-dimensional normalized feature vector out_ij or out_i1;
S5, constructing a twin complex network:
two complex convolution networks and four real convolution networks are constructed through steps S3 and S4 and share parameters with one another to form the twin complex network; the inputs of the two complex convolution networks are the positive/negative sample pairs Pair; the inputs of the four real convolution networks are the outputs of the sample pairs from the complex convolution networks and the anchor samples corresponding to those pairs; the final output is four 10-dimensional normalized feature vectors; the intra-class similarity s_p and the inter-class similarity s_n between the sample pairs and the anchor samples are obtained by vector multiplication; then, according to the sample label information, the circle loss is used to obtain the loss value of the network:
L_circle = log[ 1 + Σ_{j=1}^{L} exp(γ·α_n^j·(s_n^j − Δ_n)) · Σ_{i=1}^{K} exp(−γ·α_p^i·(s_p^i − Δ_p)) ]

where s_n^j is the j-th input inter-class similarity and L is the number of input inter-class similarities; s_p^i is the i-th input intra-class similarity and K is the number of input intra-class similarities; α_n^j and α_p^i are the non-negative weights of s_n^j and s_p^i, respectively; γ is a scale factor, and Δ_n and Δ_p are the inter-class and intra-class margins;
S6, according to the training sample set, the parameters of the two networks are respectively updated by gradient descent, iterating until the network loss converges, to obtain the trained twin complex network model;
S7, target recognition is performed on collected vehicle data samples using the twin complex network model trained in step S6.
CN202110498690.8A 2021-05-08 2021-05-08 Twin complex network-based few-sample radar vehicle target identification method Active CN113030902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110498690.8A CN113030902B (en) 2021-05-08 2021-05-08 Twin complex network-based few-sample radar vehicle target identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110498690.8A CN113030902B (en) 2021-05-08 2021-05-08 Twin complex network-based few-sample radar vehicle target identification method

Publications (2)

Publication Number Publication Date
CN113030902A true CN113030902A (en) 2021-06-25
CN113030902B CN113030902B (en) 2022-05-17

Family

ID=76455261

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110498690.8A Active CN113030902B (en) 2021-05-08 2021-05-08 Twin complex network-based few-sample radar vehicle target identification method

Country Status (1)

Country Link
CN (1) CN113030902B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160019458A1 (en) * 2014-07-16 2016-01-21 Deep Learning Analytics, LLC Systems and methods for recognizing objects in radar imagery
CN105116408A (en) * 2015-06-30 2015-12-02 电子科技大学 Ship ISAR image structure feature extraction method
CN106934419A (en) * 2017-03-09 2017-07-07 西安电子科技大学 Classification of Polarimetric SAR Image method based on plural profile ripple convolutional neural networks
CN108388927A (en) * 2018-03-26 2018-08-10 西安电子科技大学 Small sample polarization SAR terrain classification method based on the twin network of depth convolution
US20190294966A1 (en) * 2018-03-26 2019-09-26 Cohda Wireless Pty Ltd. Systems and methods for automatically training neural networks
CN109508655A (en) * 2018-10-28 2019-03-22 北京化工大学 The SAR target identification method of incomplete training set based on twin network
CN109886406A (en) * 2019-02-25 2019-06-14 东南大学 A kind of complex convolution neural network compression method based on depth-compression
CN110780271A (en) * 2019-10-18 2020-02-11 西安电子科技大学 Spatial target multi-mode radar classification method based on convolutional neural network
CN111126570A (en) * 2019-12-24 2020-05-08 江西理工大学 SAR target classification method for pre-training complex number full convolution neural network
CN111638488A (en) * 2020-04-10 2020-09-08 西安电子科技大学 Radar interference signal identification method based on LSTM network
CN111680596A (en) * 2020-05-29 2020-09-18 北京百度网讯科技有限公司 Positioning truth value verification method, device, equipment and medium based on deep learning

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
JIA SONG et al.: "Radar HRRP recognition based on CNN", The Journal of Engineering *
JIANSHENG FU et al.: "Modeling Recognizing Behavior of Radar High Resolution Range Profile Using Multi-agent System", WSEAS Transactions on Information Science and Applications *
JINWEI WAN et al.: "Radar HRRP Recognition using Attentional CNN with Multi-resolution Spectrograms", 2019 International Radar Conference (RADAR) *
LIAO KUO: "Research on radar automatic target recognition based on high-resolution range profiles", China Doctoral Dissertations Full-text Database, Information Science and Technology *
PAN ZONGXU et al.: "Progress of radar image target recognition based on deep learning", Scientia Sinica Informationis *
WANG JIAMING: "SAR target recognition and PolSAR classification based on deep learning", China Master's Theses Full-text Database (Information Science and Technology) *

Also Published As

Publication number Publication date
CN113030902B (en) 2022-05-17

Similar Documents

Publication Publication Date Title
Li et al. Lightweight attention convolutional neural network for retinal vessel image segmentation
Amin et al. End-to-end deep learning model for corn leaf disease classification
CN112766379B (en) Data equalization method based on deep learning multiple weight loss functions
CN109492556B (en) Synthetic aperture radar target identification method for small sample residual error learning
CN108648191A (en) Pest image-recognizing method based on Bayes's width residual error neural network
CN112001270A (en) Ground radar automatic target classification and identification method based on one-dimensional convolutional neural network
Lameski et al. Skin lesion segmentation with deep learning
Sabrol et al. Fuzzy and neural network based tomato plant disease classification using natural outdoor images
CN109558896A (en) The disease intelligent analysis method and system with deep learning are learned based on ultrasound group
CN109840518B (en) Visual tracking method combining classification and domain adaptation
Venmathi et al. An automatic brain tumors detection and classification using deep convolutional neural network with VGG-19
Sharif et al. M3BTCNet: multi model brain tumor classification using metaheuristic deep neural network features optimization
CN111524140B (en) Medical image semantic segmentation method based on CNN and random forest method
Chudzik et al. DISCERN: Generative framework for vessel segmentation using convolutional neural network and visual codebook
Rajendran et al. Hyperspectral image classification model using squeeze and excitation network with deep learning
Wang OCT image recognition of cardiovascular vulnerable plaque based on CNN
CN116469561A (en) Breast cancer survival prediction method based on deep learning
CN113627240B (en) Unmanned aerial vehicle tree species identification method based on improved SSD learning model
Radhika et al. Ensemble subspace discriminant classification of satellite images
CN113392871B (en) Polarized SAR (synthetic aperture radar) ground object classification method based on scattering mechanism multichannel expansion convolutional neural network
Lu et al. Image recognition of rice leaf diseases using atrous convolutional neural network and improved transfer learning algorithm
Cho et al. Fruit ripeness prediction based on DNN feature induction from sparse dataset
CN113030902B (en) Twin complex network-based few-sample radar vehicle target identification method
CN117390371A (en) Bearing fault diagnosis method, device and equipment based on convolutional neural network
CN115761240B (en) Image semantic segmentation method and device for chaotic back propagation graph neural network

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant