CN115861676A - Extreme value distribution-based radar high-resolution range profile open-set identification method - Google Patents

Publication number: CN115861676A (pending)
Application number: CN202211378461.3A, filed by Xidian University
Authority: CN (China); original language: Chinese (zh)
Prior art keywords: sample set, neural network, convolutional neural, layer, convolution
Inventors: 刘宏伟, 王鹏辉, 夏子恒, 丁军
Assignee (current and original): Xidian University

Landscapes

  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses an extreme-value-distribution-based open-set identification method for radar high-resolution range profiles, comprising the following steps: establish and preprocess a training sample set and a test sample set; construct a convolutional neural network; train the convolutional neural network with the training sample set; extract high-dimensional features of the training sample set with the trained network; compute the Euclidean distances of the training features to their class feature centers and fit an extreme value distribution to those distances; extract high-dimensional features of the test sample set with the trained network; and perform open-set identification on the test features using the cumulative probability distribution function of the fitted extreme value distribution. The method identifies targets of known classes in the radar target recognition database while rejecting targets of unknown classes from outside the database, improving the accuracy and practicality of the radar target recognition system and effectively raising the radar's level of automation and intelligence.

Description

Extreme value distribution-based radar high-resolution range profile open-set identification method
Technical Field
The invention belongs to the technical field of radars, and particularly relates to a radar high-resolution range profile open set identification method based on extreme value distribution.
Background
The range resolution of a radar is proportional to the received pulse width after matched filtering; the range-cell length of the transmitted signal satisfies ΔR = cτ/2 = c/(2B), where ΔR is the range-cell length of the radar-transmitted signal, c is the speed of light, τ is the matched-reception pulse width, and B is the bandwidth of the transmitted signal. A large transmit bandwidth therefore yields high range resolution. In practice, whether the range resolution is "high" or "low" is relative to the observed target. Let L be the target's extent along the radar line of sight. If L << ΔR, the width of the echo signal is approximately the same as the transmitted pulse width (the received pulse after matched processing), and the return is generally called a "point"-target echo; such a radar is a low-resolution radar. If ΔR << L, the target echo spreads across range according to the target's structure, forming a "one-dimensional range profile"; such a radar is a high-resolution radar (here << means "far less than").
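As a quick numerical illustration of ΔR = c/(2B); the bandwidth value below is an assumption chosen for illustration, not a figure from the patent:

```python
# Hypothetical example of the range-resolution relation Delta_R = c / (2B).
c = 3.0e8        # speed of light, m/s
B = 500.0e6      # assumed transmit bandwidth, Hz (illustrative only)
delta_R = c / (2 * B)   # range-cell length in metres: 0.3 m
```

With a 500 MHz bandwidth the range cell is 0.3 m, small enough for an aircraft-sized target to occupy many cells, which is the high-resolution regime ΔR << L.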
A high-resolution radar transmits wideband coherent signals (linear frequency modulation or stepped-frequency signals) and receives echo data backscattered by the target. Echo characteristics are generally calculated with a simplified scattering-point model, i.e. using the Born first-order approximation, which ignores multiple scattering. The fluctuations and peaks appearing in a high-resolution radar echo reflect the distribution of the radar cross section (RCS) of scatterers on the target (such as the nose, wings, tail rudder, air inlet and engine) along the radar line of sight (RLOS) at a given radar aspect angle, and reflect the relative geometric relationship of the scattering points in the radial direction; this echo is commonly called a high-resolution range profile (HRRP). An HRRP sample therefore contains important structural features of the target and is valuable for target identification and classification.
Traditional target identification methods for high-resolution range profile data mainly use a support vector machine to classify targets directly, or use a restricted-Boltzmann-machine-based feature extraction method to first project the data into a high-dimensional space and then classify it with a separate classifier. However, these methods only exploit the time-domain characteristics of the signal, and the target identification accuracy is not high.
In recent years, target identification methods for radar high-resolution range profile data have mainly addressed closed-set identification, which requires that the classes of data in the test sample set coincide with the classes in the training sample set. In practice, however, besides high-resolution range profiles of targets in the target recognition database, the radar also captures profiles of many targets of unknown classes from outside the database. In this situation, existing closed-set identification algorithms cannot reject out-of-database target data of unknown classes; instead they misclassify it as some in-database class, greatly reducing the accuracy and practicality of radar target recognition.
Thus, some researchers have begun to study open-set identification of radar high-resolution range profiles. For example, building on support vector data description (SVDD), Chaijing et al. propose a multi-kernel SVDD model to describe the multimodal distribution of HRRP data in a high-dimensional feature space more flexibly, thereby improving the identification and rejection performance for radar HRRP. Zhankou et al. propose a multi-classifier fusion algorithm based on a maximum correlation classifier (MCC), a support vector machine (SVM) and a relevance vector machine (RVM) to implement rejection and identification of radar HRRP. However, both algorithms rely on kernel functions of specific forms to extract features, which limits the models' ability to extract sufficiently separable features and thereby affects target recognition accuracy and the radar's level of intelligence.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a radar high-resolution range profile open set identification method based on extreme value distribution. The technical problem to be solved by the invention is realized by the following technical scheme:
the first aspect of the embodiments of the present invention provides an extremum distribution-based radar high-resolution range profile open set identification method, including the following steps:
establishing a first training sample set and a first testing sample set; the training sample set comprises radar high-resolution range profiles of a plurality of targets with known types, and the testing sample set comprises radar high-resolution range profiles of a plurality of targets with known types and radar high-resolution range profiles of targets with unknown types outside a radar target recognition database;
preprocessing the radar high-resolution range profile in the training sample set and the test sample set to obtain a second training sample set and a second test sample set;
constructing a convolutional neural network;
training the convolutional neural network by using the second training sample set to obtain a trained convolutional neural network;
extracting high-dimensional features of the second training sample set by using the trained convolutional neural network;
calculating feature centers of the high-dimensional features of the second training sample set and Euclidean distances from each high-dimensional feature to the corresponding feature center, and performing extreme value distribution fitting on all the Euclidean distances to obtain an extreme value distribution cumulative probability distribution function;
extracting high-dimensional features of the second test sample set by using the trained convolutional neural network;
and performing open set identification on the high-dimensional features of the second test sample set by using the extreme value distribution cumulative probability distribution function to obtain an identification result of the second test sample set.
In an embodiment of the present invention, the preprocessing the radar high-resolution range profiles in the training sample set and the test sample set to obtain a second training sample set and a second test sample set after preprocessing includes:
and sequentially carrying out gravity center alignment and normalization processing on the radar high-resolution range profiles in the training sample set and the test sample set to obtain a second training sample set and a second test sample set after preprocessing.
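The two preprocessing steps (center-of-gravity alignment, then two-norm normalization) might be sketched as follows; the circular-shift alignment and the centering-at-the-middle-cell convention are assumptions, since the patent does not spell them out here:

```python
import numpy as np

def preprocess_hrrp(x):
    """Sketch of the preprocessing: circularly shift the profile so its
    amplitude centre of gravity sits at the middle range cell, then apply
    two-norm (L2) normalization. Details are assumptions for illustration."""
    x = np.asarray(x, dtype=float)
    D = x.size
    cells = np.arange(D)
    cog = np.sum(cells * x) / np.sum(x)       # centre of gravity, in cells
    shift = int(round(D / 2 - cog))           # shift needed to centre the CoG
    x_aligned = np.roll(x, shift)             # centre-of-gravity alignment
    return x_aligned / np.linalg.norm(x_aligned)  # ||x||_2 = 1
```

The L2 normalization removes amplitude sensitivity and the alignment removes translation sensitivity, matching the sensitivities the description later says are removed.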
In one embodiment of the present invention, the convolutional neural network comprises three convolutional layers and a fourth fully-connected layer; the three convolutional layers are the first convolutional layer, the second convolutional layer and the third convolutional layer. The convolution stride of each convolutional layer is the same; each convolutional layer comprises a plurality of convolution kernels, all of the same size;
wherein the loss function of the convolutional neural network has an expression as follows:
L(x) = -log [ exp(-d(θ(x), O_k)) / Σ_{i=1}^{N} exp(-d(θ(x), O_i)) ] + λ·d(θ(x), O_k)

wherein θ(x) and θ(x_k) are output results of the convolutional neural network; O_i (i = 1, …, k, …, N) is the i-th prototype randomly initialized from a Gaussian distribution; d(θ(x), O_k) is the Euclidean distance from θ(x) to O_k; and λ is a hyperparameter.
In an embodiment of the present invention, the training the convolutional neural network by using the second training sample set to obtain a trained convolutional neural network, including:
randomly dividing the sample data of the second training sample set into q batches, wherein the data of each batch is an n × D dimensional matrix; wherein

q = floor(P / n),

floor(·) represents rounding down, and P represents the number of high-resolution range profiles in the second training sample set;
sequentially inputting the sample data of each batch into the convolutional neural network for processing to obtain an output result of the convolutional neural network;
and calculating the value of the loss function according to the output result of the convolutional neural network and the loss function of the convolutional neural network, and updating the parameter value of the convolutional neural network by adopting a random gradient method until the network is converged to obtain the trained convolutional neural network.
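A minimal sketch of the batching step described above, splitting the second training sample set into q = floor(P/n) batches of n × D each; dropping the leftover samples that do not fill a batch is an assumption:

```python
import numpy as np

def make_batches(X, n, seed=0):
    """Randomly split the P x D training matrix X into q = floor(P/n)
    batches of shape n x D (leftover rows are dropped, an assumption)."""
    rng = np.random.default_rng(seed)
    P = X.shape[0]
    idx = rng.permutation(P)       # random division into batches
    q = P // n                     # q = floor(P / n)
    return [X[idx[b * n:(b + 1) * n]] for b in range(q)]
```

Each batch would then be fed through the network and the loss minimized by stochastic gradient descent until convergence, as the step above describes.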
In an embodiment of the present invention, the sequentially inputting the sample data of each batch into the convolutional neural network for processing to obtain an output result of the convolutional neural network, includes:
carrying out convolution and downsampling processing on input sample data of the current batch by using the first layer of convolution layer to obtain a first feature map;
performing convolution and downsampling processing on the first feature map by using a second layer of convolution layer to obtain a second feature map;
carrying out convolution and downsampling processing on the second feature map by using a third layer of convolution layer to obtain a third feature map;
carrying out nonlinear transformation processing on the third characteristic diagram by utilizing a fourth full-connection layer to obtain an output result of the current sample data;
and repeating the steps until the processing of the sample data of q batches is completed, and obtaining the output result of the convolutional neural network.
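The three convolution-plus-down-sampling stages described above can be sketched as a toy 1-D forward pass; the kernel values, the sigmoid activation and the mean-pooling down-sampling are stand-ins, not the patent's exact settings:

```python
import numpy as np

def conv1d_valid(x, kernel, stride=1):
    """Minimal 'valid' 1-D convolution used by the sketch below."""
    L = (x.size - kernel.size) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + kernel.size], kernel)
                     for i in range(L)])

def down_sample(x, m):
    """Non-overlapping down-sampling with window m and stride m."""
    L = x.size // m
    return x[:L * m].reshape(L, m).mean(axis=1)

def forward_sketch(x, kernels, m=2):
    """Toy forward pass mirroring the structure above: three convolution +
    down-sampling stages; the final fully-connected map is omitted here."""
    h = x
    for K in kernels:                                  # three conv layers
        h = 1.0 / (1.0 + np.exp(-conv1d_valid(h, K)))  # sigmoid (assumed)
        h = down_sample(h, m)                          # down-sampling
    return h                                           # feature stand-in
```

A 64-cell profile passed through three width-3 kernels with window-2 down-sampling shrinks to a 6-element feature vector, illustrating how the stages progressively compress the range dimension.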
In an embodiment of the present invention, the expression of the feature center of the high-dimensional features of the k-th class of the second training sample set is:

O_k = (1/N_k) Σ_{i=1}^{N_k} θ(x_k^i),

wherein x_k^i represents the i-th sample of the k-th class in the second training sample set, and the k-th class contains N_k samples in total;
the expression of the extreme value distribution cumulative probability distribution function of the k-th class is:

F_k(d) = exp{ -[1 + ξ(d - α)/β]^(-1/ξ) },

wherein F_k(d) represents the extreme value distribution cumulative probability distribution function and its argument d is the Euclidean distance from each high-dimensional feature to the corresponding feature center; ξ, α and β represent the shape, location and scale parameters of the extreme value distribution cumulative probability distribution function.
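The per-class fitting step can be sketched with SciPy's generalized extreme value distribution as a stand-in for the ξ, α, β parameterization; the synthetic feature data and the use of `genextreme` here are assumptions for illustration:

```python
import numpy as np
from scipy.stats import genextreme

def fit_class_evt(features_k):
    """Fit a per-class extreme value model to the Euclidean distances of a
    class's high-dimensional features from their feature centre.
    features_k: N_k x U matrix of features for class k."""
    center = features_k.mean(axis=0)                      # feature centre O_k
    dists = np.linalg.norm(features_k - center, axis=1)   # d(theta(x), O_k)
    shape, loc, scale = genextreme.fit(dists)             # stand-ins for xi, alpha, beta
    return center, (shape, loc, scale)
```

Repeating this for every known class yields one feature centre and one fitted cumulative distribution function per class, which the decision stage below consumes.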
In an embodiment of the present invention, the performing open-set identification on the high-dimensional features of the second test sample set by using the extremum distribution cumulative probability distribution function to obtain an identification result of the second test sample set includes:

when F_{k*}(d(θ(x), O_{k*})) > τ holds, where k* = argmin_k d(θ(x), O_k), the test sample corresponding to the high-dimensional feature of the second test sample set is of an unknown class outside the database;

when it does not hold, the class of the test sample corresponding to the high-dimensional feature of the second test sample set is k* = argmin_k d(θ(x), O_k);

wherein τ is a preset decision threshold.
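The decision rule might be sketched as follows, reading it as "reject when the fitted extreme-value CDF at the nearest-centre distance exceeds τ"; the exact form of the patent's test and the GEV parameterization used here are assumptions:

```python
import numpy as np
from scipy.stats import genextreme

def open_set_decide(feature, centers, evt_params, tau):
    """Find the nearest class centre; if the fitted extreme-value CDF at
    that distance exceeds tau, reject the sample as an out-of-library
    unknown, otherwise output the nearest known class."""
    dists = np.array([np.linalg.norm(feature - c) for c in centers])
    k = int(np.argmin(dists))                 # nearest class k*
    shape, loc, scale = evt_params[k]         # per-class fitted parameters
    if genextreme.cdf(dists[k], shape, loc=loc, scale=scale) > tau:
        return -1                             # unknown class outside the database
    return k                                  # known class k*
```

Samples deep inside a known cluster give a small CDF value and keep their nearest-centre label, while samples far beyond the fitted tail push the CDF toward 1 and are rejected.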
The invention has the beneficial effects that:
1. The radar high-resolution range profile open-set identification method provided by the invention uses a convolutional neural network to combine the primary features of each layer into higher-level features for identification, significantly improving the recognition rate. The method can identify and classify known-class targets in the database while rejecting unknown-class targets outside the database, improving target identification accuracy and further raising the radar's level of automation and intelligence.
2. The method adopts a multilayer convolutional neural network structure and applies energy normalization and alignment preprocessing to the data. It can mine high-level features of the high-resolution range profile data and removes the amplitude sensitivity, translation sensitivity and attitude sensitivity of radar high-resolution range profile data, giving stronger robustness than traditional direct classification methods.
3. In the decision stage, the invention introduces an extreme value distribution fitted to the features of the known in-library classes, and decides the specific class of the target under test through the cumulative distribution function of that extreme value distribution. Compared with traditional distance-metric-based methods, this strategy effectively improves the recognition rate of samples at the edges of known-class clusters (i.e. extreme samples, which are often hard to identify correctly in recognition tasks), and has stronger recognition robustness.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
Fig. 1 is a schematic flowchart of the extremum-distribution-based radar high-resolution range profile open-set identification method provided by an embodiment of the present invention;
fig. 2 is a simulation test result provided by the embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but the embodiments of the present invention are not limited thereto.
Example one
As shown in fig. 1, a method for identifying a radar high-resolution range profile open set based on extremum distribution includes the following steps:
step 10, establishing a first training sample set and a first test sample set; the training sample set comprises radar high-resolution range profiles of a plurality of targets of known classes, and the test sample set comprises radar high-resolution range profiles of a plurality of targets of known classes and radar high-resolution range profiles of targets of unknown classes from outside the radar target recognition database;
step 20, preprocessing the radar high-resolution range profiles in the training sample set and the test sample set to obtain a second training sample set and a second test sample set;
step 30, constructing a convolutional neural network;
step 40, training the convolutional neural network by utilizing a second training sample set to obtain a trained convolutional neural network;
step 50, extracting high-dimensional features of a second training sample set by using the trained convolutional neural network;
step 60, calculating feature centers of the high-dimensional features of the second training sample set and Euclidean distances from each high-dimensional feature to the corresponding feature center, and performing extremum distribution fitting on all the Euclidean distances to obtain an extremum distribution cumulative probability distribution function;
step 70, extracting high-dimensional features of a second test sample set by using the trained convolutional neural network;
and 80, performing open set identification on the high-dimensional features of the second test sample set by using the extreme value distribution cumulative probability distribution function to obtain an identification result of the second test sample set.
In the embodiment, the method can be used for identifying and classifying the targets of the known classes in the database, and meanwhile, the targets of the unknown classes outside the database can be rejected, so that the target identification accuracy is improved, and the automation and intelligentization level of the radar is further improved.
Example two
A radar high-resolution range profile open set identification method based on extremum distribution comprises the following steps:
step 100, establishing a first training sample set and a first testing sample set; the training sample set comprises radar high-resolution range image sample data of targets of a plurality of known types, and the testing sample set comprises radar high-resolution range image sample data of targets of a plurality of known types and radar high-resolution range image sample data of targets of unknown types except the radar target identification database.
The step 100 specifically includes:
Step 101, acquiring P pieces of radar high-resolution range profile raw data of N classes as the first training sample set, wherein N is greater than or equal to 3 and P is greater than or equal to 900;
Step 102, acquiring Q pieces of radar high-resolution range profile raw data of the N known classes and R pieces of radar high-resolution range profile raw data of M unknown classes as the first test sample set, wherein Q is greater than or equal to 900, M is greater than or equal to 1, and R is greater than or equal to 300.
The P pieces and Q pieces of radar high-resolution range profile raw data are data of known classes in the database, and the R pieces of radar high-resolution range profile raw data are data of unknown classes outside the database.
Step 200, preprocessing the radar high-resolution range profile sample data in the training sample set and the test sample set to obtain a second training sample set and a second test sample set.
In this embodiment, center-of-gravity alignment and normalization are applied in turn to the raw data in the first training sample set and the first test sample set, yielding the preprocessed training and test sample sets, i.e. the second training sample set and the second test sample set.
Specifically, the raw data in the first training sample set or the first test sample set is denoted x_0. First, the raw data x_0 is center-of-gravity aligned to obtain the aligned data x'_0; then x'_0 is two-norm normalized to obtain the preprocessed sample data x, whose expression is:

x = x'_0 / ||x'_0||_2,

where x may represent the second training sample set x_train or the second test sample set x_test; the second training sample set and the second test sample set are matrices of dimension P × D and (Q + R) × D respectively, where D represents the total number of range cells contained in one piece of radar high-resolution range profile raw data.
And step 300, constructing a convolutional neural network.
In this embodiment, the convolutional neural network is provided to include three convolutional layers and a fully-connected layer, which are respectively referred to as a first convolutional layer, a second convolutional layer, a third convolutional layer and a fourth fully-connected layer. Each convolution layer has the same convolution step size, each convolution layer comprises a plurality of convolution kernels, and the sizes of the convolution kernels are the same.
Specifically, for the first layer convolutional layer:
setting the first convolutional layer to include C convolution kernels, the C kernels being denoted K, each of size 1 × w × 1, wherein w represents the kernel window size of each convolution kernel in the first convolutional layer, 1 < w < D; C is a positive integer greater than 0; setting the convolution stride of the first convolutional layer to L; setting the kernel window size of the down-sampling of the first convolutional layer to m × m, wherein 1 < m < D, D represents the total number of range cells contained in one radar high-resolution range profile in the second training sample set, and m is a positive integer greater than 0; and setting the stride of the down-sampling of the first convolutional layer to I, wherein I and m are equal in value.

The activation function of the first convolutional layer is set to y = f(K ⊛ x + b), wherein x represents the preprocessed sample data (sample data of the second training sample set or the second test sample set), ⊛ represents the convolution operation, and b represents the all-ones bias of the first convolutional layer.
For the second layer of convolutional layers:
setting the second convolutional layer to include C' convolution kernels, the C' kernels being denoted K', equal in value to the C kernels K of the first convolutional layer; the convolution stride of the second convolutional layer is denoted L', with w ≤ L' ≤ D - w, and L' is equal in value to the convolution stride L of the first convolutional layer; setting the kernel window size of the down-sampling of the second convolutional layer to m' × m', wherein 1 < m' < D and m' is a positive integer greater than 0; the stride of the down-sampling of the second layer is I', wherein I' and m' are equal in value.

The activation function of the second convolutional layer is set to y' = f(K' ⊛ ỹ + b'), wherein ỹ represents the first feature map output by the first convolutional layer, ⊛ represents the convolution operation, and b' represents the all-ones bias of the second convolutional layer.
For the third layer of convolutional layers:
setting the third convolutional layer to include C'' convolution kernels, the C'' kernels being denoted K'', each with the same window size as the convolution kernels of the second convolutional layer; setting the convolution stride of the third convolutional layer to L'', equal in value to the convolution stride L' of the second convolutional layer; meanwhile, setting the kernel window size of the down-sampling of the third convolutional layer to m'' × m'', wherein 1 < m'' < D and m'' is a positive integer greater than 0; the stride of the down-sampling of the third layer is I'', wherein I'' and m'' are equal in value.

The activation function of the third convolutional layer is set to y'' = f(K'' ⊛ ỹ' + b''), wherein ỹ' represents the second feature map output by the second convolutional layer, ⊛ represents the convolution operation, and b'' represents the all-ones bias of the third convolutional layer.
For the fourth fully connected layer:
setting its randomly initialized weight matrix
Figure BDA0003927775060000114
Is B × U dimension matrix, is asserted>
Figure BDA0003927775060000115
floor () represents a downward integer, D represents the total number of distance units respectively contained in one radar high-resolution range imaging data in the second training sample, B is greater than or equal to D, and B is a positive integer greater than 0; setting an activation function to
Figure BDA0003927775060000116
A third characteristic diagram representing the output of the convolutional layer of the third layer, <' >>
Figure BDA0003927775060000117
Represents a full 1 bias of a fully connected layer of the fourth layer, and +>
Figure BDA0003927775060000118
Is U × 1 dimension.
After the model of the convolutional neural network is constructed, constructing a loss function of the convolutional neural network, wherein the expression of the loss function is as follows:
L(x) = -log [ exp(-d(θ(x), O_k)) / Σ_{i=1}^{N} exp(-d(θ(x), O_i)) ] + λ·d(θ(x), O_k)

wherein θ(x) and θ(x_k) are outputs of the convolutional neural network, and x_k represents training sample data of the k-th class; O_i (i = 1, …, k, …, N) is the i-th prototype randomly initialized from a Gaussian distribution, N in total; d(θ(x), O_k) is the Euclidean distance from θ(x) to O_k; and λ is a hyperparameter.
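Read as a distance-based cross-entropy over the N prototypes plus a prototype-pulling penalty weighted by λ (a common convolutional-prototype-learning form; the exact formula here is a reconstruction, so treat it as an assumption), the loss can be sketched as:

```python
import numpy as np

def prototype_loss(theta_x, prototypes, k, lam):
    """Sketch of the prototype loss: softmax cross-entropy over negative
    (squared) distances to the N prototypes, plus lam times the distance to
    the true-class prototype. Squared Euclidean distance is an assumption.
    theta_x: network output (U,); prototypes: (N, U); k: true class index."""
    d = np.sum((prototypes - theta_x) ** 2, axis=1)   # d(theta(x), O_i)
    logits = -d                                       # closer prototype -> larger logit
    m = logits.max()
    log_p = logits - (m + np.log(np.sum(np.exp(logits - m))))  # stable log-softmax
    return -log_p[k] + lam * d[k]                     # cross-entropy + pulling term
```

Minimizing the first term separates the classes around their prototypes, while the λ term pulls each sample's feature toward its own prototype, tightening the clusters that the extreme value fit later models.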
Step 400, training the convolutional neural network by using a second training sample set to obtain a trained convolutional neural network, which specifically includes:
step 401, randomly dividing sample data of a second training sample set into q batches, wherein the data of each batch is n × D dimensional matrix data; wherein,
q = floor(P / n),

wherein floor(·) represents rounding down and P represents the number of high-resolution range profiles in the second training sample set.
And step 402, sequentially inputting the sample data of each batch into the convolutional neural network for processing to obtain an output result of the convolutional neural network. Specifically, step 402 includes steps 402-1 to 402-5:
step 402-1, after the sample data is input into the convolutional neural network, the currently input sample data is convolved and downsampled by using the first layer of convolutional layer to obtain a first feature map.
Specifically, the input sample data x is convolved with the C convolution kernels K of the first convolutional layer using the convolution stride L of the first convolutional layer, to obtain the C convolution results of the first convolutional layer, recorded as the C feature maps y of the first convolutional layer:

y = f(K ⊛ x + b).

Gaussian normalization is applied to the C feature maps y of the first convolutional layer, giving the C Gaussian-normalized feature maps ŷ of the first convolutional layer. Each feature map in ŷ is then down-sampled within the kernel window of size m × m of the first-layer down-sampling, giving the C down-sampled feature maps ỹ of the first convolutional layer, i.e. the first feature map.
Step 402-2, performing convolution and down-sampling on the first feature map using the second convolutional layer to obtain a second feature map.

Specifically, the C down-sampled feature maps ỹ of the first convolutional layer (i.e. the first feature map) are convolved with the C' convolution kernels K' of the second convolutional layer using the convolution stride L' of the second convolutional layer, to obtain the C' convolution results of the second convolutional layer, recorded as the C' feature maps y' of the second convolutional layer.

Gaussian normalization is applied to the C' feature maps y' of the second convolutional layer, giving the C' Gaussian-normalized feature maps ŷ' of the second convolutional layer. Each feature map in ŷ' is then down-sampled within the kernel window of size m' × m' of the second-layer down-sampling, giving the C' down-sampled feature maps ỹ' of the second convolutional layer, i.e. the second feature map.
Step 402-3: perform convolution and downsampling processing on the second feature map using the third convolutional layer to obtain the third feature map.
Specifically, with the convolution step L″ of the third convolutional layer, the C′ feature maps Z₂ output by the downsampling of the second convolutional layer (i.e., the second feature map) are convolved with the C″ convolution kernels K″ of the third convolutional layer, giving the C″ convolution results of the third layer, recorded as the C″ feature maps Y₃,₁, …, Y₃,C″ of the third convolutional layer. Gaussian normalization is applied to these C″ feature maps, yielding the C″ Gaussian-normalized feature maps Ŷ₃,₁, …, Ŷ₃,C″ of the third convolutional layer. Each of these feature maps is then downsampled, giving the C″ feature maps Z₃,₁, …, Z₃,C″, i.e., the third feature map, expressed as:

Z₃,c = pool_{m″×m″}(Ŷ₃,c), c = 1, …, C″,

where pool_{m″×m″}(·) represents the downsampling operation of the third layer, taken within a kernel window of size m″×m″, and Ŷ₃,₁, …, Ŷ₃,C″ represent the C″ Gaussian-normalized feature maps of the third convolutional layer.
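The per-layer pattern of steps 402-1 to 402-3 (convolution at a given stride, Gaussian normalization, then downsampling) can be sketched as a minimal 1-D example. This is an illustration only: the layer sizes, the per-map form of the Gaussian normalization, and the max-pooling rule are assumptions, not the patent's exact specification.

```python
import numpy as np

def conv_layer(x, kernels, step):
    """Convolve each input map with each kernel at the given stride.

    x: (C_in, D) input feature maps; kernels: (C_out, C_in, K).
    Returns (C_out, D_out) feature maps (valid convolution).
    """
    c_out, c_in, k = kernels.shape
    d_out = (x.shape[1] - k) // step + 1
    y = np.zeros((c_out, d_out))
    for o in range(c_out):
        for j in range(d_out):
            y[o, j] = np.sum(x[:, j * step:j * step + k] * kernels[o])
    return y

def gaussian_normalize(y, eps=1e-8):
    # Assumed form: zero-mean, unit-variance normalization per feature map;
    # the patent normalizes within the kernel window, whose exact rule is
    # not recoverable from this text.
    return (y - y.mean(axis=1, keepdims=True)) / (y.std(axis=1, keepdims=True) + eps)

def downsample(y, m):
    # Max-pooling with window m (an assumption; the patent does not fix the rule).
    d = (y.shape[1] // m) * m
    return y[:, :d].reshape(y.shape[0], -1, m).max(axis=2)

# One layer stage on a toy HRRP of length 64, with C = 4 kernels of size 3.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 64))
k1 = rng.standard_normal((4, 1, 3))
z1 = downsample(gaussian_normalize(conv_layer(x, k1, step=1)), m=2)
print(z1.shape)  # (4, 31)
```

Stacking three such stages with kernel counts C, C′, C″ reproduces the shape of the pipeline described above.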
Step 402-4: apply a nonlinear transformation to the third feature map using the fourth, fully connected layer to obtain the processing result Θ(x) of the current sample data, expressed as:

Θ(x) = σ(W·Z₃ + b),

where σ(·) denotes the nonlinear activation of the fully connected layer, W denotes the randomly initialized weight matrix of the fourth (fully connected) layer, and b denotes the all-ones bias of the fourth (fully connected) layer.
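Step 402-4 amounts to flattening the third feature map and applying one affine map plus an activation. The sketch below assumes a ReLU activation and a flattened input, neither of which is fixed by the patent; the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
z3 = rng.standard_normal((8, 10))          # third feature map: C'' maps of length 10
W = rng.standard_normal((32, z3.size))     # randomly initialized weight matrix
b = np.ones(32)                            # all-ones bias, as stated in the text

# High-dimensional feature Theta(x); ReLU is an assumed activation choice.
theta = np.maximum(0.0, W @ z3.ravel() + b)
print(theta.shape)  # (32,)
```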
Step 402-5: repeat steps 402-1 to 402-4 until all batches of input data have been processed, obtaining the output result of the convolutional neural network, also referred to as the high-dimensional features.
Step 403: calculate the loss function value from the output result of the convolutional neural network, and update the parameter values of the convolutional neural network by the stochastic gradient method until the network converges, obtaining the trained convolutional neural network.
Specifically, the output result Θ(x) of the convolutional neural network obtained in step 402 is substituted into the loss function expression of the convolutional neural network to obtain the value of the loss function, and the parameter values of the convolutional neural network are updated by the conventional stochastic gradient method until the network converges, giving the trained convolutional neural network. The stochastic gradient method is well known in the art and is not described in detail in this embodiment.
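The stochastic gradient update referred to above is the standard rule θ ← θ − η·∇L(θ). A minimal sketch on a toy quadratic loss (not the patent's loss function) shows the convergence behavior:

```python
import numpy as np

def sgd_step(theta, grad, lr=0.1):
    # Plain stochastic-gradient update: theta <- theta - lr * grad(theta).
    return theta - lr * grad(theta)

# Toy example: minimize (theta - 3)^2, whose gradient is 2 * (theta - 3).
theta = 0.0
for _ in range(100):
    theta = sgd_step(theta, lambda t: 2.0 * (t - 3.0))
print(round(theta, 4))  # 3.0
```

In the method itself the gradient would be taken through the three convolutional layers and the fully connected layer with respect to the network's loss.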
This embodiment adopts a multilayer convolutional neural network structure and preprocesses the data by two-norm normalization and alignment, so that high-level features of the high-resolution range profile data can be mined while the amplitude, translation and attitude sensitivity of the data is removed, giving stronger robustness than traditional direct classification methods.
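The two-norm normalization and alignment preprocessing mentioned above can be sketched as follows. The circular-shift, center-of-gravity alignment rule used here is a common HRRP convention and is an assumption, not quoted from the patent.

```python
import numpy as np

def preprocess_hrrp(x):
    """L2-normalize an HRRP and circularly shift its center of gravity
    to the middle range cell (one common alignment convention)."""
    x = np.asarray(x, dtype=float)
    x = x / (np.linalg.norm(x) + 1e-12)          # two-norm normalization
    cells = np.arange(x.size)
    cog = np.sum(cells * x**2) / np.sum(x**2)    # power-weighted center of gravity
    shift = x.size // 2 - int(round(cog))
    return np.roll(x, shift)                     # gravity-center alignment

# A single-peak toy profile: the peak moves to the middle cell.
x = np.zeros(16)
x[2] = 1.0
y = preprocess_hrrp(x)
print(int(np.argmax(y)))  # 8
```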
Step 500: using the trained convolutional neural network, extract the high-dimensional features Θ(x_train) of the second training sample set x_train according to steps 401 to 402.
Step 600, calculating feature centers of the high-dimensional features of the second training sample set and Euclidean distances from each high-dimensional feature to the corresponding feature center, and performing extremum distribution fitting on all the Euclidean distances to obtain an extremum distribution cumulative probability distribution function.
Specifically, the feature center O_k of the high-dimensional features of the k-th class of sample data in the second training sample set is:

O_k = (1/N_k) · Σ_{i=1}^{N_k} Θ(x_i^k),

where x_i^k represents the i-th sample of the k-th class in the second training sample set, and the class contains N_k samples in total. The Euclidean distance d_i^k = ||Θ(x_i^k) − O_k||₂ from each high-dimensional feature of each class to the feature center of that class is then calculated, and extreme value distribution fitting is performed on these Euclidean distances. The fitted extreme value distribution cumulative probability distribution function of the k-th class is:

F_k(t) = exp{ −[1 + ξ(t − α)/β]^{−1/ξ} },
where t is the independent variable of the function and represents the Euclidean distance from a high-dimensional feature to the feature center, and ξ, α and β are the parameters of the extreme value distribution cumulative probability distribution function. In this embodiment, the extreme value distribution built on the features of the known in-library classes is introduced at the decision stage, and the specific class of the target under test is judged through the cumulative distribution function of the extreme value distribution; this removes the dimension sensitivity introduced by distance-based criteria and is more robust than traditional classification methods. Compared with conventional distance-measure methods, this strategy effectively improves the recognition rate of samples at the edge of a known class cluster (i.e., the extreme samples that are often the hardest objects to recognize correctly in a recognition task) and offers stronger recognition robustness.
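The feature-center, distance, and extreme value CDF computation can be sketched as below. The ξ, α, β values here are illustrative placeholders; in the method they would be fitted to the training distances, and the fitting procedure itself is not reproduced.

```python
import numpy as np

def gev_cdf(t, xi, alpha, beta):
    """Extreme value (GEV-form) cumulative distribution as given above:
    F(t) = exp(-[1 + xi*(t - alpha)/beta]**(-1/xi)), on its support."""
    z = 1.0 + xi * (t - alpha) / beta
    return np.exp(-np.power(np.maximum(z, 1e-12), -1.0 / xi))

rng = np.random.default_rng(2)
feats = rng.standard_normal((100, 16))           # high-dimensional features of one class
center = feats.mean(axis=0)                      # feature center O_k
dists = np.linalg.norm(feats - center, axis=1)   # Euclidean distances d_i^k

# Placeholder parameters; real values come from extreme value fitting.
F = gev_cdf(dists, xi=0.1, alpha=dists.mean(), beta=dists.std())
print(bool(F.min() >= 0.0 and F.max() <= 1.0))  # True
```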
Step 700: using the trained convolutional neural network, extract the high-dimensional features Θ(x_test) of the second test sample set x_test according to steps 401 and 402.
Step 800: perform open-set identification on each high-dimensional feature of the second test sample set using the extreme value distribution cumulative probability distribution function to obtain the identification result of the second test sample set.
Specifically, the Euclidean distance d_k = ||Θ(x_test) − O_k||₂ from the high-dimensional feature Θ(x_test) to each feature center O_k is calculated. When F_k(d_k) > τ holds for every class k, the test sample corresponding to this high-dimensional feature of the second test sample set is judged to be an unknown class outside the database; when the condition does not hold, the sample is judged to be a known in-library class, and the class of the test sample corresponding to this high-dimensional feature is k* = argmin_k d_k, where τ represents a preset decision threshold.
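The decision rule just described can be sketched as follows. The all-classes-exceed-τ rejection form is an assumption consistent with the text, and the per-class CDFs used here are simple placeholders, not fitted extreme value distributions.

```python
import numpy as np

def open_set_decide(theta, centers, cdfs, tau):
    """If the extreme value CDF of the distance to every class center
    exceeds tau, reject as unknown (-1); otherwise assign the
    nearest-center in-library class."""
    d = np.linalg.norm(centers - theta, axis=1)
    probs = np.array([cdfs[k](d[k]) for k in range(len(centers))])
    if np.all(probs > tau):
        return -1                 # unknown class outside the database
    return int(np.argmin(d))      # known in-library class

centers = np.array([[0.0, 0.0], [5.0, 5.0]])
cdfs = [lambda t: 1 - np.exp(-t)] * 2   # placeholder CDFs for illustration
print(open_set_decide(np.array([0.1, 0.0]), centers, cdfs, tau=0.99))  # 0
```

A feature far from every center drives all CDF values toward 1, so it is rejected as out-of-library.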
EXAMPLE III
The embodiment of the invention also discloses a radar high-resolution range profile open-set identification device based on extreme value distribution, which comprises:
the data acquisition module is used for establishing a first training sample set and a first test sample set; the training sample set comprises radar high-resolution range profiles of a plurality of targets of known classes, and the test sample set comprises radar high-resolution range profiles of a plurality of targets of known classes and radar high-resolution range profiles of targets of unknown classes outside the database;
the preprocessing module is used for preprocessing the radar high-resolution range profiles in the training sample set and the test sample set to obtain a second training sample set and a second test sample set;
the model building module is used for building a convolutional neural network;
the training module is used for training the convolutional neural network by utilizing a second training sample set to obtain a trained convolutional neural network;
the first extraction module is used for extracting high-dimensional features of the second training sample set by using the trained convolutional neural network;
the extreme value distribution fitting module is used for calculating the feature centers of the high-dimensional features of the second training sample set and the Euclidean distances from each high-dimensional feature to the feature centers, and carrying out extreme value distribution fitting on all the Euclidean distances to obtain an extreme value distribution cumulative probability distribution function;
the second extraction module is used for extracting high-dimensional features of the second test sample set by using the trained convolutional neural network;
and the target identification module is used for performing open set identification on the high-dimensional features of the second test sample set by using the extreme value distribution cumulative probability distribution function to obtain an identification result of the second test sample set.
The radar high-resolution range profile open set identification device provided by this embodiment can implement the radar high-resolution range profile open set identification method provided by the first embodiment, and the detailed process is not repeated here.
Therefore, the radar high-resolution range profile open-set identification device provided by this embodiment likewise has the advantages that targets of known classes in the database can be identified and classified while targets of unknown classes outside the database are rejected, with high target identification accuracy.
Example four
The following is a simulation test to verify the beneficial effects of the present invention.
1. Simulation conditions
The hardware platform of the simulation experiment of the embodiment is as follows:
a processor: intel (R) Core (TM) i9-10980XE, the dominant frequency of 3.00GHz and the memory of 256GB.
The software platform of the simulation experiment of the embodiment is as follows: ubuntu 20.04 operating system and python 3.9.
The data used in the simulation test are measured high-resolution range profile data of 13 aircraft types: An-26, Cessna, Yak-42, A319, A320, A330-2, A330-3, B737-8, CRJ-900, A321, A350-941, B737-7 and B747-89L. The first 3 aircraft types are taken as the known in-library target classes and the last 10 as the unknown out-of-library target classes to build the training sample set and the test sample set. The training sample set contains about 50000 samples, about 15000 for each in-library class; the test sample set contains about 30000 samples in total for the 3 known in-library classes and about 30000 samples in total for the 10 unknown out-of-library classes, with about 3000 samples for each class.
Before performing the experiment, all raw data were preprocessed according to step 200 of the second embodiment, and then the open set identification experiment was performed using the present invention.
2. Simulation content and result analysis
The simulation experiment compares the method of the invention with the traditional two-stage rejection identification method and the SoftMax threshold value method.
The traditional two-stage rejection identification method first rejects out-of-library targets using methods such as SVDD (support vector data description) and OCSVM (one-class support vector machine), and then further classifies the targets judged to be in-library using methods such as the SVM (support vector machine). The SoftMax threshold rule judges the class of the target under test from the magnitude of the final classification output of the convolutional neural network: if the final classification output of the network is larger than a defined threshold, the target is judged to be an in-library class, and otherwise an out-of-library class.
The simulation experiment uses the area under the receiver operating characteristic (ROC) curve, AUC, to evaluate the rejection capability of the different methods for out-of-library targets; the larger the AUC value, the stronger the rejection capability for out-of-library targets.
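The AUC can be computed without building the full ROC curve via the rank-sum (Mann-Whitney) equivalence: it equals the probability that a randomly chosen in-library sample scores higher than a randomly chosen out-of-library sample. A minimal sketch, with illustrative scores:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC via the rank-sum statistic: P(score_pos > score_neg),
    counting ties as one half."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    return float(np.mean(pos > neg) + 0.5 * np.mean(pos == neg))

# Perfectly separated scores give AUC = 1.0.
print(auc([0.9, 0.8, 0.7], [0.3, 0.2, 0.1]))  # 1.0
```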
Referring to fig. 2, fig. 2 is a comparison diagram of the simulation test results provided in an embodiment of the present invention. It can be seen from fig. 2 that in this simulation experiment the rejection capability of the present invention for out-of-library targets is the strongest, followed by the SoftMax threshold method, while the rejection capability of the 3 traditional methods for out-of-library targets is ordinary.
Since the simulation experiment uses many data classes, the Macro-Average F1-Score is used to comprehensively evaluate the open-set identification capability of the different methods; the larger the F1-Score value, the stronger the open-set identification capability. The simulation results are shown in the following table.
[Table: Macro-Average F1-Score comparison of the methods under test]
It can be seen that in this simulation experiment the comprehensive open-set identification capability of the invention is the strongest and is obviously superior to the other 3 methods.
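The Macro-Average F1-Score used above is the unweighted mean of per-class F1 scores. A minimal sketch on illustrative labels (not the experiment's data):

```python
import numpy as np

def macro_f1(y_true, y_pred, classes):
    """Macro-average F1: the unweighted mean of per-class F1 scores."""
    f1s = []
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    return float(np.mean(f1s))

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 1, 2, 1])
print(round(macro_f1(y_true, y_pred, [0, 1, 2]), 3))  # 0.822
```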
In conclusion, the invention obtains the best results both in rejection capability for out-of-library targets and in comprehensive open-set identification capability, which proves the effectiveness of the invention.
The radar high-resolution range profile open-set identification method provided by this embodiment, by adopting the convolutional neural network technique, can combine the primary features of each layer to obtain higher-level features for identification, which obviously improves the recognition rate. The method can identify and classify targets of known classes in the database while rejecting targets of unknown classes outside the database, improving the target identification accuracy and further improving the automation and intelligence level of the radar.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; they may be directly connected or indirectly connected through intervening media, or may be connected through the use of two elements or the interaction of two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In the present invention, unless expressly stated or limited otherwise, the recitation of a first feature "on" or "under" a second feature may include the recitation of the first and second features being in direct contact, and may also include the recitation that the first and second features are not in direct contact, but are in contact via another feature between them. Also, the first feature being "on," "above" and "over" the second feature includes the first feature being directly on and obliquely above the second feature, or merely indicating that the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature includes the first feature being directly under and obliquely below the second feature, or simply meaning that the first feature is at a lesser elevation than the second feature.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples described in this specification can be combined and combined by those skilled in the art.
The foregoing is a further detailed description of the invention in connection with specific preferred embodiments and it is not intended to limit the invention to the specific embodiments described. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (7)

1. A radar high-resolution range profile open set identification method based on extremum distribution is characterized by comprising the following steps:
establishing a first training sample set and a first testing sample set; the training sample set comprises radar high-resolution range profiles of a plurality of targets with known types, and the testing sample set comprises radar high-resolution range profiles of a plurality of targets with known types and radar high-resolution range profiles of targets with unknown types outside a radar target recognition database;
preprocessing the radar high-resolution range profile in the training sample set and the test sample set to obtain a second training sample set and a second test sample set;
constructing a convolutional neural network;
training the convolutional neural network by using the second training sample set to obtain a trained convolutional neural network;
extracting high-dimensional features of the second training sample set by using the trained convolutional neural network;
calculating feature centers of the high-dimensional features of the second training sample set and Euclidean distances from each high-dimensional feature to the corresponding feature center, and performing extreme value distribution fitting on all the Euclidean distances to obtain an extreme value distribution cumulative probability distribution function;
extracting high-dimensional features of the second test sample set by using the trained convolutional neural network;
and performing open set identification on the high-dimensional features of the second test sample set by using the extreme value distribution cumulative probability distribution function to obtain an identification result of the second test sample set.
2. The method according to claim 1, wherein the preprocessing the radar high-resolution range profiles in the training sample set and the test sample set to obtain a second training sample set and a second test sample set after preprocessing comprises:
and sequentially carrying out gravity center alignment and normalization processing on the radar high-resolution range profiles in the training sample set and the test sample set to obtain a second training sample set and a second test sample set after preprocessing.
3. The method of claim 1, wherein the convolutional neural network comprises: three convolutional layers and a fourth, fully connected layer; the three convolutional layers are respectively a first convolutional layer, a second convolutional layer and a third convolutional layer; the convolution step of each convolutional layer is the same; each convolutional layer comprises a plurality of convolution kernels, and the sizes of the convolution kernels are the same;
wherein the loss function of the convolutional neural network is expressed as:

L(x) = −log[ exp(−d(Θ(x), O_k)) / Σ_{i=1}^{N} exp(−d(Θ(x), O_i)) ] + λ·d(Θ(x), O_k),

wherein Θ(x) and Θ(x_k) are output results of the convolutional neural network, O_i (i = 1, …, k, …, N) is the i-th prototype randomly initialized according to a Gaussian distribution, d(Θ(x), O_k) is the Euclidean distance from Θ(x) to O_k, and λ is a hyperparameter.
4. The method of claim 1, wherein the training the convolutional neural network with the second training sample set to obtain a trained convolutional neural network comprises:
randomly dividing the sample data of the second training sample set into q batches, the data of each batch being n×D-dimensional matrix data, wherein n = floor(P/q), floor(·) represents rounding down, and P represents the number of high-resolution range profiles in the second training sample set;
sequentially inputting the sample data of each batch into the convolutional neural network for processing to obtain an output result of the convolutional neural network;
and calculating the value of the loss function according to the output result of the convolutional neural network and the loss function of the convolutional neural network, and updating the parameter values of the convolutional neural network by a stochastic gradient method until the network converges, obtaining the trained convolutional neural network.
5. The method according to claim 4, wherein the step of sequentially inputting the sample data of each batch into the convolutional neural network for processing to obtain the output result of the convolutional neural network comprises:
carrying out convolution and downsampling processing on input sample data of the current batch by using the first layer of convolution layer to obtain a first feature map;
performing convolution and downsampling processing on the first feature map by using a second layer of convolution layer to obtain a second feature map;
carrying out convolution and downsampling processing on the second feature map by using a third layer of convolution layer to obtain a third feature map;
carrying out nonlinear transformation processing on the third characteristic diagram by utilizing a fourth full-connection layer to obtain an output result of the current sample data;
and repeating the steps until the processing of the sample data of q batches is completed, and obtaining the output result of the convolutional neural network.
6. The method of claim 5, wherein the feature center of the high-dimensional features of the second training sample set is expressed as:

O_k = (1/N_k) · Σ_{i=1}^{N_k} Θ(x_i^k),

wherein x_i^k represents the i-th sample of the k-th class in the second training sample set, the k-th class containing N_k samples in total;

the expression of the cumulative probability distribution function of the extreme value distribution of the k-th class is:

F_k(t) = exp{ −[1 + ξ(t − α)/β]^{−1/ξ} },

wherein F_k(t) represents the extreme value distribution cumulative probability distribution function, t represents the Euclidean distance from each high-dimensional feature to the corresponding feature center and serves as the independent variable of the function, and ξ, α and β represent the parameters of the extreme value distribution cumulative probability distribution function.
7. The method of claim 5, wherein performing open-set identification on the high-dimensional features of the second test sample set using the extreme value distribution cumulative probability distribution function to obtain the identification result of the second test sample set comprises:

when F_k(||Θ(x_test) − O_k||₂) > τ holds for every class k, judging the test sample corresponding to the high-dimensional feature of the second test sample set to be an unknown class outside the database;

when the condition does not hold, judging the class of the test sample corresponding to the high-dimensional feature of the second test sample set to be k* = argmin_k ||Θ(x_test) − O_k||₂,

wherein τ is a preset decision threshold.
CN202211378461.3A 2022-11-04 2022-11-04 Extreme value distribution-based radar high-resolution range profile open-set identification method Pending CN115861676A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211378461.3A CN115861676A (en) 2022-11-04 2022-11-04 Extreme value distribution-based radar high-resolution range profile open-set identification method

Publications (1)

Publication Number Publication Date
CN115861676A true CN115861676A (en) 2023-03-28

Family

ID=85662505

Country Status (1)

Country Link
CN (1) CN115861676A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination