AU2021105247A4 - Deep transfer learning-based method for radar HRRP target recognition with small sample size - Google Patents
- Publication number
- AU2021105247A4
- Authority
- AU
- Australia
- Prior art keywords
- model
- layer
- loss function
- hrrp
- sample size
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/255—Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/41—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
- G01S7/411—Identification of targets based on measurements of radar reflectivity
- G01S7/412—Identification of targets based on measurements of radar reflectivity based on a comparison between measured values and known or stored values
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2137—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on criteria of topology preservation, e.g. multidimensional scaling or self-organising maps
- G06F18/21375—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on criteria of topology preservation, e.g. multidimensional scaling or self-organising maps involving differential geometry, e.g. embedding of pattern manifold
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/02—Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/41—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
- G01S7/415—Identification of targets based on measurements of movement associated with the target
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
To address the challenge of radar high resolution range profile (HRRP) target recognition with
a small sample size, the present disclosure provides a deep transfer learning-based method. Firstly, a
pre-trained model suitable for a target with a small sample size is designed, and a loss function that
can improve the generalization performance of the pre-trained model is proposed. The pre-trained
model is trained from scratch with source domain data (the simulated HRRP dataset). On the basis
of the pre-trained model, the structures of a fully connected layer and an output layer are reset and
initialized to constitute a fine-tuned model. During fine-tuning, to address the unsatisfactory recognition performance caused by the small size and class imbalance of the measured HRRP dataset, a loss function is proposed that reduces recognition deviations induced by imbalanced samples between classes and improves feature separability. With a small
sample size, compared with a convolutional neural network model trained from scratch, the
proposed method has an increased convergence rate and improved model stability while having
improved recognition accuracy.
ABSTRACT DRAWING
Pre-training process:
1. Construct and initialize model A.
2. Train model A with the N-class simulated HRRP target dataset until convergence.
Fine-tuning process:
3. Construct and initialize model B (the structure of the convolutional layers in model B is identical to that of model A).
4. Update parameters layer by layer until the loss function converges and stops declining.
5. Finish the algorithm and obtain the final recognition model.
FIG. 1
Description
[01] The present disclosure belongs to the technical field of radar automatic target recognition and addresses the problem of low accuracy of radar high resolution range profile (HRRP) target recognition with a small labeled sample size. Specifically, the present disclosure provides a deep transfer learning-based method for radar HRRP target recognition with a small sample size.
BACKGROUND ART
[02] For cooperative targets, it is easy to obtain sufficient HRRPs covering the complete angular domain. In practical applications, however, especially during a war, it may be difficult to obtain sufficient labeled HRRP target samples, because in a complicated electromagnetic environment the targets to be recognized are mostly non-cooperative targets with high mobility, and the class labels of HRRPs need to be interpreted by professionals. Therefore, radar HRRP target recognition with a small sample size is one of the most pressing problems in the field of radar target recognition.
[03] Existing methods of recognition with a small sample size may have the following disadvantages: 1) the model requires training samples with a complete angular domain, but in practical applications, with a small sample size, it is hard to guarantee that the training samples cover the complete angular domain of targets; 2) low degree-of-freedom models require few training samples but are low in recognition accuracy, while high degree-of-freedom models have high recognition accuracy but require numerous training samples; and the accuracy of recognition with few samples needs to be further improved. In view of the problems in the above-mentioned methods, a deep learning method is adopted to solve the problem of radar HRRP target recognition with a small sample size.
[04] Compared with shallow methods, deep networks allow efficient extraction of higher-order features of HRRPs. In most existing methods with deep networks, stacked auto-encoder models are used to extract deep features of a target, and the required number of samples is reduced by sharing global features of HRRPs. Compared with a stacked auto-encoder, a convolutional neural network has better target recognition performance. However, if such a model is trained from scratch with a small sample size, overfitting may occur. To address this problem, a deep transfer learning-based method for radar HRRP target recognition with a small sample size is proposed.
[05] To address the problem of low recognition rate of HRRPs with a small sample size, an objective of the present disclosure is to provide a deep transfer learning-based method for radar HRRP target recognition with a small sample size. The proposed method has an increased convergence rate and improved model stability while having improved recognition accuracy.
[06] The technical solutions of the present disclosure are as follows: training a pre-trained model from scratch with a source domain dataset; and fine-tuning the pre-trained model with a target domain dataset. To achieve the above objective, the following steps are performed.
[07] The process of pre-training includes:
[08] input: N-class simulated HRRP target dataset
[09] output: structure and weights of the convolutional layers in a pre-trained model
[10] step 1: constructing a pre-trained model according to model A as shown in FIG. 2, and initializing the weight values of the model, defining the weight value of a convolutional layer as θ_c = {k, b} and the weight value of a fully connected layer as W, with θ_c and W both normally distributed with a mean of 0 and a variance of 2/(n_i + n_o), where n_i and n_o represent the dimensions of the input vector and the output vector of the corresponding layer, respectively;
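The normal initialization with mean 0 and variance 2/(n_i + n_o) in step 1 is the Glorot (Xavier) scheme. A minimal sketch in plain Python; the 64-input, 50-output layer sizes are chosen only for illustration:

```python
import random

def init_layer(n_in, n_out, seed=0):
    """Draw an n_in x n_out weight matrix from N(0, 2/(n_in + n_out)),
    the distribution used for both the convolutional kernels and the
    fully connected weights of the pre-trained model."""
    rng = random.Random(seed)
    sigma = (2.0 / (n_in + n_out)) ** 0.5  # std dev = sqrt(variance)
    return [[rng.gauss(0.0, sigma) for _ in range(n_out)]
            for _ in range(n_in)]

# Example: a 50-neuron fully connected layer fed by a 64-dim input
W = init_layer(64, 50)
```

Keeping the variance tied to the fan-in and fan-out keeps activations and gradients on a comparable scale across layers, which is why the same rule is reused for both layer types.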
[11] step 2: forward propagation: calculating a loss function of mini-batch samples during each iteration according to a formula;
[12] step 3: back propagation: calculating a gradient by a chain rule and updating parameters by using a stochastic gradient descent algorithm; and
[13] step 4: repeating steps 2 and 3 until the loss function converges and stops declining, and then finishing the training process and saving the structure and weights of the model.
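Steps 2 to 4 describe a standard mini-batch stochastic gradient descent loop. The toy sketch below applies the same forward/backward/update cycle to a one-parameter mean-squared-error loss; the data, learning rate, batch size, and epoch count are illustrative, not values from the disclosure:

```python
import random

def sgd(data, w=0.0, lr=0.1, batch=4, epochs=50, seed=0):
    """Minimal mini-batch SGD: the forward pass computes the mini-batch
    loss (step 2), the backward pass computes its gradient via the chain
    rule, and the weight is updated (step 3); the loop repeats (step 4)."""
    rng = random.Random(seed)
    data = list(data)  # copy so shuffling does not mutate the caller's list
    for _ in range(epochs):
        rng.shuffle(data)
        for i in range(0, len(data), batch):
            mb = data[i:i + batch]
            # forward: mean squared error of the mini-batch
            loss = sum((w - x) ** 2 for x in mb) / len(mb)
            # backward: gradient of the mini-batch loss w.r.t. w
            grad = sum(2.0 * (w - x) for x in mb) / len(mb)
            w -= lr * grad  # stochastic gradient descent update
    return w

# On this toy loss, w converges to the mean of the data (2.5).
w = sgd([1.0, 2.0, 3.0, 4.0])
```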
[14] The process of fine-tuning includes:
[15] input: M-class measured HRRP target dataset
[16] output: a fine-tuned model for target recognition with a small sample size
[17] step 5: constructing a fine-tuned model according to model B as shown in FIG. 3, and initializing the weight values of the model, where the initial weight value of a convolutional layer is identical to the weight value of the convolutional layer saved in step 4 of the model pre-training process, and the weight value W of the fully connected layer is normally distributed: W ~ N(0, 2/(n_i + n_o));
[18] step 6: forward propagation: calculating a loss function of mini-batch samples during each iteration according to a formula;
[19] step 7: back propagation: calculating a gradient by the chain rule; first setting the learning rates of all convolutional layers to 0 and updating only the weight values of the fully connected layer and the output layer, then setting the learning rates of convolutional layers C4-C1 to non-zero values in order and updating the weight values layer by layer; and
[20] step 8: repeating steps 6 and 7 until the loss function converges and stops declining, and then finishing the training process and saving the structure and weights of the model.
[21] Compared with the prior art, the present disclosure has the following advantages:
[22] (1) The proposed model is a data-driven end-to-end model, and the trained model can automatically extract the deep features of a target.
[23] (2) A suitable pre-trained model based on the characteristic of a small sample size of the target domain is designed according to the proposed method, and a loss function that can improve the generalization performance of the pre-trained model is proposed.
[24] (3) During fine-tuning, to address the unsatisfactory recognition performance caused by the small size and class imbalance of the measured HRRP dataset, a loss function is proposed to reduce recognition deviations induced by imbalanced samples between classes and to improve feature separability.
[25] FIG. 1 is a flowchart of transfer learning according to an embodiment of the present disclosure.
[26] FIG. 2 is a structural diagram of a pre-trained model (model A) according to an embodiment of the present disclosure.
[27] FIG. 3 is a structural diagram of a fine-tuned model (model B) according to an embodiment of the present disclosure.
[28] The present disclosure is described in detail below with reference to the accompanying drawings.
[29] In the present disclosure, an N-class simulated HRRP target dataset is used as the source domain, while an M-class measured HRRP target dataset is used as the target domain. Therefore, the source domain (source task) and the target domain (target task) are different. The process of transfer learning is shown in FIG. 1. Firstly, a pre-trained model is designed based on the characteristics of the target domain and the source domain task and trained with the source domain. Secondly, a fine-tuned model based on the pre-trained model is designed according to the target domain task and trained with the target domain.
[30] The proposed method will be described and analyzed in detail below in two aspects: 1, the process of model pre-training, and 2, the process of model fine-tuning.
[31] 1 The process of model pre-training
[32] (1) Pre-trained model
[33] A deep convolutional neural network normally has to be trained from scratch with massive training data, because insufficient training data results in overfitting and poor generalization performance. Model depth also has a great influence on recognition accuracy: the shallow features of a deep convolutional neural network are lower-order structural features, while the deep features are higher-order semantic features, so a certain depth is required for good recognition performance. Given the problem of recognition with a small sample size, however, a pre-trained model with too many layers is unfavorable for the method proposed in the present disclosure. The pre-trained model (hereinafter referred to as model A) used in the proposed method has the structure shown in FIG. 2.
[34] Model A includes four convolutional layers, four pooling layers, one fully connected layer and one output layer. The first three convolutional layers have 16, 32 and 32 convolution kernels of size 3×1, respectively, and the fourth convolutional layer has 64 convolution kernels of size 1×1. The pooling layers are all maximum pooling layers with a stride of 2. The fully connected layer and the output layer have 50 and N neurons, respectively.
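With four stride-2 pooling layers, the feature length entering the fully connected layer is the input length divided by 16. A small sketch of that bookkeeping, assuming 'same'-padded convolutions and a 256-cell HRRP (the disclosure states neither the padding scheme nor the profile length):

```python
def feature_length(n_in, pools=4, stride=2):
    """Length of the feature map after model A's four stride-2 pooling
    layers, assuming the convolutions preserve length ('same' padding,
    an assumption; the patent does not state the padding scheme)."""
    n = n_in
    for _ in range(pools):
        n = n // stride  # each maximum pooling layer halves the length
    return n

# A 256-cell HRRP shrinks to 16 positions; with the 64 kernels of the
# fourth convolutional layer, the flattened vector fed to the fully
# connected layer would have 16 * 64 = 1024 values.
length = feature_length(256)
fc_in = length * 64
```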
[35] (2) Loss function
[36] HRRPs are susceptible to attitude angles, and the HRRPs of the same target at different attitude angles may differ greatly. HRRP samples corresponding to some attitude angles may contain a lot of scattering point information and be easy to recognize, while HRRP samples corresponding to other attitude angles may contain little scattering point information and be hard to recognize. Nevertheless, the HRRP samples corresponding to all attitude angles are equally important to target recognition and determine the generalization performance of a model. The accuracy of target recognition with a small sample size can be significantly improved by fine-tuning a pre-trained model with high generalization performance. To guarantee efficient extraction of invariant features over the complete attitude angles of HRRPs by the pre-trained model, the output probability of the class corresponding to HRRP samples that are difficult to recognize needs to be increased, and this requirement cannot be met by a cross entropy loss function.
[37] In view of the above problem, the method proposed in the present disclosure provides a fuzzy-truncation cross entropy loss function L_P, which consists of two parts. The first part is a fuzzy cross entropy loss function, which is mainly directed at the problem of overconfident model classification results: by fuzzifying the output results and narrowing the gap between the neurons, it allows the output result of every neuron to come into play during propagation, thereby avoiding model overconfidence. The second part is a truncation cross entropy loss function, which is mainly directed at the problem of a low output probability of the class corresponding to some HRRPs: by using a truncation function, it enables back propagation only of output results meeting the conditions and increases the weights of these HRRPs, allowing the model to efficiently extract features of confusable targets. The loss function L_P is expressed as follows:
[38] L_P = L_F + α·L_T    (1)

[39] L_F = -(1 - ε)·log( e^{W_y^T h} / Σ_{i=1}^{C} e^{W_i^T h} ) - (ε/C)·Σ_{i=1}^{C} log( e^{W_i^T h} / Σ_{j=1}^{C} e^{W_j^T h} )    (2)

[40] L_T = -Σ_{i=1}^{C} λ(y_i, ŷ_i)·y_i·log ŷ_i    (3)

[41] λ(y_i, ŷ_i) = 1 - θ(y_i - m)·θ(ŷ_i - m) - θ(1 - m - y_i)·θ(1 - m - ŷ_i)    (4)

[42] θ(x) = 1 for x > 0; 1/2 for x = 0; 0 for x < 0    (5)

[43] where L_F represents the fuzzy cross entropy loss function, L_T the truncation cross entropy loss function, α the weight of L_T, y = (y_1, y_2, …, y_C) the class label, ŷ = (ŷ_1, ŷ_2, …, ŷ_C) the output result of the output layer, λ(y_i, ŷ_i) the truncation function, m a truncation threshold, and θ(x) a unit step function; h is the input feature vector of the output layer, W_i the weight vector of the i-th output neuron, and ε the fuzzification factor. L_T can be involved in back propagation only when the output result satisfies 1 - m < ŷ_i < m.
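The truncation part of the loss can be sketched in plain Python as follows; the threshold m = 0.9 is illustrative, and the fuzzy term and the combination with weight α are not shown:

```python
import math

def step(x):
    """Unit step theta(x): 1 for x > 0, 1/2 at x = 0, 0 for x < 0."""
    return 1.0 if x > 0 else (0.5 if x == 0 else 0.0)

def trunc(y, y_hat, m=0.9):
    """Truncation function lambda(y, y_hat): it vanishes when the output
    is already confident (label and output both above m, or both below
    1 - m), so only outputs with 1 - m < y_hat < m keep a gradient."""
    return (1.0
            - step(y - m) * step(y_hat - m)
            - step(1.0 - m - y) * step(1.0 - m - y_hat))

def trunc_ce(y, y_hat, m=0.9, eps=1e-12):
    """Truncation cross entropy, summed over the C classes of one sample."""
    return -sum(trunc(yi, pi, m) * yi * math.log(pi + eps)
                for yi, pi in zip(y, y_hat))

# A confident correct prediction contributes nothing, so training effort
# concentrates on the hard-to-recognize attitude angles.
easy = trunc_ce([1.0, 0.0], [0.95, 0.05])  # contributes 0.0
hard = trunc_ce([1.0, 0.0], [0.60, 0.40])  # -log(0.6), about 0.51
```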
[44] 2 The process of model fine-tuning
[45] (1) Fine-tuned model
[46] Since the source domain and the target domain differ in dimensions, while the convolutional layers and the pooling layers place no requirement on input dimensions, the fine-tuned model (hereinafter referred to as model B) has the same structure as model A only in the convolutional layers and the pooling layers; the fully connected layer and the output layer both need to be reset. The initial weight values of the convolutional layers and the pooling layers of model B are the weight values of the trained model A, and the initial weight values of the fully connected layer and the output layer are normally distributed: W ~ N(0, 2/(n_i + n_o)), where n_i and n_o represent the dimensions of the input vector and the output vector of the corresponding layer, respectively. Model B has the structure shown in FIG. 3.
[47] Model B includes four convolutional layers, four pooling layers, one fully connected layer and one output layer. Since the parameters of the fully connected layer and the output layer need to be trained from scratch, to prevent overfitting, the number of neurons of the fully connected layer is set to 10, and the number of neurons of the output layer is identical to the number of classes of the target domain, namely M. After the completion of model initialization, the model is fine-tuned layer by layer by using the target domain dataset.
[48] In model B, the parameters of the convolutional layers C1-C4, the fully connected layer and the output layer can be updated through back propagation, with corresponding learning rates μ_C1-μ_C4, μ_F and μ_O, respectively. During fine-tuning, a layer can be frozen by setting its learning rate to zero, so that the weight values of this layer are not updated. Since the weight values of the fully connected layer and the output layer are not pre-trained, the learning rates μ_F and μ_O are kept greater than 0 throughout. Features extracted at shallow convolutional layers are mostly universal features suitable for most tasks, while features extracted at deep convolutional layers are semantic features directed toward a specific task. Therefore, fine-tuning mainly refers to a process of updating the pre-trained weight values of the convolutional layers backward one by one.
[49] The fine-tuning of model B specifically includes the following steps. Firstly, μ_C1-μ_C4 are all set to zero, and only the weight values of the fully connected layer and the output layer (which together can be regarded as a nonlinear classifier) are updated. Secondly, the learning rate μ_C4 of the convolutional layer C4 is set to a non-zero value (also known as releasing the convolutional layer), and training continues so that this layer is updated. The learning rates μ_C3-μ_C1 of the convolutional layers C3-C1 are then sequentially set to non-zero values, and the weight values are updated layer by layer. Owing to the coupling of features between adjacent layers, training a single layer in isolation can easily disrupt the features. To fine-tune the pre-trained model more effectively, one convolutional layer is released every 10 iterations during the above process until all the convolutional layers to be trained have been released.
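The release schedule described above (classifier always trainable, one convolutional layer released from C4 backward every 10 iterations) can be sketched as:

```python
def release_schedule(total_iters, release_every=10,
                     conv_layers=("C4", "C3", "C2", "C1")):
    """Which layers are trainable at each fine-tuning iteration.
    The fully connected and output layers are always trainable; one
    convolutional layer is released (its learning rate set non-zero)
    every `release_every` iterations, from the deepest layer backward."""
    schedule = []
    for it in range(total_iters):
        released = conv_layers[:it // release_every]
        schedule.append(("FC", "OUT") + tuple(released))
    return schedule

sched = release_schedule(50)
# iterations 0-9: only FC and OUT train; C4 joins at iteration 10,
# C3 at 20, C2 at 30, and C1 at 40.
```

Releasing layers gradually, rather than unfreezing everything at once, avoids disrupting the coupled features of adjacent pre-trained layers, which is the rationale the description gives for the 10-iteration cadence.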
[50] (2) Loss function
[51] In the case of a small sample size, samples are often imbalanced between classes. In back propagation, a class with a large number of samples makes up a high proportion of the loss function, easily biasing the optimization toward outputting classification results of this class. To address this problem, the present disclosure proposes a multi-class balanced loss function L_MB, which balances the proportions of different classes by reducing the weights of classes that are easy to classify. L_MB is expressed as follows:
[52] L_MB = -Σ_{i=1}^{C} (1 - ŷ_i)^γ·y_i·log ŷ_i    (6)

[53] where y = (y_1, y_2, …, y_C) represents the class label, ŷ = (ŷ_1, ŷ_2, …, ŷ_C) the output result of the output layer, and γ a hyper-parameter for adjusting the weight of the output.
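The multi-class balanced loss down-weights classes the model already classifies well through the hyper-parameter γ. The sketch below assumes a focal-loss-style form, L_MB = -Σ (1 - ŷ_i)^γ · y_i · log ŷ_i, which matches that stated behavior but is an assumption rather than necessarily the patent's verbatim formula:

```python
import math

def multiclass_balanced_loss(y, y_hat, gamma=2.0, eps=1e-12):
    """Balanced cross entropy for one sample: the factor (1 - y_hat)^gamma
    shrinks the contribution of classes the model already predicts
    confidently, so a class with many easy samples no longer dominates
    the gradient.  gamma = 0 recovers plain cross entropy."""
    return -sum(((1.0 - pi) ** gamma) * yi * math.log(pi + eps)
                for yi, pi in zip(y, y_hat))

# The same target class is penalized far less once it is confident.
confident = multiclass_balanced_loss([1, 0, 0], [0.9, 0.05, 0.05])
uncertain = multiclass_balanced_loss([1, 0, 0], [0.4, 0.3, 0.3])
```

Under this assumed form, the majority class's abundant well-classified samples contribute tiny loss terms, which is precisely the rebalancing effect the description attributes to L_MB.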
Claims (5)
1. A deep transfer learning-based method for radar high resolution range profile (HRRP) target recognition with a small sample size, comprising a pre-training process and a fine-tuning process, wherein in the pre-training process, a pre-trained model is trained from scratch with a source domain dataset; in the fine-tuning process, a fine-tuned model is constructed based on the pre-trained model and is fine-tuned with a target domain dataset; and the final fine-tuned model can be used for target recognition with a small sample size.
2. The deep transfer learning-based method for radar HRRP target recognition with a small sample size according to claim 1, wherein the pre-training process is performed by the following steps:
input: N-class simulated complete-angular-domain HRRP target dataset;
output: structure and weights of convolutional layers in the pre-trained model;
step 1: constructing the pre-trained model, initializing weight values of the model, defining the weight value of a convolutional layer as θ_c = {k, b} and the weight value of a fully connected layer as W, with θ_c and W both normally distributed with a mean of 0 and a variance of 2/(n_i + n_o), wherein n_i and n_o represent the dimensions of the input vector and the output vector of the corresponding layer, respectively;
step 2: forward propagation: calculating a loss function L_P of mini-batch samples during each iteration;
step 3: back propagation: calculating a gradient by the chain rule and updating parameters by using a stochastic gradient descent algorithm; and
step 4: repeating steps 2 and 3 until the loss function converges and stops declining, and then finishing the training process and saving the structure and weights of the model.
3. The deep transfer learning-based method for radar HRRP target recognition with a small sample size according to claim 2, wherein the loss function L_P in step 2 is specifically as follows:
L_P = L_F + α·L_T,
wherein the fuzzy cross entropy loss function L_F is specifically as follows:
L_F = -(1 - ε)·log( e^{W_y^T h} / Σ_{i=1}^{C} e^{W_i^T h} ) - (ε/C)·Σ_{i=1}^{C} log( e^{W_i^T h} / Σ_{j=1}^{C} e^{W_j^T h} );
the truncation cross entropy loss function L_T is specifically as follows:
L_T = -Σ_{i=1}^{C} λ(y_i, ŷ_i)·y_i·log ŷ_i;
α is the weight of L_T; y = (y_1, y_2, …, y_C) represents the class label; ŷ = (ŷ_1, ŷ_2, …, ŷ_C) represents the output result of the output layer; C represents the total number of classes; λ(y_i, ŷ_i) represents the truncation function, specifically expressed as
λ(y_i, ŷ_i) = 1 - θ(y_i - m)·θ(ŷ_i - m) - θ(1 - m - y_i)·θ(1 - m - ŷ_i);
m represents a truncation threshold; θ(x) represents a unit step function, specifically expressed as
θ(x) = 1 for x > 0; 1/2 for x = 0; 0 for x < 0;
and L_T is involved in back propagation only when the output result satisfies 1 - m < ŷ_i < m; the loss function enables the model to efficiently extract invariant features over the complete attitude angles of source domain HRRPs, so that the ability of the model to extract universal HRRP features is improved.
4. A deep transfer learning-based system for radar HRRP target recognition with a small sample size, based on the pre-training method according to claim 2 or 3, further comprising the following steps in the fine-tuning process:
input: M-class measured HRRP target dataset;
output: a fine-tuned model for recognition with a small sample size;
step 5: constructing a fine-tuned model, initializing weight values of the model, wherein the initial weight value of a convolutional layer is identical to the weight value of the convolutional layer saved in step 4 of the model pre-training process, and the weight value W of a fully connected layer is normally distributed: W ~ N(0, 2/(n_i + n_o));
step 6: forward propagation: calculating a loss function L_MB of mini-batch samples during each iteration;
step 7: back propagation: updating weight values layer by layer; and
step 8: repeating steps 6 and 7 until the loss function converges and stops declining, and then finishing the training process and saving the structure and weights of the model.
5. The deep transfer learning-based system for radar HRRP target recognition with a small sample size according to claim 4, wherein the loss function L_MB in step 6 is specifically as follows:
L_MB = -Σ_{i=1}^{C} (1 - ŷ_i)^γ·y_i·log ŷ_i,
wherein y = (y_1, y_2, …, y_C) represents the class label, ŷ = (ŷ_1, ŷ_2, …, ŷ_C) the output result of the output layer, and γ a hyper-parameter for adjusting the weight of the output; and the loss function is capable of reducing recognition deviations induced by imbalanced HRRP samples between different classes and greatly improving feature separability.
DRAWINGS
FIG. 1
FIG. 2
FIG. 3
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2021105247A AU2021105247A4 (en) | 2021-08-10 | 2021-08-10 | Deep transfer learning-based method for radar HRRP target recognition with small sample size |
Publications (1)
Publication Number | Publication Date |
---|---|
AU2021105247A4 (en) | 2021-10-07 |
Family
ID=77923908
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2021105247A Ceased AU2021105247A4 (en) | 2021-08-10 | 2021-08-10 | Deep transfer learning-based method for radar HRRP target recognition with small sample size |
Country Status (1)
Country | Link |
---|---|
AU (1) | AU2021105247A4 (en) |
- 2021-08-10 AU AU2021105247A patent/AU2021105247A4/en not_active Ceased
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114021459A (en) * | 2021-11-05 | 2022-02-08 | 西安晟昕科技发展有限公司 | Identification method of small sample radar radiation source |
CN114488140A (en) * | 2022-01-24 | 2022-05-13 | 电子科技大学 | Small sample radar one-dimensional image target identification method based on deep migration learning |
CN114488140B (en) * | 2022-01-24 | 2023-04-25 | 电子科技大学 | Small sample radar one-dimensional image target recognition method based on deep migration learning |
CN114821335A (en) * | 2022-05-20 | 2022-07-29 | 电子科技大学 | Unknown target discrimination method based on depth feature and linear discrimination feature fusion |
CN115792849A (en) * | 2022-11-23 | 2023-03-14 | 哈尔滨工程大学 | One-dimensional non-uniform array design method and system based on SAC algorithm |
CN115792849B (en) * | 2022-11-23 | 2024-09-06 | 哈尔滨工程大学 | One-dimensional non-uniform array design method and system based on SAC algorithm |
CN116540627A (en) * | 2023-02-07 | 2023-08-04 | 广东工业大学 | Machine tool thermal error prediction compensation group control method and system based on deep transfer learning |
CN116540627B (en) * | 2023-02-07 | 2024-04-12 | 广东工业大学 | Machine tool thermal error prediction compensation group control method and system based on deep transfer learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FGI | Letters patent sealed or granted (innovation patent) | ||
MK22 | Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry |