WO2024082374A1 - Small sample radar target recognition method based on hierarchical meta-transfer - Google Patents


Info

Publication number
WO2024082374A1
WO2024082374A1 (PCT/CN2022/133980)
Authority
WO
WIPO (PCT)
Prior art keywords
samples
category
meta
sample
feature
Prior art date
Application number
PCT/CN2022/133980
Other languages
English (en)
Chinese (zh)
Inventor
郭贤生
张玉坤
李林
司皓楠
钱博诚
钟科
黄健
Original Assignee
电子科技大学长三角研究院(衢州)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 电子科技大学长三角研究院(衢州) filed Critical 电子科技大学长三角研究院(衢州)
Publication of WO2024082374A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Definitions

  • The invention belongs to the technical field of radar target recognition, and in particular relates to a small sample radar target recognition method based on hierarchical meta-transfer.
  • Radar target recognition technology refers to the technology of using radar to detect targets and determine the type, model and other attributes of the target by analyzing the captured information. It shows great application potential in fields such as terrain exploration and battlefield reconnaissance.
  • As a branch of artificial intelligence, deep learning methods have attracted widespread attention from researchers due to their automatic and powerful feature extraction capabilities, which has promoted the emergence and advancement of intelligent radar target recognition technology.
  • However, deep learning model training often relies on a large number of labeled samples. Due to timeliness constraints and resource limitations, obtaining a large number of labeled samples consumes substantial manpower, material resources, and time. Therefore, using meta-learning to share knowledge in small sample scenarios and thereby improve target recognition performance is one of the current research hotspots in the field of radar target recognition.
  • the purpose of the present invention is to provide a small sample radar target recognition method based on hierarchical meta-transfer to overcome the above-mentioned shortcomings.
  • the present invention extracts features based on the attention mechanism, and hierarchical deep knowledge transfer at the feature level, sample level, and task level to seek an embedding space that makes the sample close to the category atoms of the same type of target and far away from the category atoms of other types of targets.
  • a feature encoder based on the attention mechanism is designed at the feature level to fully exploit the global domain-invariant features of the sample to overcome the domain difference problem of the sample in the data distribution; an atom encoder is designed at the sample level to generate more stable category atoms to avoid the influence of outlier samples; at the task level, a meta-learner is designed to accumulate the learning experience of the training task and transfer it to the new task, cultivate the model's ability to transfer knowledge across tasks, and realize meta-transfer target recognition. Therefore, the small sample radar target recognition method based on hierarchical meta-transfer proposed in the present invention is an intelligent target recognition method.
  • A small sample radar target recognition method based on hierarchical meta-transfer includes the following steps:
  • P is the total number of tasks; each task includes a support set and a query set, where the support set is composed of labeled samples extracted from the source domain, and the query set is composed of labeled samples extracted from the target domain;
  • the deep global features of the query set samples and the distances between atoms of different categories are used to obtain the probability that the corresponding samples belong to different categories.
  • the meta-learner loss function is designed based on the probability, and the meta-learner is updated by minimizing the loss function to obtain the updated meta-learner.
  • In step S4, all training tasks are completed by repeating step S3, yielding the meta-learner trained on all meta-training tasks.
  • the trained meta-learner is recorded as
  • the labeled samples of the task to be tested are the support set, and the unlabeled samples to be tested are the query set; the meta-learner obtained in S4 is used for initialization
  • a feature encoder for target recognition and a category atom encoder are obtained, and the feature encoder for target recognition is used to extract deep global features for the support set and query set samples.
  • The category atom encoder for target recognition is used to calculate and update the category atoms based on the deep global features of the support set, and the distance function dist(·) is used to calculate the distance between the deep global features of each sample to be tested in the query set and the atoms of the different categories; the label of the closest category atom is selected as the predicted label of the sample to be tested to obtain the recognition result.
  • The support set is constructed by extracting labeled samples from the source domain in K-way N-shot form, where K-way N-shot means randomly extracting N labeled training samples from each of the K target categories (the nth such sample of the kth class forms one support element); the query set is composed of labeled samples extracted from the target domain in K-way M-shot form (the mth such sample of the kth class forms one query element); the samples in the support set and query set are samples of the same target classes observed in different domains, with corresponding class labels.
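The K-way N-shot / K-way M-shot episode construction described above can be sketched as follows. This is an illustrative sketch, not part of the disclosure: the `source`/`target` dictionary format (class label to list of samples) and the function name are assumptions; the patent only fixes the extraction rule.

```python
import random

def sample_episode(source, target, K=3, N=5, M=15, seed=None):
    """Sample one meta-training task (episode).

    source/target: dict mapping class label -> list of samples in the
    source domain (for the support set) and target domain (for the
    query set). Hypothetical container format.
    """
    rng = random.Random(seed)
    # Randomly pick K classes present in both domains.
    classes = rng.sample(sorted(set(source) & set(target)), K)
    # K-way N-shot support from the source domain.
    support = [(x, k) for k in classes for x in rng.sample(source[k], N)]
    # K-way M-shot query from the target domain, same classes.
    query = [(x, k) for k in classes for x in rng.sample(target[k], M)]
    return support, query
```

With K=3, N=5, M=15 (as in the embodiment), one episode yields 15 support pairs and 45 query pairs over the same three classes.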
  • The feature encoder includes a neural network module and an attention mechanism module.
  • the specific method of extracting deep global features is as follows:
  • The generalized features are divided into blocks and each block is flattened into a vector.
  • The dimension of each vector is d1, and all vectors are recorded as [b1, b2, ..., bR]^T, where R is the number of blocks.
  • A learnable vector b0 of the same dimension is prepended to represent the global features of the entire sample.
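The block-and-flatten tokenization with a prepended learnable global vector b0 can be illustrated as follows. This is a NumPy sketch under assumptions: the feature map is a C×H×W array, blocks are square and non-overlapping, and b0 is randomly initialized here although it would be a learned parameter in practice.

```python
import numpy as np

def to_patch_tokens(feature_map, block):
    """Split a CxHxW feature map into non-overlapping square blocks,
    flatten each into a d1-dimensional vector, and prepend a global
    token b0 (randomly initialised here; learnable in a real model)."""
    C, H, W = feature_map.shape
    tokens = []
    for i in range(0, H, block):
        for j in range(0, W, block):
            # Each block keeps all C channels: d1 = C * block * block.
            tokens.append(feature_map[:, i:i + block, j:j + block].reshape(-1))
    b0 = np.random.randn(tokens[0].size) * 0.02  # stand-in for a learned vector
    return np.stack([b0] + tokens)  # shape (R + 1, d1)
```

For an 8×8×8 feature map with 4×4 blocks, this yields R = 4 blocks of dimension d1 = 128, plus the global token.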
  • Feature B is first mapped to a high-dimensional space through a fully connected layer, and then mapped back to a low-dimensional space to obtain the deep feature, where:
  • d2 is the dimension of the high-dimensional space;
  • d1 is the dimension of the low-dimensional space.
  • The support set and query set are feature-encoded to obtain the deep global features of each task's support set and query set.
  • step S32 the specific method of updating the category atom encoder and the category atom is:
  • The sample-level global features are transformed back to d1 dimensions through the linear mapping LN(·) and combined with the deep global features through a residual structure.
  • The combined features are first mapped to a high-dimensional space of dimension d2 and then mapped back to a low-dimensional space of dimension d1 to obtain deep features.
  • A residual structure combines these with the previous features to obtain the sample-level deep global features.
  • sample-level deep global features are averaged to obtain the sample-level category atoms
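Averaging the sample-level deep global features of each class into a category atom amounts to prototype averaging, sketched here (illustrative NumPy code; the feature/label container format is assumed):

```python
import numpy as np

def category_atoms(features, labels):
    """Compute one category atom per class as the mean of that class's
    sample-level deep global features (prototype averaging)."""
    features = np.asarray(features)
    labels = np.asarray(labels)
    atoms = {}
    for k in np.unique(labels):
        atoms[k] = features[labels == k].mean(axis=0)  # class-wise mean
    return atoms
```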
  • step S33 the specific method for updating the meta-learner is:
  • margin is the set threshold;
  • the balance parameter weights the classification and contrastive loss terms;
  • the meta-learner is updated by minimizing the loss function to obtain the updated meta-learner
  • the beneficial effects of the present invention are as follows: for small sample target recognition scenarios, the present invention fully mines the global features of samples at the feature level, fully explores the robustness features of different samples of the same target at the sample level, and designs a meta-learner at the task level to effectively accumulate learning experience of different tasks.
  • the quality of feature information is improved, the negative impact of outlier samples is reduced, the autonomous learning ability of the model is cultivated, and the robustness of small sample target recognition technology is improved.
  • the small sample radar target recognition method based on hierarchical meta-transfer proposed by the present invention is an intelligent radar target recognition method.
  • FIG. 1 is a flow chart of an algorithm of the present invention.
  • FIG. 2 is a comparison chart of the recognition accuracy of the background technology method and the method of the present invention.
  • the present invention designs a small sample radar target recognition method based on hierarchical meta-transfer, including feature level, sample level and task level.
  • an attention mechanism is used to construct a feature encoder to extract more important features in a single sample;
  • an attention mechanism is used to construct an atom encoder, and high-quality category atoms are generated as representative information of the corresponding category by integrating the information of different samples of the same type of target.
  • a meta-learner is constructed to acquire autonomous learning ability by accumulating learning experience of different meta-training tasks.
  • the trained meta-learner is further optimized based on a small number of labeled samples to generate high-quality category atoms for target recognition.
  • the sample to be tested is compared with the category atom, and the category of the category atom with the highest similarity is selected as the predicted category of the test sample to complete the recognition of the test sample.
  • This example is a practical application of the method according to the present invention.
  • synchronous initialization is performed when establishing the feature encoder and the category atom encoder so that they can be processed faster.
  • Step 1 Collect and preprocess original image samples in the source domain and target domain respectively, and preliminarily filter out redundant information of the target background to prepare for training the model.
  • The radar acquires original images of each stationary target at different pitch angles; at each fixed pitch angle, the target is observed at different azimuth angles. The acquired images are assigned to the source domain or target domain according to pitch angle, and are cropped and preprocessed.
  • Step 2 Use samples to build training tasks
  • Each task includes a support set and a query set to train an object recognition model with autonomous learning capabilities.
  • Each task is a K-way classification task.
  • The support set is composed of labeled samples extracted from the source domain in K-way N-shot form, where K-way N-shot means randomly extracting N labeled training samples from each of the K target categories (the nth such sample of the kth class); the query set is composed of labeled samples extracted from the target domain in K-way M-shot form (the mth such sample of the kth class).
  • The samples in the support set and query set should be samples of the same target classes in different domains, with corresponding class labels.
  • Step 3 In order to accumulate learning experience from different tasks and cultivate the model's ability to learn autonomously, the meta-learner is trained and learned through the hierarchical meta-transfer model.
  • the hierarchical meta-transfer model is composed of feature level, sample level and task level, specifically:
  • Step 31: Design a feature encoder at the feature level. For the training task obtained in step 2, features are extracted from the support set and query set respectively to explore the deep information of the samples for identification. The specific steps of step 31 are:
  • Step 31-1 Design feature encoder at feature level
  • the feature encoder consists of a neural network module and an attention mechanism module.
  • the neural network module has a strong feature extraction capability and can mine the deep features of the sample.
  • the attention mechanism module is to enable the model to selectively focus on the important information in the sample and improve the efficiency of the model's information processing.
  • Step 31-2 Use the neural network module and attention mechanism to extract the deep global features of the sample.
  • the specific steps are as follows:
  • Step 31-2-1: Use the convolutional neural network module conv(·) to extract generalized features from the support set samples.
  • The support set sample symbol is abbreviated as S.
  • The feature extraction process is as follows:
  • Step 31-2-2: Divide the sample generalized features obtained in step 31-2-1 into blocks and flatten them into vectors.
  • The dimension of each vector is d1.
  • All vectors are recorded as [b1, b2, ..., bR]^T, where R is the number of blocks.
  • Step 31-2-3 To further filter out redundant information, the feature B obtained in step 31-2-2 is transformed and reduced to different d-dimensional embedding subspaces:
  • V = BW_v (4)
  • Step 31-2-4: To alleviate gradient vanishing, the global features obtained in step 31-2-3 are transformed back to d1 dimensions through the linear mapping LN(·), and a residual structure combines them with the features obtained in step 31-2-2:
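Steps 31-2-3 and 31-2-4 together form a single-head self-attention block with a residual connection. A minimal NumPy sketch follows; the weight names Wq, Wk, Wv, Wo and the scaled dot-product form are assumptions consistent with standard attention, not taken verbatim from the disclosure:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_block(B, Wq, Wk, Wv, Wo):
    """Single-head self-attention over the token matrix B of shape
    (R+1, d1): project B into query/key/value embedding subspaces,
    mix values with attention weights, map back to d1 (the role the
    text assigns to LN(.)), and add the residual connection."""
    Q, K, V = B @ Wq, B @ Wk, B @ Wv             # d-dim embedding subspaces
    A = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # scaled dot-product weights
    return B + (A @ V) @ Wo                      # residual structure
```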
  • Step 31-3: Since information in a high-dimensional space is richer, a fully connected layer maps the features obtained in step 31-2 to a high-dimensional space, whose dimension is denoted d2, and another fully connected layer maps them back to the original dimension d1.
  • Each fully connected layer is followed by an activation function to learn more abstract deep features and enhance the expressiveness of the information. To avoid the gradient vanishing problem, a residual structure combines the result with the features obtained in step 31-2 to obtain the deep global features:
  • Step 31-4: The support set and query set of the task are feature-encoded to obtain the deep global features of the support set and query set.
  • Step 32: Design an attention-based category atom encoder at the sample level, and calculate the updated category atoms in the current training task to provide reliable representative information for target recognition. The specific steps of step 32 are as follows:
  • Step 32-1: For the task, design a category atom encoder at the sample level, and initialize it with the category atom encoder in the current meta-learner:
  • Step 32-2: Use the category atom encoder obtained in step 32-1 to calculate the category atoms from the deep global features of the support set samples obtained in step 31. The specific steps are as follows:
  • Step 32-2-1: To remove redundant information and explore the deep features of samples in different embedding subspaces, the deep global features of the support set samples are transformed and reduced to d dimensions respectively:
  • Step 32-2-2: To alleviate gradient vanishing, the sample-level global features obtained in step 32-2-1 are transformed back to d1 dimensions through the linear mapping LN(·), and a residual structure combines them with the support set deep global features obtained in step 31:
  • Step 32-2-3: Since information in a high-dimensional space is richer, a fully connected layer maps the features obtained in step 32-2-2 to a high-dimensional space of dimension d2, and another fully connected layer maps them back to the original dimension d1.
  • Each fully connected layer is followed by an activation function to learn more abstract deep features and enhance the expressiveness of the information. To avoid the gradient vanishing problem, a residual structure combines the result with the features obtained in step 32-2-2 to obtain the sample-level deep global features:
  • Step 32-2-4 Average the sample-level deep global features obtained in step 32-2-3 to obtain the category atoms after the sample-level attention mechanism exploration
  • Step 32-2-5: Calculate all the category atoms in the task, each following the processing flow of steps 32-2-1 to 32-2-4.
  • Step 32-3: From the deep global features of the support set samples obtained in step 31 and their distances to the category atoms obtained in step 32-2, the probability that a sample is judged as category k is obtained:
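One common way to turn atom distances into class probabilities is a softmax over negative distances. The sketch below assumes this form and a Euclidean dist(·); neither choice is fixed by the text:

```python
import numpy as np

def class_probabilities(feature, atoms):
    """Probability that a sample belongs to each class: softmax over the
    negative distances between its deep global feature and the category
    atoms (Euclidean distance assumed here)."""
    keys = sorted(atoms)
    d = np.array([np.linalg.norm(feature - atoms[k]) for k in keys])
    w = np.exp(-d)            # closer atom -> larger weight
    return dict(zip(keys, w / w.sum()))
```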
  • dist(·) is the distance function
  • Step 32-4 Design and minimize the category atom loss function according to probability to update the category atom encoder and category atoms. The specific steps are as follows:
  • Step 32-4-1: Design the following loss function so that the probability of each sample being judged as its true category k is as large as possible, yielding a model with recognition ability. Minimize the loss function and update the category atom encoder:
  • Step 32-4-2: Record the updated model and the corresponding updated category atoms.
  • Step 33: Accumulate the learning experience of the current training task at the task level and update the meta-learner, so that the meta-learner acquires autonomous learning capabilities to cope with new target recognition tasks. The specific steps of step 33 are:
  • Step 33-1: From the deep global features of the query set samples obtained in step 31 and their distances to the category atoms obtained in step 32, the probability that a query sample is judged as category k is obtained:
  • dist(·) is the distance function
  • Step 33-2: Design a meta-learner loss function based on the probability, and minimize it to update the meta-learner. The specific steps are:
  • Step 33-2-1 Design the meta-learner classification loss function based on the classification probability obtained in step 33-1:
  • Step 33-2-2: To improve the separability of samples and enhance the recognition performance of the model, training also uses a contrastive loss, defined as follows:
  • margin is the set threshold. This constraint reduces the distance between a sample's features and its corresponding category atom, while the distance to atoms of other categories should be as large as possible.
  • Step 33-2-3 Combine the loss functions of step 33-2-1 and step 33-2-2 to obtain the total meta-learner loss function:
  • The balance parameter weights the two loss terms. Minimizing the meta-learner loss function updates the meta-learner, thereby accumulating the learning experience of the task.
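A sketch of the total meta-learner loss of step 33-2-3: a classification term (negative log-probability of the true class, with probabilities taken as a softmax over negative distances) plus a margin-based contrastive term. The Euclidean distance, the squared-hinge form of the contrastive term, and the balance parameter name `lam` are all assumptions, since the disclosure leaves these symbols unstated:

```python
import numpy as np

def meta_learner_loss(features, labels, atoms, margin=1.0, lam=0.5):
    """Classification loss + lam * contrastive loss.

    The contrastive term pulls each sample toward its own category atom
    and pushes it at least `margin` away from atoms of other categories.
    """
    keys = sorted(atoms)
    cls_loss, con_loss = 0.0, 0.0
    for f, y in zip(features, labels):
        d = np.array([np.linalg.norm(f - atoms[k]) for k in keys])
        p = np.exp(-d) / np.exp(-d).sum()        # softmax over -distances
        cls_loss += -np.log(p[keys.index(y)])    # classification term
        for k, dk in zip(keys, d):
            if k == y:
                con_loss += dk ** 2                    # pull to own atom
            else:
                con_loss += max(0.0, margin - dk) ** 2 # push other atoms away
    n = len(features)
    return cls_loss / n + lam * con_loss / n
```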
  • Step 5: The labeled samples of the task to be tested are called the support set, and the unlabeled samples to be tested are called the query set. To identify the samples to be tested, the specific steps of step 5 are:
  • Step 5-1: Process the task to be tested based on the learning experience accumulated on the training tasks, initialize the model for the task to be tested according to step 31, and extract deep global features for the support set and query set samples.
  • Step 5-2: Initialize the model for the task to be tested according to step 32, and calculate and update the category atoms using the support set;
  • Step 5-3: Use the distance function dist(·) to calculate the distances between the deep global features of the query set samples and the atoms of the different categories, select the label of the closest category atom as the predicted label of the sample to be tested, and obtain the recognition result.
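Step 5-3 reduces to nearest-atom classification, sketched below (Euclidean distance assumed for dist(·)):

```python
import numpy as np

def predict(feature, atoms):
    """Assign the label of the category atom closest to the query
    sample's deep global feature."""
    return min(atoms, key=lambda k: np.linalg.norm(feature - atoms[k]))
```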
  • The implemented model is evaluated on the MSTAR (Moving and Stationary Target Acquisition and Recognition) dataset.
  • The sensor of this dataset is a high-resolution spotlight synthetic aperture radar, operating in HH polarization mode in the X-band, with a resolution of 0.3 m × 0.3 m.
  • Most of the data are SAR slice images of stationary vehicles, containing ten types of targets: BMP2, T72, BTR70, 2S1, BRDM2, BTR60, D7, T62, ZIL131, and ZSU234. Seven target types form the meta-training tasks, and the remaining three types are used to construct the task to be tested.
  • the sample data observed at a pitch angle of 17° is used as the source domain sample, and the sample data observed at a pitch angle of 15° is used as the target domain sample.
  • The specific number of samples in the experiment is shown in Table 1.
  • The sample images are center-cropped to a size of 64 × 64.
  • This case uses a 3-way classification task, that is, each meta-training task and the task to be tested contains three target types.
  • 3 of the 7 types of targets are randomly selected to form the meta-training task.
  • Samples are randomly extracted from the source domain in 3-way 5-shot form to build the task's support set, that is, 5 samples are randomly extracted from each of the 3 target types in the source domain;
  • the query set is composed of samples randomly extracted from the target domain in 3-way 15-shot form, that is, 15 samples are randomly extracted from each of the 3 target types in this task.
  • the samples in the support set and the query set are all labeled samples.
  • samples of the target category to be tested are randomly extracted to form the task to be tested, where the support set comes from the source domain and is the labeled samples observed at a pitch angle of 17°, and the query set comes from the target domain and is the samples to be tested observed at a pitch angle of 15°.
  • this case also simulates target domain samples under different noise environments.
  • A certain percentage of pixels is randomly selected from the test samples of the query set in the test task, and these pixels are corrupted by replacing their intensities with independent samples drawn from a uniform distribution.
  • The added random noise obeys a uniform distribution on [0, ηmax], where ηmax is the maximum pixel value in the image.
  • the selected pixel ratios are 0%, 5%, and 15%, respectively, representing the target domains under different noise environments, where 0% represents the test samples constructed from the 15° pitch angle observation samples in the original data set.
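The noise protocol above (replace a fixed ratio of randomly chosen pixels with draws from a uniform distribution on [0, maximum pixel value]) can be reproduced as follows. This is an illustrative sketch; the function name and the NumPy generator usage are mine, not from the disclosure:

```python
import numpy as np

def corrupt(image, ratio, rng=None):
    """Replace `ratio` of randomly chosen pixels with independent draws
    from U[0, eta_max], where eta_max is the image's maximum pixel value
    (ratios 0%, 5%, 15% in the experiment)."""
    rng = np.random.default_rng(rng)
    out = image.astype(float).copy()
    flat = out.ravel()  # view into out, so assignment mutates out
    n_noisy = int(round(ratio * flat.size))
    idx = rng.choice(flat.size, size=n_noisy, replace=False)
    flat[idx] = rng.uniform(0.0, image.max(), size=n_noisy)
    return out
```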
  • the present invention designs experiments in different noise environments for small sample target recognition to verify the superiority of the proposed algorithm, and compares the recognition results of the background technology method and the method of the present invention on the task to be tested.
  • the neural network module of the feature encoder consists of four convolutional layers, and the maximum pooling operation is used after each convolutional layer to reduce the size of the model and improve the calculation speed.
  • Table 2 shows the detailed parameters of each convolutional layer and pooling operation, including the size of the convolution kernel, the step size during convolution, the padding size, and the size of the pooling window.
  • The background technology methods all show declines of varying degrees.
  • The recognition accuracy of background technology method 1 in the 0% and 15% noise environments is 77.43% and 71.66% respectively, and that of the other background technology method is 71.67% and 68.1%, while the method of the present invention still maintains a high recognition rate, with accuracies of 83.86%, 82.24%, and 81.92% in the 0%, 5%, and 15% noise environments respectively, which is a clear advantage.
  • the experimental results prove that the present invention effectively explores the deep global features of samples in small sample target recognition scenarios, cultivates the autonomous learning ability of the model, establishes a more stable meta-learning model, and improves target recognition performance.
  • Table 2. Parameters of each convolutional layer and pooling operation:
    Layer 1: convolution kernel 5×5, stride 1, padding 0, pooling window 2×2
    Layer 2: convolution kernel 3×3, stride 1, padding 0, pooling window 2×2
    Layer 3: convolution kernel 3×3, stride 1, padding 1, pooling window 2×2
    Layer 4: convolution kernel 3×3, stride 1, padding 1, pooling window 2×2
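Under the standard convolution output-size formula, floor((size + 2·padding − kernel)/stride) + 1, followed by 2×2 max pooling, the Table 2 parameters map a 64×64 input through progressively smaller feature maps. The small sketch below traces this; the shape arithmetic is standard, not quoted from the disclosure:

```python
def trace_shapes(size=64):
    """Trace the spatial size of a square input through the four
    conv + 2x2 max-pool stages of Table 2."""
    layers = [(5, 1, 0), (3, 1, 0), (3, 1, 1), (3, 1, 1)]  # (kernel, stride, padding)
    sizes = []
    for k, s, p in layers:
        size = (size + 2 * p - k) // s + 1  # convolution output size
        size //= 2                          # 2x2 max pooling halves the size
        sizes.append(size)
    return sizes
```

For a 64×64 input this yields feature-map sizes of 30, 14, 7, and 3 after the four stages.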


Abstract

The present invention belongs to the technical field of target recognition, and in particular relates to a small sample radar target recognition method based on hierarchical meta-transfer. In the present invention, features are extracted based on an attention mechanism, and hierarchical deep knowledge transfer is performed at the feature level, sample level, and task level to seek an embedding space in which a sample is close to the category atom of targets of the same category and far from the category atoms of targets of other categories. At the feature level, a feature encoder based on the attention mechanism is designed and globally domain-invariant representations of samples are fully exploited, so that the problem of domain differences in the data distribution of samples is overcome; at the sample level, an atom encoder is designed and more stable category atoms are generated, so that the influence of outlier samples is avoided; and at the task level, a meta-learner is designed, the learning experience of the training tasks is accumulated and transferred to new tasks, and the cross-task knowledge transfer capability of the model is cultivated, so that meta-transfer-based target recognition is achieved. The target recognition method of the present invention is an intelligent target recognition method.
PCT/CN2022/133980 2022-10-19 2022-11-24 Small sample radar target recognition method based on hierarchical meta-transfer WO2024082374A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211276348.4 2022-10-19
CN202211276348.4A CN115345322B (zh) 2022-10-19 2022-10-19 Small sample radar target recognition method based on hierarchical meta-transfer

Publications (1)

Publication Number Publication Date
WO2024082374A1 true WO2024082374A1 (fr) 2024-04-25

Family

ID=83957489

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/133980 WO2024082374A1 (fr) 2022-10-19 2022-11-24 Small sample radar target recognition method based on hierarchical meta-transfer

Country Status (2)

Country Link
CN (1) CN115345322B (fr)
WO (1) WO2024082374A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115345322B (zh) * 2022-10-19 2023-02-07 电子科技大学长三角研究院(衢州) Small sample radar target recognition method based on hierarchical meta-transfer

Citations (6)

Publication number Priority date Publication date Assignee Title
US20210003700A1 (en) * 2019-07-02 2021-01-07 Wuyi University Method and apparatus for enhancing semantic features of sar image oriented small set of samples
CN112990334A (zh) * 2021-03-29 2021-06-18 西安电子科技大学 Small sample SAR image target recognition method based on an improved prototype network
CN114488140A (zh) * 2022-01-24 2022-05-13 电子科技大学 Small sample radar one-dimensional image target recognition method based on deep transfer learning
CN114492581A (zh) * 2021-12-27 2022-05-13 中国矿业大学 Method applying transfer learning and attention-mechanism meta-learning to small sample image classification
CN114511739A (zh) * 2022-01-25 2022-05-17 哈尔滨工程大学 Task-adaptive small sample image classification method based on meta-transfer learning
CN115345322A (zh) * 2022-10-19 2022-11-15 电子科技大学长三角研究院(衢州) Small sample radar target recognition method based on hierarchical meta-transfer

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN114387524B (zh) * 2022-03-24 2022-06-03 军事科学院系统工程研究院网络信息研究所 Image recognition method and system for small sample learning based on multi-level second-order representations
CN114859316A (zh) * 2022-06-14 2022-08-05 中国人民解放军海军航空大学 Intelligent radar target recognition method based on task-relevance weighting
CN114879185A (zh) * 2022-06-14 2022-08-09 中国人民解放军海军航空大学 Intelligent radar target recognition method based on task-experience transfer


Also Published As

Publication number Publication date
CN115345322A (zh) 2022-11-15
CN115345322B (zh) 2023-02-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22962553

Country of ref document: EP

Kind code of ref document: A1