CN114612729A - Image classification model training method and device based on SAR image - Google Patents
- Publication number
- CN114612729A (application CN202210340575.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- infrared
- visible light
- sar
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/241 — Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/253 — Pattern recognition; fusion techniques of extracted features
- G06N3/045 — Neural networks; architecture; combinations of networks
- G06N3/08 — Neural networks; learning methods
Abstract
The invention provides an SAR-image-based image classification model training method and device. Multi-source images of the same target at the same angle are acquired, comprising SAR, infrared and visible light images together with corresponding image classification data. An auxiliary neural network trained on the infrared and visible light images produces infrared-visible features that assist the training of the SAR-image-based classification model; the infrared and visible light images serve only as auxiliary modes during training, and in practical application the network's input is the SAR single-mode image alone. By using a multitask learning method, the invention improves the accuracy of image classification based on SAR images and offers a way to overcome the limitations of the single-mode SAR image in practical application.
Description
Technical Field
The invention relates to the field of images and deep learning, in particular to an image classification model training method and device based on an SAR image.
Background
Synthetic Aperture Radar (SAR) is an active imaging technique based on the Doppler frequency shift theory and the radar coherence principle. SAR has strong penetrating capability and can effectively detect camouflaged targets, and because its imaging is not limited by light, climate, cloud or fog, it has high practical value in fields such as military reconnaissance, geographic mapping and disaster monitoring. However, limited by resolution, a single-mode image classification network based on SAR images has certain limitations.
An infrared image can distinguish a target from its background and maintains good imaging quality around the clock, while a visible light image has high spatial resolution and provides finer texture details. Fusing the two combines the thermal radiation information of the infrared image with the fine detail of the visible light image. Infrared-visible image fusion performs well in image processing: the bimodal image captures scene information from multiple aspects and yields rich target information. A multi-modal fusion image classification network based on the infrared and visible light modes can therefore achieve excellent performance on image classification tasks.
A Convolutional Neural Network (CNN) is a feed-forward neural network whose structure makes it particularly effective for image processing, especially large-scale image processing, so CNNs are widely used in applications such as image recognition and object detection. Compared with other network structures, a CNN also has clear advantages in computational complexity, which further contributes to its wide adoption.
Infrared and visible light images and SAR images belong to different modes with very different imaging mechanisms, so their applications differ. SAR can provide rich target information and is hardly affected by weather, but the single-mode SAR image is limited by resolution in practical application, while the infrared-visible bimodal fusion classification network achieves better classification accuracy. A network that classifies images based on SAR images with higher accuracy, assisted by the infrared and visible light modes, is therefore needed: it can improve the accuracy of the SAR single-mode classification network to a certain extent, and it also suggests a route to performance improvements for other applications based on the SAR single mode.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an SAR-image-based image classification model training method and device. To address the limitation of the single-mode SAR image in practical application, infrared-visible features are used as prior knowledge to improve the SAR single-mode classification network; in practical application the network needs only the SAR image as input, without participation of other modes. The invention uses a multitask learning method so that the SAR-image-based classification network learns richer image information and thereby improves classification accuracy.
The purpose of the invention is realized by the following technical scheme:
the method comprises the following steps:
step S1, multi-source image data including SAR, infrared and visible light image data and image classification data are obtained aiming at the same target;
step S2, constructing and training an auxiliary neural network by using the acquired infrared and visible light images, wherein the auxiliary neural network takes the preprocessed infrared and visible light images as input and takes the predicted image classification result and the infrared-visible light characteristics as output, and the method specifically comprises the following steps:
step S21: constructing an auxiliary neural network: the main convolutional layers of a ResNet-50 network structure extract the infrared and the visible light single-mode features respectively; a convolutional layer fuses them to obtain the infrared-visible feature; and the prediction layer of the ResNet-50 structure serves as the prediction layer, taking the infrared-visible feature as input and outputting the predicted image classification result;
step S22: training an auxiliary neural network by using the acquired infrared and visible light images and image classification data;
step S23: extracting and storing the infrared-visible light characteristics of the auxiliary neural network trained in the step S22;
step S3, constructing and training a target network by using the acquired SAR image and the infrared-visible features saved in step S23; the target network needs only the SAR single-mode image as input and outputs an image classification result and a fitted infrared-visible feature; the specific steps are as follows:
step S31: constructing a target network by adopting a DenseNet-121 network structure;
step S32: training the target network by using the acquired SAR image, the image classification data and the infrared-visible features saved in step S23; the two target outputs of the network are the image classification result and the infrared-visible features, the errors between the two outputs and their corresponding true values form the actual error of the network, and the loss is expressed as:
Loss = loss1 + loss2 · φ
where loss1 is the loss between the classification result predicted by the network and the true classification result, loss2 is the loss between the infrared-visible features and the infrared-visible features fitted by the network, and φ is a hyperparameter balancing loss1 and loss2; the two losses jointly determine the weight updates of the target network, yielding an SAR-image-based image classification model.
Preferably, the classification accuracy of the auxiliary neural network is required to be at least 0.95, and its image classification accuracy is required to exceed that of the SAR single-mode network by at least two percentage points.
Preferably, in the target network loss function, loss1 uses the cross-entropy loss function and loss2 uses the mean square error loss function.
Preferably, the target network is a multitask target network.
Preferably, in the multitask target network, the objective function is:

$$\min_{U,A,b}\ \sum_{i=1}^{m}\sum_{j=1}^{n_i} L\!\left(y_j^i,\ a_i^{T}U^{T}x_j^i+b_i\right)+\lambda\|A\|_{2,1}^{2}\quad \text{s.t.}\ UU^{T}=I \tag{1}$$

where $m$ is the number of target-network tasks, $n_i$ is the number of training samples of task $i$, $y_j^i$ is the label of sample $j$ of task $i$, $L(\cdot,\cdot)$ is the loss function, $b=(b_1,\dots,b_m)^{T}$ contains the bias terms of the $m$ tasks, $U\in\mathbb{R}^{d\times d}$ contains the weight parameters of the tasks, $d$ is the parameter dimension, $\|A\|_{2,1}^{2}$ is the L2,1 regularization of the matrix $A=(a_1,\dots,a_m)$, $a_i$ is the weight parameter of task $i$, $I$ is the identity matrix, and $\lambda$ is the regularization parameter. The first term of equation (1) is the total loss of the $m$ tasks; the second term uses L2,1 regularization to enforce row sparsity, and the constraint keeps $U$ orthogonal. Equation (1) can equivalently be expressed as:

$$\min_{W,D}\ \varepsilon(W)+\lambda\,\mathrm{tr}\!\left(W^{T}D^{-1}W\right)\quad \text{s.t.}\ D\succeq 0,\ \mathrm{tr}(D)\le 1 \tag{2}$$

where $\varepsilon(W)$ is the first term of equation (1), $\mathrm{tr}(W^{T}D^{-1}W)$ is the trace of the matrix, $W_i=Ua_i$ is the weight parameter of task $i$, and $D\succeq 0$ requires $D$ to be positive semi-definite. Solving for the covariance matrix $D$ decouples the multitask problem of the target network so that the tasks can be computed in parallel, i.e. the multitask target network is optimized.
A training device for the SAR-image-based image classification model training method, specifically comprising the following units:
the multi-source image acquisition unit is used for acquiring multi-source images with the same angle for the same target, wherein the multi-source images comprise SAR images, infrared images and visible light images, and labeling classification labels on the images to obtain image classification data;
the infrared-visible light characteristic acquisition unit is used for acquiring infrared-visible light characteristics through an auxiliary neural network;
the target network construction unit is used for taking the SAR single-mode image as input and outputting the predicted infrared-visible feature and the image classification result, and for training with the acquired SAR image, the classification data and the infrared-visible features extracted by the auxiliary neural network to obtain an SAR-image-based image classification model;
and the multi-source image preprocessing unit is used for processing the corresponding SAR image, the infrared image and the visible light image into pictures with consistent sizes.
The trained SAR-image-based classification model can acquire rich image information using only SAR single-mode image data, and thus shows better performance. Specifically:
an image classification device based on SAR images comprises:
the image data acquisition module is used for acquiring SAR single-mode image data to perform an image classification task;
and the SAR image classification module is used for inputting the SAR image into an image classification model which is obtained by training by any one of the training methods and is based on the SAR image, and obtaining an image classification result.
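The SAR image classification module above can be sketched as a small inference helper; a minimal sketch in PyTorch, assuming a trained model that returns (class logits, fitted features) for a batch of SAR tensors (the function and variable names are illustrative, not from the patent):

```python
import torch

def classify_sar(model, sar_image):
    # Feed one preprocessed SAR tensor (C x H x W) to a trained SAR-only
    # model that returns (logits, fitted features); report the predicted
    # class index. Only the SAR single mode is needed at inference time.
    model.eval()
    with torch.no_grad():
        logits, _ = model(sar_image.unsqueeze(0))   # add batch dimension
        return int(logits.argmax(dim=1).item())
```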
The invention uses a multitask learning method and a fusion image classification neural network based on infrared and visible light images to assist the training of an SAR-image-based classification model. The aim is to assist training through the infrared-visible features and thereby improve SAR-image-based classification accuracy. The infrared and visible light images serve only as prior knowledge during training and are not required as network input in practical application. The invention thus provides a way to overcome the limitations of the SAR single mode in practical application.
Drawings
Fig. 1 is a flowchart of an image classification model based on an SAR image.
Fig. 2 is a structural diagram of a neural network of an image classification model based on an SAR image.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments.
As shown in fig. 1 and fig. 2, the SAR-image-based image classification model training method provided by the invention, which needs only the SAR single-mode image as input, includes the following steps:
step S1, acquiring a multi-source image data set for an image classification task, which comprises the following steps:
step S11, acquiring multi-source images with the same angle for the same target, wherein the multi-source images comprise SAR images, infrared images and visible light images, dividing a training set and a verification set according to a certain proportion, and establishing an annotation file for storing image classification data;
step S12, preprocessing the obtained multi-source images; in this embodiment all image data are cropped to 224 × 224 to satisfy the network input size shown in fig. 1 and fig. 2, yielding a multi-modal data set for the image classification task;
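The preprocessing in step S12 can be sketched as follows; bilinear resizing and [0, 1] scaling are assumptions on my part — the patent only requires the three modalities to share a consistent 224 × 224 size:

```python
import numpy as np
from PIL import Image

def preprocess(img, size=224):
    # Resize one modality image (SAR, infrared, or visible light) to the
    # 224 x 224 network input and scale pixel values to [0, 1].
    img = img.convert("RGB").resize((size, size), Image.BILINEAR)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return arr.transpose(2, 0, 1)                  # HWC -> CHW
```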
step S2, constructing and training an auxiliary neural network by using the acquired infrared and visible light image data and image classification data, which comprises the following specific steps:
Step S21, constructing the auxiliary neural network shown in fig. 1: the single-mode feature extraction networks adopt the main convolutional layers of the ResNet-50 structure; the fusion layer is a convolutional layer, and in this embodiment the size and channel count of the infrared-visible feature are required to match the single-mode feature maps; the prediction layer adopts the ResNet-50 structure, i.e. the pooling layer at the tail of the network followed by the fully-connected layer. The specific structure of ResNet is shown in Table 1:
table 1: ResNet network structure
Step S22, training an auxiliary neural network by taking the acquired infrared and visible light image data and image classification data as input, wherein the classification precision of the auxiliary neural network is required to be more than or equal to 0.95 in the embodiment;
Step S23, extracting the infrared-visible features of the auxiliary neural network trained in step S22, i.e. the output of the fusion layer as shown in fig. 2; the saved features are normalized with a Sigmoid function.
S3, constructing and training a target network by using the acquired SAR image data, the infrared-visible light characteristics and the image classification data stored in the step S23, and specifically comprising the following steps:
step S31, constructing a target network, as shown in FIG. 1, in this embodiment, a DenseNet-121 network structure is adopted to construct the target network; wherein, the specific structure of DenseNet is shown in Table 2:
table 2: DenseNet network architecture
Step S32, training the target network: the network takes the SAR single-mode image as input; the main task is image classification and the auxiliary task is fitting the infrared-visible features saved in step S23, so the network has two outputs, the SAR image classification result and the fitted infrared-visible features. The error of the network is composed of the two outputs and their corresponding true values, and the loss is expressed as:
Loss = loss1 + loss2 · φ
where loss1 is the loss between the classification result predicted by the network and the true classification result, loss2 is the loss between the feature map produced from the SAR image by the main convolutional layers and the infrared-visible features, and φ is a hyperparameter balancing loss1 and loss2. loss1 adopts the cross-entropy loss function and loss2 the mean square error loss function; the two losses jointly determine the weight updates of the target network, and training yields an SAR-image-based image classification model.
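The combined loss of step S32 can be written directly from the formula; the value φ = 0.1 below is an assumed placeholder, since the patent leaves the hyperparameter unspecified:

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, labels, fitted_feats, saved_feats, phi=0.1):
    # Loss = loss1 + loss2 * phi: cross-entropy between predicted and true
    # classes plus mean-square error between the fitted and the saved
    # infrared-visible features. phi balances the two terms.
    loss1 = F.cross_entropy(logits, labels)
    loss2 = F.mse_loss(fitted_feats, saved_feats)
    return loss1 + loss2 * phi
```

Backpropagating this single scalar updates the trunk with gradients from both tasks, which is how the auxiliary feature-fitting task shapes the classifier's representation.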
The invention provides a training device of an image classification model based on an SAR image, which is based on the training method of the image classification model based on the SAR image, and specifically comprises the following units:
the system comprises a multi-source image acquisition unit, a multi-source image classification unit and a multi-source image classification unit, wherein the multi-source image acquisition unit is used for acquiring multi-source images with the same angle for the same target, and marking classification labels on the images to obtain image classification data;
and the infrared-visible light characteristic acquisition unit is used for acquiring the infrared-visible light characteristics through the auxiliary neural network.
And the target network construction unit is used for training to obtain an image classification model based on the SAR image by taking the SAR single-mode image as input and the predicted infrared-visible light characteristic and the image classification result as output and by utilizing the acquired SAR image, the classification result data and the infrared-visible light characteristic extracted by the auxiliary neural network.
And the multi-source image preprocessing unit is used for processing the corresponding SAR image, the infrared image and the visible light image into pictures with consistent sizes.
The above embodiments do not limit the present invention; all embodiments that meet the requirements of the present invention fall within its scope.
Those skilled in the art will appreciate that the invention may be practiced without these specific details.
Claims (6)
1. An image classification model training method based on SAR images is characterized by comprising the following steps:
step S1, multi-source image data including SAR, infrared and visible light image data and image classification data are obtained aiming at the same target;
step S2, constructing and training an auxiliary neural network by using the acquired infrared and visible light images, wherein the auxiliary neural network takes the preprocessed infrared and visible light images as input and takes the predicted image classification result and the infrared-visible light characteristics as output, and the method specifically comprises the following steps:
step S21: constructing an auxiliary neural network: the main convolutional layers of a ResNet-50 network structure extract the infrared and the visible light single-mode features respectively; a convolutional layer fuses them to obtain the infrared-visible feature; and the prediction layer of the ResNet-50 structure serves as the prediction layer, taking the infrared-visible feature as input and outputting the predicted image classification result;
step S22: training an auxiliary neural network by using the acquired infrared and visible light images and image classification data;
step S23: extracting and storing the infrared-visible light characteristics of the auxiliary neural network trained in the step S22;
step S3, constructing and training a target network by using the acquired SAR image and the infrared-visible features saved in step S23; the target network needs only the SAR single-mode image as input and outputs an image classification result and a fitted infrared-visible feature; the specific steps are as follows:
step S31: constructing a target network by adopting a DenseNet-121 network structure;
step S32: training the target network by using the acquired SAR image, the image classification data and the infrared-visible features saved in step S23; the two target outputs of the network are the image classification result and the infrared-visible features, the errors between the two outputs and their corresponding true values form the actual error of the network, and the loss is expressed as:
Loss = loss1 + loss2 · φ
where loss1 is the loss between the classification result predicted by the network and the true classification result, loss2 is the loss between the infrared-visible features and the infrared-visible features fitted by the network, and φ is a hyperparameter balancing loss1 and loss2; the two losses jointly determine the weight updates of the target network, yielding an SAR-image-based image classification model.
2. The SAR image-based image classification model training method according to claim 1, characterized in that the classification accuracy requirement of the auxiliary neural network is greater than or equal to 0.95.
3. The SAR-image-based image classification model training method according to claim 1, wherein in the target network loss function, loss1 uses the cross-entropy loss function and loss2 uses the mean square error loss function.
4. The SAR image-based image classification model training method according to claim 1, characterized in that: the target network is a multitask target network.
5. The SAR image-based image classification model training method according to claim 4, characterized in that:
the objective function of the multitask target network is:

$$\min_{U,A,b}\ \sum_{i=1}^{m}\sum_{j=1}^{n_i} L\!\left(y_j^i,\ a_i^{T}U^{T}x_j^i+b_i\right)+\lambda\|A\|_{2,1}^{2}\quad \text{s.t.}\ UU^{T}=I \tag{1}$$

where $m$ is the number of target-network tasks, $n_i$ is the number of training samples of task $i$, $y_j^i$ is the label of sample $j$ of task $i$, $L(\cdot,\cdot)$ is the loss function, $b=(b_1,\dots,b_m)^{T}$ contains the bias terms of the $m$ tasks, $U\in\mathbb{R}^{d\times d}$ contains the weight parameters of the tasks, $d$ is the parameter dimension, $\|A\|_{2,1}^{2}$ is the L2,1 regularization of the matrix $A=(a_1,\dots,a_m)$, $a_i$ is the weight parameter of task $i$, $I$ is the identity matrix, and $\lambda$ is the regularization parameter; the first term of equation (1) is the total loss of the $m$ tasks, the second term uses L2,1 regularization to enforce row sparsity, and the constraint keeps $U$ orthogonal; equation (1) can equivalently be expressed as:

$$\min_{W,D}\ \varepsilon(W)+\lambda\,\mathrm{tr}\!\left(W^{T}D^{-1}W\right)\quad \text{s.t.}\ D\succeq 0,\ \mathrm{tr}(D)\le 1 \tag{2}$$
6. A training device for the SAR-image-based image classification model training method, specifically comprising the following units:
the system comprises a multi-source image acquisition unit, a multi-source image classification unit and a multi-source image classification unit, wherein the multi-source image acquisition unit is used for acquiring multi-source images with the same angle for the same target, and marking classification labels on the images to obtain image classification data;
the infrared-visible light characteristic acquisition unit is used for acquiring infrared-visible light characteristics through an auxiliary neural network;
the target network construction unit is used for taking the SAR single-mode image as input, outputting the predicted infrared-visible light characteristic and the image classification result, and training by using the acquired SAR image, the classification result data and the infrared-visible light characteristic extracted by the auxiliary neural network to obtain an image classification model based on the SAR image;
and the multi-source image preprocessing unit is used for processing the corresponding SAR image, the infrared image and the visible light image into pictures with consistent sizes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210340575.2A CN114612729A (en) | 2022-03-31 | 2022-03-31 | Image classification model training method and device based on SAR image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210340575.2A CN114612729A (en) | 2022-03-31 | 2022-03-31 | Image classification model training method and device based on SAR image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114612729A true CN114612729A (en) | 2022-06-10 |
Family
ID=81867590
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210340575.2A Pending CN114612729A (en) | 2022-03-31 | 2022-03-31 | Image classification model training method and device based on SAR image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114612729A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117130394A (en) * | 2023-10-26 | 2023-11-28 | 科莱克芯电科技(深圳)有限公司 | Photovoltaic equipment control method and system based on artificial intelligence |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109583412A (en) * | 2018-12-07 | 2019-04-05 | 中国科学院遥感与数字地球研究所 | A kind of training method and its ship detecting method carrying out ship detecting using convolutional neural networks |
CN110363215A (en) * | 2019-05-31 | 2019-10-22 | 中国矿业大学 | The method that SAR image based on production confrontation network is converted into optical imagery |
- 2022-03-31: CN application CN202210340575.2A filed as patent CN114612729A/en, status active, Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109583412A (en) * | 2018-12-07 | 2019-04-05 | 中国科学院遥感与数字地球研究所 | A kind of training method and its ship detecting method carrying out ship detecting using convolutional neural networks |
CN110363215A (en) * | 2019-05-31 | 2019-10-22 | 中国矿业大学 | The method that SAR image based on production confrontation network is converted into optical imagery |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117130394A (en) * | 2023-10-26 | 2023-11-28 | 科莱克芯电科技(深圳)有限公司 | Photovoltaic equipment control method and system based on artificial intelligence |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108038445B (en) | SAR automatic target identification method based on multi-view deep learning framework | |
CN109584213B (en) | Multi-target number selection tracking method | |
CN106056070B (en) | Restore the SAR target identification method with rarefaction representation based on low-rank matrix | |
US10755146B2 (en) | Network architecture for generating a labeled overhead image | |
Wang et al. | Target detection and recognition based on convolutional neural network for SAR image | |
Yan et al. | Monocular depth estimation with guidance of surface normal map | |
CN113705331B (en) | SAR ship detection method based on quaternary feature pyramid network | |
CN113536963A (en) | SAR image airplane target detection method based on lightweight YOLO network | |
CN116310852A (en) | Double-time-phase remote sensing image unsupervised classification and change detection method and system | |
CN114612729A (en) | Image classification model training method and device based on SAR image | |
Karthikeswaran et al. | RETRACTED ARTICLE: Video surveillance system against anti-terrorism by Using Adaptive Linear Activity Classification (ALAC) Technique | |
Sun et al. | Cycle-SfM: Joint self-supervised learning of depth and camera motion from monocular image sequences | |
Jiang et al. | Semantic segmentation network combined with edge detection for building extraction in remote sensing images | |
CN116543192A (en) | Remote sensing image small sample classification method based on multi-view feature fusion | |
Xue et al. | Target recognition for SAR images based on heterogeneous CNN ensemble | |
CN115984592A (en) | Point-line fusion feature matching method based on SuperPoint + SuperGlue | |
Yin et al. | M2F2-RCNN: Multi-functional faster RCNN based on multi-scale feature fusion for region search in remote sensing images | |
Xu et al. | SAR target recognition based on variational autoencoder | |
Oh et al. | Local selective vision transformer for depth estimation using a compound eye camera | |
Shuai et al. | SAFuseNet: integration of fusion and detection for infrared and visible images | |
CN108985445A (en) | A kind of target bearing SAR discrimination method based on machine Learning Theory | |
Zhai et al. | Computational Resource Constrained Deep Learning Based Target Recognition from Visible Optical Images. | |
CN111339836A (en) | SAR image ship target detection method based on transfer learning | |
Zhou et al. | LSCB: a lightweight feature extraction block for SAR automatic target recognition and detection | |
Yang et al. | Remote Sensing Image Object Detection Based on Improved YOLOv3 in Deep Learning Environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||