CN112395958A - Remote sensing image small target detection method based on four-scale depth and shallow layer feature fusion - Google Patents
- Publication number
- CN112395958A CN112395958A CN202011183190.7A CN202011183190A CN112395958A CN 112395958 A CN112395958 A CN 112395958A CN 202011183190 A CN202011183190 A CN 202011183190A CN 112395958 A CN112395958 A CN 112395958A
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- network structure
- sensing image
- feature fusion
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V 20/13 — Satellite images (G06V 20/00 Scenes; 20/10 Terrestrial scenes)
- G06V 20/194 — Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
- G06F 18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F 18/2411 — Classification techniques based on the proximity to a decision surface, e.g. support vector machines
- G06F 18/253 — Fusion techniques of extracted features
- G06N 3/045 — Neural network architecture: combinations of networks
- G06N 3/08 — Neural network learning methods
Abstract
The invention provides a remote sensing image small target detection method based on four-scale fusion of deep and shallow features, comprising the following steps: constructing a remote sensing image small target detection network structure based on four-scale deep and shallow feature fusion; training the network structure with transfer learning to obtain a trained network structure; and inputting a remote sensing dataset into the trained network structure to obtain the target detection results for the remote sensing images. During network training, VGG16 extracts features from each layer of the input image, and a feature fusion module fuses the extracted layer features to obtain 4 output feature layers; the output feature layers are fed to the detection layer, and the network structure is trained with an improved loss function to obtain the trained network. Beneficial effects of the invention: the small target detection capability, speed, robustness and accuracy on high-resolution remote sensing images are improved.
Description
Technical Field
The invention relates to the field of remote sensing image processing, and in particular to a remote sensing image small target detection method based on four-scale fusion of deep and shallow features.
Background
Small target detection in remote sensing images is one of the research hotspots in the remote sensing field. The development of high spatial resolution (HSR) remote sensing sensors has accelerated the acquisition of aerial and satellite images with sufficiently detailed spatial structure information. These images support a wide range of military and civilian applications, such as ocean monitoring, urban monitoring, cargo transportation and port management. Unlike natural images of the ground captured from a horizontal viewpoint, high spatial resolution remote sensing images are acquired from a top-down perspective and are therefore easily affected by weather and illumination. In addition, the small and varying scales of multi-class geospatial targets and the shortage of manually labeled training samples make the detection task more challenging. Many studies have addressed small target detection in remote sensing images.
Remote sensing image target detection based on traditional machine learning mainly comprises the following steps: 1) traverse the image with a sliding window to obtain regions of interest; 2) compute low-level statistical features of the image, such as Histogram of Oriented Gradients (HOG) features; 3) train a classifier on the extracted features, commonly a Support Vector Machine (SVM); 4) decide whether each region of interest contains a target. Cheng et al. extracted features with HOG and a sliding window to achieve target recognition in remote sensing images. Aytekin et al. detected airport targets based on image texture features. This type of algorithm has two main drawbacks: 1) the sliding-window region selection strategy is untargeted, the windows are redundant, and the time complexity is high; 2) hand-crafted features designed from background knowledge are not robust to diverse environmental changes.
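The sliding-window pipeline above can be sketched roughly as follows. This is a simplified illustration, not the cited methods: the orientation-histogram feature omits HOG's cell/block normalization, and a real pipeline would then train an SVM on such features.

```python
import numpy as np

def sliding_windows(image, win=32, stride=16):
    """Enumerate (row, col, patch) windows over a grayscale image.

    This is the exhaustive region-selection strategy the text describes:
    every window gets scored, which is why the approach is slow and redundant.
    """
    h, w = image.shape
    for r in range(0, h - win + 1, stride):
        for c in range(0, w - win + 1, stride):
            yield r, c, image[r:r + win, c:c + win]

def hog_like_feature(patch, bins=9):
    """A simplified orientation-histogram feature (HOG-style, but without
    the cell/block normalization of the full Dalal-Triggs HOG)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned gradient orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

img = np.random.rand(64, 64)
feats = [hog_like_feature(p) for _, _, p in sliding_windows(img)]
print(len(feats), feats[0].shape)  # 3x3 window positions -> 9 feature vectors of length 9
```

Even this toy version shows the redundancy problem: the number of windows grows quadratically with image size and inversely with stride.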
Deep-learning-based target detection algorithms fall into two main categories: two-stage detection algorithms, represented by Faster R-CNN (Faster Region-based Convolutional Neural Network) and R-FCN (Region-based Fully Convolutional Networks), and single-stage detection algorithms, represented by the YOLO (You Only Look Once) series and SSD. In a two-stage algorithm, the first stage extracts candidate regions from the input image, and the second stage produces predictions from the mapped feature maps of those regions. A single-stage algorithm obtains prediction boxes directly from anchor boxes and generates the prediction results in the same pass. Single-stage algorithms are much faster than two-stage algorithms, but slightly less accurate. With the rapid development of deep learning technology, researchers have begun applying deep learning methods to target detection in remote sensing images.
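To make the anchor-box mechanism concrete, the sketch below computes the IoU between some hypothetical anchor boxes and a ground-truth box and picks the best match — the matching step that single-stage detectors such as SSD rely on (box coordinates here are invented for illustration):

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

anchors = np.array([[0, 0, 10, 10], [5, 5, 15, 15], [20, 20, 30, 30]], float)
gt = np.array([4, 4, 14, 14], float)
scores = [iou(a, gt) for a in anchors]
best = int(np.argmax(scores))
print(best, round(scores[best], 3))  # anchor 1 overlaps the target most
```

In a real detector each anchor whose IoU with a ground-truth box exceeds a threshold becomes a positive sample; the regression head then refines that anchor into the final prediction box.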
Disclosure of Invention
The invention provides a remote sensing image small target detection method based on four-scale fusion of deep and shallow features, addressing the following problems: in high-resolution remote sensing images, small targets occupy few pixels and carry weak feature information, so missed and false detections occur frequently and seriously degrade the detection results; moreover, high-quality training data for small targets are insufficient. The technical problem actually solved by the invention is: how to improve the small target detection capability, speed, robustness and accuracy on high-resolution remote sensing images.
The invention provides a remote sensing image small target detection method based on four-scale depth and shallow layer feature fusion, which specifically comprises the following steps:
s101: constructing a remote sensing image small target detection network structure based on four-scale depth and shallow layer feature fusion; the network structure is an improved SSD network;
s102: training the network structure by adopting transfer learning to obtain a trained network structure;
s103: inputting a remote sensing data set to the trained network structure to obtain a target detection result of a remote sensing image;
in the network structure training process, extracting features of each layer of an input image by adopting VGG16, and fusing the extracted features of each layer by utilizing a feature fusion module to obtain 4 output feature layers;
and inputting the output characteristic layer to a detection layer, and training the network structure by using an improved loss function to obtain the trained network structure.
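As a rough shape-level illustration of how a VGG16-style backbone yields feature layers at four successively coarser scales, the sketch below runs a toy single-channel image through four 2×2 max-pooling stages. The 3×3 convolution stacks that VGG16 interleaves between poolings are omitted, and the exact strides of the patent's four output layers are not stated in the text, so this is an assumption-laden sketch, not the patented network:

```python
import numpy as np

def downsample(x, k=2):
    """k x k max pooling -- the usual VGG16 stage transition."""
    h, w = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).max(axis=(1, 3))

# Toy "image"; real VGG16 applies stacks of 3x3 convolutions between poolings.
x = np.random.rand(64, 64)
pyramid = []
for _ in range(4):
    x = downsample(x)
    pyramid.append(x)
print([p.shape for p in pyramid])  # [(32, 32), (16, 16), (8, 8), (4, 4)]
```

The point of keeping four scales is visible even here: the shallow (large) maps retain fine spatial detail useful for small targets, while the deep (small) maps summarize larger context.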
The beneficial effects provided by the invention are as follows: the small target detection capability, speed, robustness and accuracy of the high-resolution remote sensing image are improved.
Drawings
FIG. 1 is a schematic diagram of an improved network architecture of the present invention;
FIG. 2 is a diagram of the feature fusion process of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be further described with reference to the accompanying drawings.
A remote sensing image small target detection method based on four-scale depth and shallow layer feature fusion comprises the following steps:
s101: constructing a remote sensing image small target detection network structure based on four-scale depth and shallow layer feature fusion; the network structure is an improved SSD network;
s102: training the network structure by adopting transfer learning to obtain a trained network structure;
in the network structure training process, extracting features of each layer of an input image by adopting VGG16, and fusing the extracted features of each layer by utilizing a feature fusion module to obtain 4 output feature layers;
referring to fig. 1, fig. 1 is a schematic diagram of an improved network structure according to the present invention; when the feature fusion module fuses features of each layer, only the first 4 feature layers are fused, and the two latter feature layers are not changed.
The feature fusion module adopts any one of a feature splicing algorithm or a feature addition algorithm.
Referring to fig. 2, to fuse feature layers of different scales, the shallow network features are rescaled by a max pooling operation and fused with the next feature layer. A 3 × 3 convolution, batch normalization (BN) and ReLU are then applied: batch normalization accelerates training, and the ReLU activation function strengthens the nonlinearity between feature layers while further enhancing the features.
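A minimal sketch of the fusion step just described, assuming "fusion" means channel concatenation or element-wise addition of the max-pooled shallow layer with the next (deeper) layer — the two options the patent allows. The subsequent 3×3 convolution + BN + ReLU is omitted, and the array layout and channel counts below are illustrative, not taken from the patent:

```python
import numpy as np

def maxpool2(x):
    """2x2 max pooling on a (channels, H, W) array."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def fuse(shallow, deep, mode="concat"):
    """Downscale the shallow layer to the deep layer's resolution, then fuse
    by channel concatenation or element-wise addition."""
    shallow_ds = maxpool2(shallow)
    if mode == "concat":
        return np.concatenate([shallow_ds, deep], axis=0)
    return shallow_ds + deep  # addition requires equal channel counts

shallow = np.random.rand(64, 32, 32)  # (channels, H, W)
deep = np.random.rand(64, 16, 16)
print(fuse(shallow, deep, "concat").shape)  # (128, 16, 16)
print(fuse(shallow, deep, "add").shape)     # (64, 16, 16)
```

Concatenation preserves both feature sets at the cost of doubled channels (which the 3×3 convolution would then reduce); addition keeps the channel count but blends the two signals.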
And inputting the output characteristic layer to a detection layer, and training the network structure by using an improved loss function to obtain the trained network structure.
In the SSD algorithm, the loss function is defined as:

L(x, c, l, g) = (1/N) [ L_conf(x, c) + α · L_loc(x, l, g) ]

It is a multitask loss function consisting of a position loss and a classification loss. In the formula: x denotes the matching indicators between the prior boxes and the real (ground-truth) boxes; c denotes the predicted class confidences; l is the predicted location information; g is the real box location; N is the number of prior boxes matched with real boxes (the number of positive samples); L_conf(x, c) is the classification loss; L_loc(x, l, g) is the position loss; α is a weight coefficient, set to 1 in the invention. L_loc(x, l, g) borrows the position regression function smooth_L1 from Fast R-CNN and is expressed as:

L_loc(x, l, g) = Σ_{i ∈ N_pos} Σ_{m ∈ {cx, cy, w, h}} x_ij^k · smooth_L1(l_i^m − ĝ_j^m)

In the formula: x_ij^k = 1 indicates that the i-th prior box is matched with the j-th real box of category k, and x_ij^k = 0 otherwise; N_pos denotes the set of positive samples; {cx, cy, w, h} denote the center pixel coordinates, width and height of the bounding box respectively; ĝ_j^m is the encoded real box position parameter; l_i^m is the predicted value of the prior box. The smooth_L1 function can be expressed as:

smooth_L1(x) = 0.5 x², if |x| < 1;  |x| − 0.5, otherwise
the accuracy of the target detection model depends on whether the anchor box is effectively trained.
The training process of the SSD algorithm suffers from imbalance between positive and negative samples and imbalance between the classification and localization tasks.
In the invention, samples whose position loss value is greater than or equal to 1 are called outliers, and the remaining samples are called inliers.
A natural solution is to adjust the relative weights of the two loss terms; however, because the regression targets are unbounded, directly increasing the weight of the position loss makes the model more sensitive to outliers. These outliers (hard samples) produce very large gradients that harm the training process, while the inliers (easy samples) contribute little to the total gradient.
Therefore, to alleviate these problems, the invention proposes an improved L1 loss function that increases the contribution of inliers to the regression gradient and rebalances the positive and negative samples during training, achieving more balanced classification and localization training.
Based on this idea, the invention replaces the smooth_L1 loss of the traditional SSD algorithm. The gradient formula of the improved loss is defined as:
Combining the gradient formula with the corresponding experimental results, the finally improved smooth_L1 loss function is:
wherein: a = 1, b = 2, c = 1 and d = 1/3.
S103: and inputting the remote sensing data set to the trained network structure to obtain a target detection result of the remote sensing image.
To verify its performance, the proposed algorithm was compared against four algorithms: YOLOv2, YOLOv3, YOLOv4 and SSD. All experiments were carried out on a 64-bit Windows 10 computer with a Core i7-8700K CPU and an NVIDIA GeForce GTX 1080 Ti with 11 GB of video memory, and all experimental results were obtained with the same IoU threshold of 0.5 and detection confidence threshold of 0.5.
For reliable evaluation and verification of the proposed method, the invention adopts the NWPU VHR-10 dataset released by Professor Junwei Han's team in 2014, which includes optical remote sensing images as well as color infrared images. It comprises 715 color images acquired from Google Earth Pro with spatial resolutions of 0.5 m to 2 m, and 85 color infrared images taken from the Vaihingen dataset with a spatial resolution of 0.08 m. The team divided the whole dataset into a positive set and a negative set: the positive set contains 650 images covering 10 target categories, and the negative set contains 150 images containing none of those categories. Since the negative set is mainly used for weakly supervised [31] and semi-supervised [32] learning tasks, it is not used in the experiments; the positive set is divided into a training set (80%) and a test set (20%). The dataset details are shown in table 1.
TABLE 1 NWPU VHR-10 dataset
During training, only the labeled positive sample images of the NWPU VHR-10 dataset are used; the 650 positive images are divided into 550 training images and 100 test images.
The multi-strategy training comprises two stages: a pre-training stage and a training-set stage. In the pre-training stage, the convolutional layers of VGG16 are first pre-trained on the ImageNet dataset; then the VGG16 convolutional layers in the SSD model are frozen, the other parts of the network are trained on the VOC2007 + VOC2012 datasets, and the whole network is fine-tuned. In the training stage, the network is initialized with the pre-trained model, the VGG16 convolutional layer parameters are frozen while the other parameters are trained, then the whole network is fine-tuned, finally yielding the target model.
Data augmentation uses random cropping, random rotation and scaling. The total loss function is optimized with stochastic gradient descent, the batch size is set to 10, the initial learning rate of the training stage is set to 0.001, and the learning rate decays by 50% every 10,000 iterations.
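The schedule just described (initial rate 0.001, halved every 10,000 iterations) is a simple step decay, which can be sketched as:

```python
def learning_rate(iteration, base_lr=1e-3, decay=0.5, step=10_000):
    """Step decay matching the described schedule: start at 0.001 and
    halve every 10,000 iterations."""
    return base_lr * decay ** (iteration // step)

for it in (0, 9_999, 10_000, 20_000):
    print(it, learning_rate(it))
# 0 0.001 / 9999 0.001 / 10000 0.0005 / 20000 0.00025
```

Step decay keeps large updates early (fast progress) and shrinks them later so the optimizer settles into a minimum.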
A detection is deemed correct when the IoU between the prediction box and the ground-truth box is greater than 0.5. The proposed algorithm performs better on the NWPU VHR-10 dataset than the other detection algorithms. Quantitative results of the different methods on the NWPU VHR-10 dataset, including AP values for the 10 categories and mAP values, are shown in table 2: the mAP of the Ours-2 algorithm exceeds that of YOLOv2, YOLOv3, YOLOv4 and SSD512 by 29.5%, 7.7%, 4.6% and 12.5% respectively, with particularly significant gains on storage tanks, vehicles and bridges. The results show that, compared with popular general-purpose detection algorithms, the proposed algorithm has a clear performance advantage on small target detection in remote sensing images. Comparing FPS across the methods, after removing redundant detection layers and adding the fusion module, the proposed algorithm improves detection precision while keeping detection time under control, performing well on the remote sensing small target detection task. The experimental results in table 3 confirm the choice of four scales for the output feature layers.
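For context, the AP value reported per category can be computed from the ranked detections and their IoU > 0.5 correctness flags. The following is a minimal all-point sketch, not the exact evaluation protocol of the experiments (benchmarks differ in interpolation details, and this version assumes every ground-truth object appears exactly once among the detections):

```python
import numpy as np

def average_precision(scores, correct):
    """AP as area under the precision-recall curve for one class, given
    per-detection confidence scores and IoU>0.5 correctness flags."""
    order = np.argsort(scores)[::-1]          # rank detections by confidence
    tp = np.asarray(correct, dtype=float)[order]
    n_gt = tp.sum()                            # assumption: all GT detected once
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1)
    recall = cum_tp / n_gt
    # integrate the P-R curve with rectangles between successive recall points
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

print(round(average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 1]), 3))
```

mAP is then simply the mean of these per-category AP values over the 10 classes.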
TABLE 2 comparison of Algorithm Performance
TABLE 3 average accuracy of different scales as output feature layers
The feature fusion structure provides more spatial structure information about the detected targets, yielding stronger semantic information and richer detail. Experimental results show that the proposed algorithm performs excellently on small target detection in remote sensing images, and demonstrate that fully exploiting deep and shallow feature information helps to obtain a better target detector.
The invention studies fast small target detection in optical remote sensing images by improving the network structure model: to raise detection accuracy and speed, the number of network layers is modified, the feature fusion module is added, and experimental evaluation is carried out on the verification set. Experiments prove that the BFSSD algorithm alleviates the low small-target detection precision of the SSD algorithm; analyzing the detection results on small targets such as airplanes, storage tanks and vehicles yields the following conclusions:
1) the algorithm network structure provided by the invention is more reasonable, the training process is easier to converge, and good experimental results can be obtained under the support of transfer learning.
2) Compared with the original SSD algorithm and the YOLO series algorithm, the algorithm provided by the invention has obviously improved overall precision, has more obvious effect on detecting small targets, and shows that better semantic information and position information are obtained after deep-layer features and shallow-layer features are fused by the fusion module provided by the invention.
3) Compared with the original SSD algorithm, the average precision of the proposed algorithm is greatly improved while its detection speed is slightly lower; considering the mAP and FPS indicators together, the proposed algorithm performs best among all compared algorithms.
The remote sensing image small target detection algorithm based on four-scale fusion of deep and shallow features is valuable for efficiently applying deep learning to fast target detection in remote sensing images, and has particular advantages for small targets. It provides a reference for the remote sensing image processing field and related work such as traffic management and military target detection; future work may continue to optimize the network structure and improve its computation speed.
The beneficial effects provided by the invention are as follows: the small target detection capability, speed, robustness and accuracy of the high-resolution remote sensing image are improved.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (5)
1. A remote sensing image small target detection method based on four-scale depth and shallow layer feature fusion comprises the following steps:
the method is characterized in that:
s101: constructing a remote sensing image small target detection network structure based on four-scale depth and shallow layer feature fusion; the network structure is an improved SSD network;
s102: training the network structure by adopting transfer learning to obtain a trained network structure;
s103: inputting a remote sensing data set to the trained network structure to obtain a target detection result of a remote sensing image;
in the network structure training process, extracting features of each layer of an input image by adopting VGG16, and fusing the extracted features of each layer by utilizing a feature fusion module to obtain 4 output feature layers;
and inputting the output characteristic layer to a detection layer, and training the network structure by using an improved loss function to obtain the trained network structure.
2. The method for detecting the small target of the remote sensing image based on the four-scale depth-shallow layer feature fusion as claimed in claim 1, characterized in that:
the feature fusion module adopts any one of a feature splicing algorithm or a feature addition algorithm.
3. The method for detecting the small target of the remote sensing image based on the four-scale depth-shallow layer feature fusion as claimed in claim 2, characterized in that:
when the feature fusion module fuses features of each layer, only the first 4 feature layers are fused, and the two latter feature layers are not changed.
4. The method for detecting the small target of the remote sensing image based on the four-scale depth-shallow layer feature fusion as claimed in claim 3, characterized in that: the feature fusion module fuses by using 3 × 3 convolution, batch normalization, and ReLU.
5. The method for detecting the small target of the remote sensing image based on the four-scale depth-shallow layer feature fusion as claimed in claim 1, characterized in that:
the formula of the improvement loss function is as follows:
wherein x represents the matching between prior boxes and real boxes; c represents the prediction box confidences; l is the predicted location information; N is the number of prior boxes matched with real boxes; L_conf(x, c) is the classification loss; L_loc(x, l, g) is the position loss; α is a weight coefficient with a preset value; x_ij^k = 1 indicates that the i-th prior box is matched with the j-th real box of category k, and x_ij^k = 0 otherwise; N_pos represents the set of positive samples; {cx, cy, w, h} represent the center pixel coordinates, width and height of the bounding box respectively; a, b, c and d are preset values.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011183190.7A CN112395958A (en) | 2020-10-29 | 2020-10-29 | Remote sensing image small target detection method based on four-scale depth and shallow layer feature fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011183190.7A CN112395958A (en) | 2020-10-29 | 2020-10-29 | Remote sensing image small target detection method based on four-scale depth and shallow layer feature fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112395958A true CN112395958A (en) | 2021-02-23 |
Family
ID=74598463
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011183190.7A Pending CN112395958A (en) | 2020-10-29 | 2020-10-29 | Remote sensing image small target detection method based on four-scale depth and shallow layer feature fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112395958A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109359557A (en) * | 2018-09-25 | 2019-02-19 | 东北大学 | A kind of SAR remote sensing images Ship Detection based on transfer learning |
WO2019160975A1 (en) * | 2018-02-13 | 2019-08-22 | Slingshot Aerospace, Inc. | Conditional loss function modification in a neural network |
CN110728658A (en) * | 2019-09-16 | 2020-01-24 | 武汉大学 | High-resolution remote sensing image weak target detection method based on deep learning |
CN111091105A (en) * | 2019-12-23 | 2020-05-01 | 郑州轻工业大学 | Remote sensing image target detection method based on new frame regression loss function |
CN111666836A (en) * | 2020-05-22 | 2020-09-15 | 北京工业大学 | High-resolution remote sensing image target detection method of M-F-Y type lightweight convolutional neural network |
CN111723748A (en) * | 2020-06-22 | 2020-09-29 | 电子科技大学 | Infrared remote sensing image ship detection method |
CN111797676A (en) * | 2020-04-30 | 2020-10-20 | 南京理工大学 | High-resolution remote sensing image target on-orbit lightweight rapid detection method |
- 2020-10-29: application CN202011183190.7A filed; published as CN112395958A (status: Pending)
Non-Patent Citations (3)
Title |
---|
YANGYANG LI et al.: "Anchor-Free Single Stage Detector in Remote Sensing Images Based on Multiscale Dense Path Aggregation Feature Pyramid Network", IEEE Access * |
ZHAO Yanan et al.: "Small Target Detection Algorithm Based on Multi-Scale Fusion SSD", Computer Engineering * |
CHEN Xiaobo: "Object Detection in Optical Remote Sensing Images Based on Multi-Scale Feature Fusion and Oriented Bounding Box Prediction", China Masters' Theses Full-text Database (Engineering Science and Technology II) * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113177456A (en) * | 2021-04-23 | 2021-07-27 | 西安电子科技大学 | Remote sensing target detection method based on single-stage full convolution network and multi-feature fusion |
CN113177456B (en) * | 2021-04-23 | 2023-04-07 | 西安电子科技大学 | Remote sensing target detection method based on single-stage full convolution network and multi-feature fusion |
CN113537244A (en) * | 2021-07-23 | 2021-10-22 | 深圳职业技术学院 | Livestock image target detection method and device based on light-weight YOLOv4 |
CN113537244B (en) * | 2021-07-23 | 2024-03-15 | 深圳职业技术学院 | Livestock image target detection method and device based on lightweight YOLOv4 |
CN113688830A (en) * | 2021-08-13 | 2021-11-23 | 湖北工业大学 | Deep learning target detection method based on central point regression |
CN113688830B (en) * | 2021-08-13 | 2024-04-26 | 湖北工业大学 | Deep learning target detection method based on center point regression |
CN113903009A (en) * | 2021-12-10 | 2022-01-07 | 华东交通大学 | Railway foreign matter detection method and system based on improved YOLOv3 network |
CN113903009B (en) * | 2021-12-10 | 2022-07-05 | 华东交通大学 | Railway foreign matter detection method and system based on improved YOLOv3 network |
CN115063651A (en) * | 2022-07-08 | 2022-09-16 | 北京百度网讯科技有限公司 | Training method and device for target object detection model and computer program product |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110276269B (en) | Remote sensing image target detection method based on attention mechanism | |
Wang et al. | Multiscale visual attention networks for object detection in VHR remote sensing images | |
Zou et al. | Random access memories: A new paradigm for target detection in high resolution aerial remote sensing images | |
Chen et al. | Multi-scale spatial and channel-wise attention for improving object detection in remote sensing imagery | |
Suhao et al. | Vehicle type detection based on deep learning in traffic scene | |
CN112395958A (en) | Remote sensing image small target detection method based on four-scale depth and shallow layer feature fusion | |
Li et al. | Road network extraction via deep learning and line integral convolution | |
CN110929607B (en) | Remote sensing identification method and system for urban building construction progress | |
Lu et al. | Gated and axis-concentrated localization network for remote sensing object detection | |
Zhao et al. | Incorporating metric learning and adversarial network for seasonal invariant change detection | |
CN109871902B (en) | SAR small sample identification method based on super-resolution countermeasure generation cascade network | |
CN111126202A (en) | Optical remote sensing image target detection method based on void feature pyramid network | |
Yu et al. | Vehicle detection from high-resolution remote sensing imagery using convolutional capsule networks | |
Peng et al. | Object-based change detection from satellite imagery by segmentation optimization and multi-features fusion | |
CN114511710A (en) | Image target detection method based on convolutional neural network | |
CN113344045A (en) | Method for improving SAR ship classification precision by combining HOG characteristics | |
Liu et al. | Building segmentation from satellite imagery using U-Net with ResNet encoder | |
Thirumaladevi et al. | Remote sensing image scene classification by transfer learning to augment the accuracy | |
Lv et al. | Novel automatic approach for land cover change detection by using VHR remote sensing images | |
Liu et al. | Density saliency for clustered building detection and population capacity estimation | |
Zhao et al. | Vehicle counting in very low-resolution aerial images via cross-resolution spatial consistency and Intraresolution time continuity | |
CN113128564B (en) | Typical target detection method and system based on deep learning under complex background | |
Wang et al. | MashFormer: A novel multiscale aware hybrid detector for remote sensing object detection | |
Wan et al. | Random Interpolation Resize: A free image data augmentation method for object detection in industry | |
CN114283336A (en) | Anchor-frame-free remote sensing image small target detection method based on mixed attention |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210223 |