CN110991359A - Satellite image target detection method based on multi-scale depth convolution neural network - Google Patents

Satellite image target detection method based on multi-scale depth convolution neural network

Info

Publication number
CN110991359A
Authority
CN
China
Prior art keywords
neural network
data set
scale
satellite image
target detection
Prior art date
Legal status
Pending
Application number
CN201911243932.8A
Other languages
Chinese (zh)
Inventor
丁忆
李朋龙
曾安明
李晓龙
马泽忠
肖禾
罗鼎
段松江
胡艳
王岚
陈静
刘金龙
刘朝晖
魏文杰
谭攀
范文武
林熙
刘建
叶涛
袁力
Current Assignee
Chongqing Geographic Information And Remote Sensing Application Center (chongqing Surveying And Mapping Product Quality Inspection And Testing Center)
Original Assignee
Chongqing Geographic Information And Remote Sensing Application Center (chongqing Surveying And Mapping Product Quality Inspection And Testing Center)
Priority date
Filing date
Publication date
Application filed by Chongqing Geographic Information And Remote Sensing Application Center (Chongqing Surveying And Mapping Product Quality Inspection And Testing Center)
Priority to CN201911243932.8A
Publication of CN110991359A

Classifications

    • G06V 20/13: Satellite images (Physics > Computing > Image or video recognition or understanding > Scenes; scene-specific elements > Terrestrial scenes)
    • G06N 3/045: Combinations of networks (Physics > Computing > Computing arrangements based on specific computational models > Computing arrangements based on biological models > Neural networks > Architecture, e.g. interconnection topology)
    • G06N 3/08: Learning methods (Physics > Computing > Computing arrangements based on specific computational models > Computing arrangements based on biological models > Neural networks)
    • G06V 2201/07: Target detection (Physics > Computing > Image or video recognition or understanding > Indexing scheme relating to image or video recognition or understanding)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a satellite image target detection method based on a multi-scale deep convolutional neural network, which comprises the following steps: collecting a satellite image training data set and labeling samples; preprocessing the satellite image training data set; building a multi-scale deep convolutional neural network; inputting the preprocessed training data set into a target detection framework based on the multi-scale deep convolutional neural network for training, to obtain a trained target detection neural network; and inputting a satellite image set to be detected, performing target detection with the trained network, and outputting the recognition results. The remarkable effects are as follows: the method strengthens the network's ability to detect fine-grained features and to distinguish different objects, improves the detection of small objects and dense object groups, is more robust, effectively improves target detection efficiency, and reduces hardware requirements.

Description

Satellite image target detection method based on multi-scale depth convolution neural network
Technical Field
The invention relates to the technical field of convolutional-neural-network-based detection of targets in satellite images, and in particular to a satellite image target detection method based on a multi-scale deep convolutional neural network.
Background
In recent years, the efficiency of deep-learning-based target detection algorithms has improved greatly, but a series of problems remain when they are applied to satellite images. Satellite imagery is an important resource: it can be used to survey land resources, monitor ground conditions, and record changes of the Earth's surface from a high vantage point over long periods.
However, satellite images are very large while the objects in them are relatively small, and high-quality labeled satellite training data are scarce, which makes target detection in satellite imagery a challenging task. For example: the detection of small objects and dense object groups in satellite images is poor; existing target detection algorithms lack generalization to unusual object scales or new image distributions, and because objects may appear in any orientation and at widely varying size ratios, the limited range of scales handled by an algorithm can cause detection of particular targets to fail; moreover, existing target detection algorithms process the whole image at once, but for satellite images with hundreds of millions of pixels the memory of a graphics card can hardly meet such a large requirement.
Disclosure of Invention
In view of the deficiencies of the prior art, the object of the present invention is to provide a satellite image target detection method based on a multi-scale deep convolutional neural network, so as to overcome the drawbacks described in the background art.
In order to achieve this purpose, the technical solution adopted by the invention is as follows:
a satellite image target detection method based on a multi-scale depth convolution neural network is characterized by comprising the following steps: the method comprises the following steps:
step 1: collecting a satellite image training data set, and carrying out sample labeling;
step 2: preprocessing a satellite image training data set;
step 3: building a multi-scale deep convolutional neural network, wherein the building steps are as follows:
step 3.1: constructing basic grouped convolution modules comprising 17 convolutional layers in total, and connecting a pooling layer after each of the 1st, 2nd, 5th and 8th convolutional layers;
step 3.2: constructing a residual-connection convolution module on the basis of step 3.1, and connecting the output end of each layer of the residual-connection convolution module in turn to a batch normalization module and a LeakyReLU activation function module;
step 3.3: connecting a channel layer after the 15th convolutional layer, and connecting a linear activation function module to each output end of the last convolutional layer, to form a multi-scale deep convolutional neural network with a 22-layer structure in total; the output end of the multi-scale deep convolutional neural network is connected to a Focal loss function module;
step 4: inputting the preprocessed training data set into a target detection framework based on the multi-scale deep convolutional neural network for training, to obtain a trained target detection neural network;
step 5: inputting a satellite image set to be detected, performing target detection with the trained target detection neural network, and outputting the recognition results.
Further, the samples labeled in step 1 include an automobile sample data set, a building footprint sample data set, an airplane sample data set, a ship sample data set, and an airport sample data set, wherein:
the automobile sample data set adopts the COWC data set; the 15 cm ground sample distance (GSD) images are processed with a Gaussian kernel, and a 3 m bounding box is labeled for each automobile at a 30 cm GSD;
the building footprint sample data set adopts the SpaceNet data set, with samples labeled at a 30 cm GSD.
Further, the preprocessing in step 2 comprises the following specific steps:
step 2.1: rotating and cropping the collected satellite image training data set according to a preset image size and a preset overlap rate;
step 2.2: performing HSV enhancement on the training data blocks obtained after cropping, to form the complete training data set.
Further, the training process of the target detection framework of the multi-scale deep convolutional neural network in step 4 is as follows:
step 4.1: selecting equipment used in the training process, and setting parameters;
step 4.2: selecting at least two training targets on a classifier of the initialized multi-scale deep convolutional neural network;
step 4.3: setting windows of different scales according to the different training targets, and sliding the windows over the large image at the different scales to detect the training targets;
step 4.4: obtaining the weights for each training target after training the multi-scale deep convolutional neural network.
Further, the training device is an NVIDIA Titan X GPU; the parameter settings include: the learning rate is set to 0.001, the weight decay is set to 0.0005, and the momentum is set to 0.9.
Further, the training targets are ships and airplanes.
Further, the expression of the Focal loss function is as follows:
FL(p_t) = -α_t (1 - p_t)^γ log(p_t),
where α_t is the balance parameter, γ is the suppression parameter, and p_t is the predicted value.
Further, the specific steps of performing target detection on the satellite image set to be detected in step 5 are as follows:
step 5.1: cropping the satellite image to be detected into small tiles with an overlap rate of 15%, and inputting the tiles into the target detection neural network;
step 5.2: stitching the detected tiles back into a complete image according to their position marks, and performing non-maximum suppression on the stitched image to obtain the output result.
The invention modifies the network architecture of the deep convolutional neural network, uses cropped regional images as input, and adopts a multi-scale detection scheme for target detection, thereby improving the network's ability to detect fine-grained features and to distinguish different objects, and optimizing for high-resolution input and dense small objects. Meanwhile, because traditional target detection algorithms lack generalization to unusual object scales or new image distributions, the method applies rotation and random HSV (hue, saturation, value) enhancement to the data set, so that the algorithm is more robust to different sensors, atmospheric conditions, and illumination conditions.
The invention has the following remarkable effects:
1. The invention improves the deep convolutional neural network architecture, strengthening the network's ability to detect fine-grained features and distinguish different objects, and improving the detection of small objects and dense object groups.
2. The invention applies rotation and random HSV (hue, saturation, value) enhancement to the data. This addresses the problems that traditional target detection algorithms lack generalization to unusual object scales or new image distributions, and that detection of particular targets can fail because objects appear in varying orientations and size ratios while the algorithm handles only a limited range of scales; it also makes the method more robust to different sensors, atmospheric conditions, and illumination conditions.
3. The invention addresses the problem of high-speed detection of multi-scale targets by using cropped regional images as input together with a multi-scale detection method, thereby effectively improving target detection efficiency and reducing hardware requirements.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a schematic structural diagram of the multi-scale deep convolutional neural network;
FIG. 3 shows the detection results.
Detailed Description
The following provides a more detailed description of the embodiments and the operation of the present invention with reference to the accompanying drawings.
As shown in FIG. 1, a satellite image target detection method based on a multi-scale deep convolutional neural network includes the following specific steps:
step 1: collecting a satellite image training data set, and carrying out sample labeling;
in this example, the labeled samples include an automobile sample data set, a building footprint sample data set, an airplane sample data set, a ship sample data set, and an airport sample data set, wherein:
the automobile sample data set adopts the COWC data set; the 15 cm ground sample distance (GSD) images are processed with a Gaussian kernel, and a 3 m bounding box is labeled for each automobile at a 30 cm GSD, giving 13303 labeled samples in total;
the building footprint sample data set adopts the SpaceNet data set, with samples labeled at a 30 cm GSD, giving 221336 labeled samples in total;
the airplane sample data set is built from eight DigitalGlobe images, with 230 labeled samples in total;
the ship sample data set is built from three DigitalGlobe images, with 556 labeled samples in total;
the airport sample data set uses 37 images containing airport runways as training samples, downsampled by a factor of 4.
Step 2: preprocessing a satellite image training data set;
in this example, the preprocessing includes the following specific steps:
step 2.1: using Python to rotate and crop the collected satellite image training data set according to a preset image size and a preset overlap rate;
step 2.2: performing HSV (hue, saturation, value) enhancement on the training data blocks obtained after cropping, to form the complete training data set.
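As a minimal illustrative sketch of this preprocessing (the invention specifies Python but not a particular library; OpenCV and NumPy, the 416-pixel tile size, the overlap rate, and the jitter gains below are assumptions of this example, not values fixed by the patent), the following code crops a large image into overlapping tiles, applies a random 90-degree rotation, and randomly perturbs the HSV channels:

```python
import random

import cv2
import numpy as np


def crop_tiles(image, tile_size=416, overlap=0.15):
    """Crop a large image into overlapping square tiles.

    Returns a list of (x, y, tile) tuples, where (x, y) is the top-left
    corner of the tile in the original image (needed later for stitching).
    """
    stride = max(1, int(tile_size * (1.0 - overlap)))
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, max(h - tile_size, 0) + 1, stride):
        for x in range(0, max(w - tile_size, 0) + 1, stride):
            tiles.append((x, y, image[y:y + tile_size, x:x + tile_size]))
    return tiles


def random_rotate(tile):
    """Rotate a tile by a random multiple of 90 degrees.

    Bounding-box labels must be rotated correspondingly (omitted here).
    """
    return np.rot90(tile, k=random.randint(0, 3)).copy()


def random_hsv_jitter(tile, h_gain=0.02, s_gain=0.3, v_gain=0.3):
    """Randomly scale the hue, saturation and value channels of a BGR tile."""
    hsv = cv2.cvtColor(tile, cv2.COLOR_BGR2HSV).astype(np.float32)
    gains = 1.0 + np.random.uniform(-1.0, 1.0, 3) * np.array([h_gain, s_gain, v_gain])
    hsv[..., 0] = (hsv[..., 0] * gains[0]) % 180          # OpenCV hue range is [0, 180)
    hsv[..., 1] = np.clip(hsv[..., 1] * gains[1], 0, 255)
    hsv[..., 2] = np.clip(hsv[..., 2] * gains[2], 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```

In a real pipeline the bounding-box labels would be cropped, rotated, and filtered together with each tile.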
step 3: building a multi-scale deep convolutional neural network, the building steps being as follows:
step 3.1: constructing basic grouped convolution modules comprising 17 convolutional layers in total, and connecting a maximum pooling layer after each of the 1st, 2nd, 5th and 8th convolutional layers;
step 3.2: constructing a residual-connection convolution module on the basis of step 3.1. The specific process is as follows: after the maximum pooling layers are added, the maximum pooling output of the third layer is connected directly to the convolution output of the sixth layer, the two are superimposed element-wise (point-to-point), and the result is fed into the next maximum pooling layer; likewise, the maximum pooling output of the seventh layer is connected directly to the convolution output of the tenth layer, the two are superimposed element-wise, and the result is fed into the maximum pooling layer of the 11th layer;
then the output end of each layer of the residual-connection convolution module is connected in turn to a Batch Normalization module and a LeakyReLU activation function module. Batch Normalization normalizes the output data of each channel of the convolution separately; each channel has its own scalar scale and shift parameters, so the convolution output is first driven toward a normal distribution and then scaled and shifted back toward the original feature statistics, which improves both the training speed and the training result;
after the convolution calculation, the normalized output data, whose features now have similar distributions, are passed into an activation function. The activation function converts the linear input into a nonlinear output, adding a nonlinear factor to the network and giving it more expressive power;
step 3.3: connecting a channel layer after the 15th convolutional layer, and connecting a linear activation function module to each output end of the last convolutional layer, to form a multi-scale deep convolutional neural network with a 22-layer structure (a sketch of the basic building blocks follows below);
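Purely for illustration, the following PyTorch sketch shows the two building blocks described in steps 3.1 and 3.2: a grouped convolution followed by batch normalization and a LeakyReLU activation, and a residual stage in which a max-pooling output is added element-wise to a later convolution output. The framework choice (PyTorch), the channel count, the group number, and the block depth are assumptions of this example; this is not the full 22-layer network of FIG. 2.

```python
import torch
import torch.nn as nn


class GroupConvBlock(nn.Module):
    """Grouped 3x3 convolution followed by batch normalization and LeakyReLU."""

    def __init__(self, in_ch, out_ch, groups=4):
        super().__init__()
        # Channel counts must be divisible by the number of groups.
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1,
                              groups=groups, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)        # per-channel scale and shift parameters
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))


class ResidualStage(nn.Module):
    """A max-pooled input is carried around a stack of grouped-conv blocks and
    added element-wise (point-to-point) to their output, as in step 3.2."""

    def __init__(self, channels, num_blocks=3, groups=4):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.blocks = nn.Sequential(
            *[GroupConvBlock(channels, channels, groups) for _ in range(num_blocks)])

    def forward(self, x):
        pooled = self.pool(x)        # e.g. the 3rd-layer max-pooling output
        out = self.blocks(pooled)    # e.g. the 6th-layer convolution output
        return pooled + out          # element-wise superposition fed to the next pooling layer


# Quick shape check with a dummy input (the channel count is an example value).
if __name__ == "__main__":
    stage = ResidualStage(channels=64)
    print(stage(torch.randn(1, 64, 128, 128)).shape)   # torch.Size([1, 64, 64, 64])
```

The full 22-layer network of FIG. 2 would stack several such stages together with the channel layer and the linear output heads of step 3.3.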
the Focal loss function is characterized in that a balance parameter α and a suppression parameter gamma are added on the basis of standard cross entropy loss, so that the unbalanced condition of positive and negative sample data is optimized and samples are not easy to classify through key learning.
The output of the multi-scale depth convolution neural network is processed by adopting a Focal loss function;
in this example, the expression of the Focal loss function is:
FL(p_t) = -α_t (1 - p_t)^γ log(p_t),
where α_t is the balance parameter, used to balance the numbers of positive and negative samples; γ is the suppression parameter, which acts as a penalty term controlling how strongly hard-to-classify samples are mined; and p_t is the predicted value (the predicted probability of the true class). In the invention, α_t is set to 0.25 and γ is set to 2.
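A minimal PyTorch sketch of this loss for a binary prediction is given below; α_t = 0.25 and γ = 2 follow the values stated above, while the framework choice and the small epsilon used for numerical stability are assumptions of the example:

```python
import torch


def focal_loss(pred, target, alpha=0.25, gamma=2.0, eps=1e-7):
    """FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t), averaged over all predictions.

    pred   : predicted probabilities in (0, 1), any shape
    target : binary ground-truth labels (1 = positive), same shape as pred
    """
    pred = pred.clamp(eps, 1.0 - eps)                    # numerical stability
    p_t = torch.where(target == 1, pred, 1.0 - pred)     # probability of the true class
    alpha_t = torch.where(target == 1,                   # balances positive/negative counts
                          torch.full_like(pred, alpha),
                          torch.full_like(pred, 1.0 - alpha))
    # (1 - p_t)^gamma down-weights easy samples so hard ones dominate the loss.
    return (-alpha_t * (1.0 - p_t) ** gamma * torch.log(p_t)).mean()


# Example: three predictions, two positives and one negative.
pred = torch.tensor([0.9, 0.3, 0.2])
target = torch.tensor([1.0, 1.0, 0.0])
print(focal_loss(pred, target))
```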
step 4: inputting the preprocessed training data set into a target detection framework based on the multi-scale deep convolutional neural network for training, to obtain a trained target detection neural network;
in this example, the training process of the network is:
step 4.1: selecting an NVIDIA Titan X GPU as the training device, and setting the parameters as follows: the learning rate is set to 0.001, the weight decay is set to 0.0005, and the momentum is set to 0.9;
step 4.2: selecting two training targets on the classifier of the initialized multi-scale deep convolutional neural network: ships and airplanes;
step 4.3: setting windows of different scales according to the different training targets: 120-meter windows are optimized for finding small ships and airplanes, and 225-meter windows for large ships and commercial airliners; the windows are slid over the large image at these different scales to detect the training targets (see the sketch after step 4.4);
step 4.4: obtaining the weights for each training target after training the multi-scale deep convolutional neural network; the training runs on the NVIDIA Titan X GPU for 2-3 days.
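As an illustration of the multi-scale window sliding of step 4.3, the following sketch converts the physical window sizes into pixel sizes and enumerates window positions over a large image; the 30 cm GSD, the 15% overlap, and the image size used here are assumed example values, not values fixed by the patent:

```python
def window_positions(image_w, image_h, window_m, gsd_m=0.30, overlap=0.15):
    """Enumerate top-left corners of sliding windows of a given physical size.

    window_m : window side length in metres (120 for small ships and airplanes,
               225 for large ships and commercial airliners)
    gsd_m    : ground sample distance in metres per pixel (0.30 is an assumed value)
    overlap  : fraction of overlap between neighbouring windows (assumed value)
    """
    win_px = int(round(window_m / gsd_m))        # 120 m -> 400 px, 225 m -> 750 px at 30 cm GSD
    stride = max(1, int(win_px * (1.0 - overlap)))
    return [(x, y, win_px)
            for y in range(0, max(image_h - win_px, 0) + 1, stride)
            for x in range(0, max(image_w - win_px, 0) + 1, stride)]


# Two detection scales over a hypothetical 20000 x 20000 pixel satellite image.
small_target_windows = window_positions(20000, 20000, window_m=120)
large_target_windows = window_positions(20000, 20000, window_m=225)
```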
step 5: after a number of training iterations, inputting the satellite image set to be detected, performing target detection with the trained target detection neural network, and outputting the recognition results. The specific steps are as follows:
step 5.1: cropping the satellite image to be detected into small tiles with an overlap rate of 15%, and inputting the tiles into the target detection neural network;
step 5.2: stitching the detected tiles back into a complete image according to their position marks, and performing non-maximum suppression (NMS) on the stitched result to obtain the output, as shown in FIG. 3, where FIG. 3(a) shows the recognition result for an automobile sample; FIG. 3(b) for a building footprint sample; FIG. 3(c) for a ship sample; and FIG. 3(d) for an airport sample.
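A minimal NumPy sketch of this stitching and non-maximum suppression step: per-tile detections are shifted into full-image coordinates using the tile offsets, concatenated, and de-duplicated with greedy NMS (the IoU threshold of 0.5 is an assumed example value):

```python
import numpy as np


def shift_to_global(tile_boxes, tile_x, tile_y):
    """Shift tile-local boxes [x1, y1, x2, y2, score] into full-image coordinates."""
    boxes = np.asarray(tile_boxes, dtype=np.float32).copy()
    boxes[:, [0, 2]] += tile_x
    boxes[:, [1, 3]] += tile_y
    return boxes


def nms(boxes, iou_thresh=0.5):
    """Greedy non-maximum suppression on rows of [x1, y1, x2, y2, score]."""
    if len(boxes) == 0:
        return boxes
    boxes = boxes[boxes[:, 4].argsort()[::-1]]            # highest score first
    keep = []
    while len(boxes) > 0:
        best, boxes = boxes[0], boxes[1:]
        keep.append(best)
        if len(boxes) == 0:
            break
        x1 = np.maximum(best[0], boxes[:, 0])
        y1 = np.maximum(best[1], boxes[:, 1])
        x2 = np.minimum(best[2], boxes[:, 2])
        y2 = np.minimum(best[3], boxes[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_best = (best[2] - best[0]) * (best[3] - best[1])
        areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        iou = inter / (area_best + areas - inter + 1e-9)
        boxes = boxes[iou < iou_thresh]                    # drop overlapping duplicates
    return np.stack(keep)


# Detections of the same object from two overlapping tiles, merged and de-duplicated.
tile_a = shift_to_global([[10, 10, 30, 30, 0.90]], tile_x=0, tile_y=0)
tile_b = shift_to_global([[2, 8, 22, 28, 0.80]], tile_x=8, tile_y=2)
print(nms(np.vstack([tile_a, tile_b])))
```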
The technical solution provided by the present invention is described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, which are presented only to assist in understanding the method and its core concepts. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.

Claims (8)

1. A satellite image target detection method based on a multi-scale depth convolution neural network is characterized by comprising the following steps:
step 1: collecting a satellite image training data set, and carrying out sample labeling;
step 2: preprocessing a satellite image training data set;
step 3: building a multi-scale deep convolutional neural network, wherein the building steps are as follows:
step 3.1: constructing basic grouped convolution modules comprising 17 convolutional layers in total, and connecting a pooling layer after each of the 1st, 2nd, 5th and 8th convolutional layers;
step 3.2: constructing a residual-connection convolution module on the basis of step 3.1, and connecting the output end of each layer of the residual-connection convolution module in turn to a batch normalization module and a LeakyReLU activation function module;
step 3.3: connecting a channel layer after the 15th convolutional layer, and connecting a linear activation function module to each output end of the last convolutional layer, to form a multi-scale deep convolutional neural network with a 22-layer structure in total; the output end of the multi-scale deep convolutional neural network is connected to a Focal loss function module;
step 4: inputting the preprocessed training data set into a target detection framework based on the multi-scale deep convolutional neural network for training, to obtain a trained target detection neural network;
step 5: inputting a satellite image set to be detected, performing target detection with the trained target detection neural network, and outputting the recognition results.
2. The method for detecting the satellite image target based on the multi-scale depth convolutional neural network as claimed in claim 1, wherein: the samples labeled in step 1 comprise an automobile sample data set, a building footprint sample data set, an airplane sample data set, a ship sample data set, and an airport sample data set, wherein:
the automobile sample data set adopts the COWC data set; the 15 cm ground sample distance (GSD) images are processed with a Gaussian kernel, and a 3 m bounding box is labeled for each automobile at a 30 cm GSD;
the building footprint sample data set adopts the SpaceNet data set, with samples labeled at a 30 cm GSD.
3. The method for detecting the satellite image target based on the multi-scale depth convolutional neural network as claimed in claim 1, wherein: the preprocessing in step 2 comprises the following specific steps:
step 2.1: rotating and cropping the collected satellite image training data set according to a preset image size and a preset overlap rate;
step 2.2: performing HSV enhancement on the training data blocks obtained after cropping, to form the complete training data set.
4. The method for detecting the satellite image target based on the multi-scale depth convolutional neural network as claimed in claim 1, wherein: the training process of the target detection framework of the multi-scale deep convolutional neural network in the step 4 is as follows:
step 4.1: selecting equipment used in the training process, and setting parameters;
step 4.2: selecting at least two training targets on a classifier of the initialized multi-scale deep convolutional neural network;
step 4.3: setting windows of different scales according to the different training targets, and sliding the windows over the large image at the different scales to detect the training targets;
step 4.4: obtaining the weights for each training target after training the multi-scale deep convolutional neural network.
5. The method for detecting the satellite image target based on the multi-scale depth convolutional neural network as claimed in claim 4, wherein: the training device is an NVIDIA Titan X GPU; the parameter settings include: the learning rate is set to 0.001, the weight decay is set to 0.0005, and the momentum is set to 0.9.
6. The method for detecting the satellite image target based on the multi-scale depth convolutional neural network as claimed in claim 4, wherein: the training targets are ships and airplanes.
7. The method for detecting the satellite image target based on the multi-scale depth convolutional neural network as claimed in claim 1, wherein: the expression of the Focal loss function is as follows:
FL(p_t) = -α_t (1 - p_t)^γ log(p_t),
where α_t is the balance parameter, γ is the suppression parameter, and p_t is the predicted value.
8. The method for detecting the satellite image target based on the multi-scale depth convolutional neural network as claimed in claim 1, wherein: the specific steps of performing target detection on the satellite image set to be detected in the step 5 are as follows:
step 5.1: cropping the satellite image to be detected into small tiles with an overlap rate of 15%, and inputting the tiles into the target detection neural network;
step 5.2: stitching the detected tiles back into a complete image according to their position marks, and performing non-maximum suppression on the stitched image to obtain the output result.
CN201911243932.8A 2019-12-06 2019-12-06 Satellite image target detection method based on multi-scale depth convolution neural network Pending CN110991359A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911243932.8A CN110991359A (en) 2019-12-06 2019-12-06 Satellite image target detection method based on multi-scale depth convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911243932.8A CN110991359A (en) 2019-12-06 2019-12-06 Satellite image target detection method based on multi-scale depth convolution neural network

Publications (1)

Publication Number Publication Date
CN110991359A true CN110991359A (en) 2020-04-10

Family

ID=70090989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911243932.8A Pending CN110991359A (en) 2019-12-06 2019-12-06 Satellite image target detection method based on multi-scale depth convolution neural network

Country Status (1)

Country Link
CN (1) CN110991359A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111619755A (en) * 2020-06-09 2020-09-04 中国船舶科学研究中心 Hull profile design method based on convolutional neural network
CN112560706A (en) * 2020-12-18 2021-03-26 广东电网有限责任公司电力科学研究院 Method and device for identifying water body target of multi-source satellite image
CN112751633A (en) * 2020-10-26 2021-05-04 中国人民解放军63891部队 Broadband spectrum detection method based on multi-scale window sliding
CN115100266A (en) * 2022-08-24 2022-09-23 珠海翔翼航空技术有限公司 Digital airport model construction method, system and equipment based on neural network
CN115546660A (en) * 2022-11-25 2022-12-30 成都国星宇航科技股份有限公司 Target detection method, device and equipment based on video satellite data
CN117726807A (en) * 2024-02-08 2024-03-19 北京理工大学 Infrared small target detection method and system based on scale and position sensitivity

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220703A (en) * 2016-12-29 2017-09-29 恩泊泰(天津)科技有限公司 A kind of deep neural network based on multiple scale detecting
CN108460403A (en) * 2018-01-23 2018-08-28 上海交通大学 The object detection method and system of multi-scale feature fusion in a kind of image
CN109117894A (en) * 2018-08-29 2019-01-01 汕头大学 A kind of large scale remote sensing images building classification method based on full convolutional neural networks
CN110070142A (en) * 2019-04-29 2019-07-30 上海大学 A kind of marine vessel object detection method based on YOLO neural network
CN110147807A (en) * 2019-01-04 2019-08-20 上海海事大学 A kind of ship intelligent recognition tracking
CN110287927A (en) * 2019-07-01 2019-09-27 西安电子科技大学 Based on the multiple dimensioned remote sensing image object detection method with context study of depth

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220703A (en) * 2016-12-29 2017-09-29 恩泊泰(天津)科技有限公司 A kind of deep neural network based on multiple scale detecting
CN108460403A (en) * 2018-01-23 2018-08-28 上海交通大学 The object detection method and system of multi-scale feature fusion in a kind of image
CN109117894A (en) * 2018-08-29 2019-01-01 汕头大学 A kind of large scale remote sensing images building classification method based on full convolutional neural networks
CN110147807A (en) * 2019-01-04 2019-08-20 上海海事大学 A kind of ship intelligent recognition tracking
CN110070142A (en) * 2019-04-29 2019-07-30 上海大学 A kind of marine vessel object detection method based on YOLO neural network
CN110287927A (en) * 2019-07-01 2019-09-27 西安电子科技大学 Based on the multiple dimensioned remote sensing image object detection method with context study of depth

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ADAM VAN ETTEN: "You Only Look Twice: Rapid Multi-Scale Object Detection In Satellite Imagery", 《ARXIV.ORG》 *
ADAM VAN ETTEN: "Satellite Imagery Multiscale Rapid Detection with Windowed Networks", 《2019 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION》 *
邓志鹏 (DENG Zhipeng) et al.: "Object detection in high-resolution remote sensing imagery based on multi-scale deformable feature convolutional networks", 《Acta Geodaetica et Cartographica Sinica (测绘学报)》 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111619755A (en) * 2020-06-09 2020-09-04 中国船舶科学研究中心 Hull profile design method based on convolutional neural network
CN111619755B (en) * 2020-06-09 2021-05-04 中国船舶科学研究中心 Hull profile design method based on convolutional neural network
CN112751633A (en) * 2020-10-26 2021-05-04 中国人民解放军63891部队 Broadband spectrum detection method based on multi-scale window sliding
CN112751633B (en) * 2020-10-26 2022-08-26 中国人民解放军63891部队 Broadband spectrum detection method based on multi-scale window sliding
CN112560706A (en) * 2020-12-18 2021-03-26 广东电网有限责任公司电力科学研究院 Method and device for identifying water body target of multi-source satellite image
CN112560706B (en) * 2020-12-18 2022-03-29 广东电网有限责任公司电力科学研究院 Method and device for identifying water body target of multi-source satellite image
CN115100266A (en) * 2022-08-24 2022-09-23 珠海翔翼航空技术有限公司 Digital airport model construction method, system and equipment based on neural network
CN115100266B (en) * 2022-08-24 2022-12-06 珠海翔翼航空技术有限公司 Method, system and equipment for constructing digital airport model based on neural network
CN115546660A (en) * 2022-11-25 2022-12-30 成都国星宇航科技股份有限公司 Target detection method, device and equipment based on video satellite data
CN115546660B (en) * 2022-11-25 2023-04-07 成都国星宇航科技股份有限公司 Target detection method, device and equipment based on video satellite data
CN117726807A (en) * 2024-02-08 2024-03-19 北京理工大学 Infrared small target detection method and system based on scale and position sensitivity

Similar Documents

Publication Publication Date Title
CN110991359A (en) Satellite image target detection method based on multi-scale depth convolution neural network
CN110570396B (en) Industrial product defect detection method based on deep learning
CN109035251B (en) Image contour detection method based on multi-scale feature decoding
CN108960135B (en) Dense ship target accurate detection method based on high-resolution remote sensing image
CN109949316A (en) A kind of Weakly supervised example dividing method of grid equipment image based on RGB-T fusion
CN111257341B (en) Underwater building crack detection method based on multi-scale features and stacked full convolution network
CN114092832B (en) High-resolution remote sensing image classification method based on parallel hybrid convolutional network
WO2021238019A1 (en) Real-time traffic flow detection system and method based on ghost convolutional feature fusion neural network
CN110569843B (en) Intelligent detection and identification method for mine target
CN113436169A (en) Industrial equipment surface crack detection method and system based on semi-supervised semantic segmentation
CN109409327B (en) RRU module object pose detection method based on end-to-end deep neural network
CN106408030A (en) SAR image classification method based on middle lamella semantic attribute and convolution neural network
CN111507275B (en) Video data time sequence information extraction method and device based on deep learning
US20220315243A1 (en) Method for identification and recognition of aircraft take-off and landing runway based on pspnet network
CN115830471B (en) Multi-scale feature fusion and alignment domain self-adaptive cloud detection method
CN111178177A (en) Cucumber disease identification method based on convolutional neural network
CN114022408A (en) Remote sensing image cloud detection method based on multi-scale convolution neural network
CN103500449A (en) Satellite visible light remote sensing image cloud detection method
CN114581782A (en) Fine defect detection method based on coarse-to-fine detection strategy
CN106645180A (en) Method for checking defects of substrate glass, field terminal and server
CN114782982A (en) Marine organism intelligent detection method based on deep learning
CN113379737A (en) Intelligent pipeline defect detection method based on image processing and deep learning and application
CN114241423A (en) Intelligent detection method and system for river floaters
CN117496426A (en) Precast beam procedure identification method and device based on mutual learning
CN116994066A (en) Tail rope detection system based on improved target detection model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200410