CN110189304B - Optical remote sensing image target on-line rapid detection method based on artificial intelligence - Google Patents
- Publication number
- CN110189304B (application CN201910377070.1A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06N3/045 — Computing arrangements based on biological models; neural networks; combinations of networks
- G06T3/60 — Geometric image transformations in the plane of the image; rotation of whole images or parts thereof
- G06T7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G06T2207/10032 — Image acquisition modality: satellite or aerial image; remote sensing
- G06T2207/20021 — Special algorithmic details: dividing image into blocks, subimages or windows
- G06T2207/20081 — Special algorithmic details: training; learning
- G06T2207/20092 — Special algorithmic details: interactive image processing based on input by user
- G06T2207/20104 — Interactive definition of region of interest [ROI]
Abstract
The invention discloses an artificial-intelligence-based method for the online rapid detection of targets in optical remote sensing images, comprising the following steps: acquiring original optical remote sensing images and establishing an optical remote sensing image target data set; constructing an image feature extraction network and, combined with a decoder, building a rapid target detection network model; training and evaluating the rapid target detection network model on the optical remote sensing image target data set; and detecting targets in an optical remote sensing image under test with the trained model. The method copes with complex external interference and offers high detection accuracy, high detection speed, a small memory footprint, low cost and low power consumption. It is suited to embedded mobile platforms, achieves real-time detection speed and high detection accuracy on such platforms, and can be used for remote sensing target detection on mobile ends such as UAV airborne platforms or satellite platforms.
Description
Technical Field
The invention belongs to the technical field of remote sensing and deep learning, and particularly relates to an optical remote sensing image target online rapid detection method based on artificial intelligence.
Background
With the development of computer vision and parallel image processing, deep learning is finding ever wider application in the military field and in civil fields such as aerospace, scientific exploration, astronomical observation and video surveillance. The world's leading high-resolution satellite imaging systems have reached sub-meter, even 0.1 m, resolution: the optical imaging system of the Jilin-1 high-resolution remote sensing satellite can acquire 150,000 km² of high-resolution imagery every day, and the spaceborne large-capacity panchromatic imaging system of DigitalGlobe's WorldView commercial satellites can capture up to 500,000 km² of 0.5 m resolution imagery per day. As the remote sensing image data accumulated by satellite and UAV platforms grows continuously, target detection and identification on spaceborne or airborne platforms urgently needs a lightweight deep learning model that fits mobile platforms, occupies few resources and computes efficiently.
Current deep learning methods for target detection and identification generally fall into two classes: two-stage deep neural network models (e.g., Faster R-CNN) and one-stage deep neural network models (e.g., YOLO, SSD). A two-stage model first selects candidate regions on the given image, then extracts features from those regions, and finally classifies them with a trained classifier. Both approaches have drawbacks. The sliding-window region selection of two-stage models is untargeted, has high time complexity and produces redundant windows. One-stage models take the whole image as network input and regress box positions and classes directly at the output layer; although they reach high processing speed under GPU acceleration, their computational cost and power consumption per unit time are high, making them ill-suited to embedded mobile terminals. Moreover, both one- and two-stage models occupy large amounts of memory, so real-time performance on embedded platforms is hard to achieve.
Disclosure of Invention
The invention aims to provide a method for the rapid online detection of remote sensing targets on UAV airborne or satellite platforms, using a deep neural network, multi-scale feature maps and target heading angle prediction.
The technical solution for realizing the purpose of the invention is as follows: an optical remote sensing image target on-line rapid detection method based on artificial intelligence comprises the following steps:
step 1, obtaining an original optical remote sensing image, and establishing an optical remote sensing image target data set;
step 2, constructing an image feature extraction network, and constructing a target rapid detection network model by combining a decoder;
step 3, training and evaluating a target rapid detection network model by using the optical remote sensing image target data set;
and 4, carrying out target detection on the optical remote sensing image to be detected by using the trained target rapid detection network model.
Compared with the prior art, the invention has the following notable advantages: 1) multi-scale feature maps take part in prediction, effectively improving target localization and classification accuracy; 2) target heading prediction is cast as a regression problem and introduced into the network model for direct prediction, extracting rotated-target heading angle information within a one-stage deep neural network model; 3) combining the target heading angle information with rectangular annotation boxes gives a new way to build rotated-target data sets; 4) the network model is optimized with expansion (dilated) convolutions, reducing computation and model size and accelerating the calculation; 5) the designed model has a small memory footprint, low computational cost, high computational efficiency and good accuracy, and is suited to embedded mobile platforms.
The present invention is described in further detail below with reference to the attached drawing figures.
Drawings
FIG. 1 is a flow chart of the method for rapidly detecting the target of the optical remote sensing image on line based on artificial intelligence.
FIG. 2 is a schematic diagram of a target heading angle tag according to the present invention.
Detailed Description
With reference to fig. 1, the artificial-intelligence-based method for the online rapid detection of optical remote sensing image targets comprises the following steps:
step 1, obtaining an original optical remote sensing image, and establishing an optical remote sensing image target data set;
step 2, constructing an image feature extraction network, and constructing a target rapid detection network model by combining a decoder;
step 3, training and evaluating a target rapid detection network model by using the optical remote sensing image target data set;
and 4, carrying out target detection on the optical remote sensing image to be detected by using the trained target rapid detection network model.
Further, step 1 obtains an original optical remote sensing image, and establishes an optical remote sensing image target data set, specifically:
1-1, selecting an optical remote sensing image containing an interested area from an original optical remote sensing image;
step 1-2, storing the optical remote sensing image containing the region of interest in a blocking mode to obtain an optical remote sensing image set;
step 1-3, respectively carrying out image preprocessing on each block image, and storing the images before and after processing to expand the optical remote sensing image set;
step 1-4, randomly selecting p% of block images from the expanded optical remote sensing image set as a training set, and taking the rest block images as a verification set; wherein, p% is more than 50%;
and 1-5, acquiring the position, size, category and course angle of an interested target in each block image, and forming an optical remote sensing image target data set by the data and the optical remote sensing image set.
Exemplarily, p% = 75% is preferred.
Illustratively, the areas of interest in step 1-1 include airports, ports and sea areas; the targets of interest in step 1-5 include aircraft and ships.
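The block-wise storage of step 1-2 can be sketched as follows; the 512×512 tile size and the zero-padding of the border are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def tile_image(image, tile_size):
    """Split an (H, W, C) image into square tiles of side `tile_size`,
    zero-padding the right/bottom border so every pixel lands in a tile."""
    h, w = image.shape[:2]
    pad_h = (-h) % tile_size
    pad_w = (-w) % tile_size
    padded = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), mode="constant")
    tiles = []
    for y in range(0, padded.shape[0], tile_size):
        for x in range(0, padded.shape[1], tile_size):
            tiles.append(padded[y:y + tile_size, x:x + tile_size])
    return tiles

# Example: a 1000 x 1500 three-band scene cut into 512 x 512 tiles -> 2 x 3 grid
scene = np.zeros((1000, 1500, 3), dtype=np.uint8)
tiles = tile_image(scene, 512)
```

Each tile is then annotated and stored independently to form the optical remote sensing image set.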
Further, the image preprocessing in step 1-3 includes geometric transformation of the image or changing the contrast of the image or changing the brightness of the image or adding noise.
Further preferably, the image geometric transformation comprises rotation and mirroring; the rotation, counter-clockwise or clockwise, comprises rotating by θ_1°, θ_2°, …, θ_n°, where 0° < θ_i < 360°, i = 1, 2, …, n; the mirroring comprises horizontal mirroring and vertical mirroring; the added noise comprises salt-and-pepper noise and banded noise.
Illustratively, rotation includes rotation by 90 °, rotation by 180 °, and rotation by 270 °.
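The augmentations of step 1-3 (90°/180°/270° rotations, horizontal/vertical mirrors, salt-and-pepper noise) can be sketched as follows; the 1% noise ratio is an illustrative assumption:

```python
import numpy as np

def augment(image, rng):
    """Return the variants described in step 1-3: 90/180/270-degree
    rotations, horizontal/vertical mirrors, and salt-and-pepper noise.
    The 1% noisy-pixel ratio is an illustrative assumption."""
    variants = [np.rot90(image, k) for k in (1, 2, 3)]   # rotations
    variants += [image[:, ::-1], image[::-1, :]]         # horizontal / vertical mirror
    noisy = image.copy()
    mask = rng.random(image.shape[:2]) < 0.01            # pick ~1% of pixels
    noisy[mask] = rng.choice([0, 255], size=int(mask.sum()))[:, None]
    variants.append(noisy)
    return variants

rng = np.random.default_rng(0)
img = np.full((64, 64, 3), 128, dtype=np.uint8)
variants = augment(img, rng)   # 6 augmented copies, stored alongside the original
```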
Further, step 1-5 acquires the position, size, category and heading angle of the target of interest in each block image, specifically: draw the minimum circumscribed rectangle of the target of interest and obtain the rectangle's center-point coordinates (X_c, Y_c) as the target position, its width w and height h as the target size, the corresponding target class number (one number per target class), and the corresponding target heading angle θ. The target heading angle θ is the included angle between the target orientation and the horizontal-right direction, 0° ≤ θ ≤ 360°, as shown in figure 2.
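The heading angle label of figure 2 — the angle between the target orientation and the horizontal-right direction, normalized to [0°, 360°) — can be computed from an orientation vector as follows (a sketch; how the orientation vector is obtained from the annotation is left open here):

```python
import math

def heading_angle(dx, dy):
    """Heading angle θ of an orientation vector (dx, dy), measured from the
    horizontal-right direction and normalized to [0, 360) degrees.
    With image coordinates (row index grows downward), pass dy = -d_row."""
    return math.degrees(math.atan2(dy, dx)) % 360.0

theta_up = heading_angle(0.0, 1.0)   # a target pointing straight "up" (≈ 90°)
```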
Further, step 2, constructing an image feature extraction network, and constructing a target rapid detection network model by combining a decoder, specifically:
the target rapid detection network model comprises an image feature extraction network and a decoder, wherein the image feature extraction network consists of 2 convolutional layers, 7 expansion convolutional structures and 10 expansion convolutional residual error structures, and the decoder is used for predicting the position, size, category and course angle of a target;
the 2 convolutional layers comprise a first convolutional layer and a second convolutional layer, the 7 extended convolution structures comprise a first extended convolution module, a second extended convolution module, a third extended convolution module, a fourth extended convolution module, a fifth extended convolution module, a sixth extended convolution module and a seventh extended convolution module, and the 10 extended convolution residual structures comprise a first extended convolution residual module, a second extended convolution residual module, a third extended convolution residual module, a fourth extended convolution residual module, a fifth extended convolution residual module, a sixth extended convolution residual module, a seventh extended convolution residual module, an eighth extended convolution residual module, a ninth extended convolution residual module and a tenth extended convolution residual module;
the optical remote sensing image in the optical remote sensing image set is used as the input of the first convolution layer; a feature map output after the first convolutional layer, the first extended convolution module, the second extended convolution module, the first extended convolution residual module, the second extended convolution residual module, the third extended convolution residual module, the fourth extended convolution module, the fifth extended convolution residual module, the sixth extended convolution residual module, the fifth extended convolution module, the seventh extended convolution residual module, the sixth extended convolution module, the eighth extended convolution residual module, the ninth extended convolution residual module, the tenth extended convolution residual module, the seventh extended convolution module and the second convolutional layer are sequentially cascaded serves as the input of a decoder; the output of the seventh expanded convolution residual module is simultaneously cascaded with the eighth expanded convolution module, the output of the eighth expanded convolution module is fused with the output of the seventh expanded convolution module, and then the feature map output after the eighth expanded convolution residual module is sequentially cascaded with the ninth expanded convolution module and the third convolution layer is also used as the input of a decoder, and the decoder predicts the position, the size, the category and the course angle of a target to realize the rapid detection of the target.
Furthermore, the expansion convolution module comprises an input layer, a first 1 × 1 convolution layer, a first 3 × 3 convolution layer, a second 1 × 1 convolution layer and an output layer; and the expansion convolution residual error module connects the input layer with the output layer to obtain the output layer of the expansion convolution residual error module on the basis of the expansion convolution module.
Illustratively, the decoder employs a YOLO v3 decoder.
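The expansion (dilated) convolution underlying the modules above can be sketched in one dimension; this illustrative NumPy sketch shows only the dilation mechanism and the identity shortcut, not the patent's 1×1/3×3 module stack:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Same'-padded 1-D dilated convolution: kernel taps are spaced
    `dilation` apart, enlarging the receptive field at no extra
    parameter cost."""
    k = len(kernel)
    span = dilation * (k - 1)                 # receptive field minus one
    xp = np.pad(x, (span // 2, span - span // 2))
    return np.array([sum(kernel[t] * xp[i + t * dilation] for t in range(k))
                     for i in range(len(x))])

def dilated_residual_block(x, kernel, dilation):
    """Dilated-convolution residual module sketch: the input is added
    back to the convolution output (identity shortcut)."""
    return x + dilated_conv1d(x, kernel, dilation)

x = np.arange(8, dtype=float)
# Identity kernel: the convolution passes x through, so the block doubles it
y = dilated_residual_block(x, np.array([0.0, 1.0, 0.0]), dilation=2)
```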
Further, step 3, training and evaluating the target rapid detection network model by using the optical remote sensing image target data set, specifically:
3-1, pre-training the target rapid detection network model by using a COCO data set to obtain a pre-training model;
step 3-2, initializing a target rapid detection network parameter and a hyper-parameter by using a pre-training model, and inputting an image of the training set in the target rapid detection network model for forward propagation so as to calculate target prediction information and a loss function value; the prediction information of each target corresponds to a prediction frame, and the target prediction information comprises the position, the size, the category and the course angle of the target;
wherein the loss function (reconstructed here from the variable definitions that follow) is:

L = λ_coord Σ_{i=1}^{S} Σ_{j=1}^{B} 1_{ij}^{obj} [(x_j − x̂_j)² + (y_j − ŷ_j)² + (w_j − ŵ_j)² + (h_j − ĥ_j)²] + λ_θ Σ_{i=1}^{S} Σ_{j=1}^{B} 1_{ij}^{obj} (θ_j − θ̂_j)² + λ_obj Σ_{i=1}^{S} Σ_{j=1}^{B} 1_{ij}^{obj} (c_j − ĉ_j)² + λ_noobj Σ_{i=1}^{S} Σ_{j=1}^{B} 1_{ij}^{noobj} (c_j − ĉ_j)² + Σ_{i=1}^{S} 1_i^{obj} Σ_c (p_i(c) − p̂_i(c))²

In the formula, 1_{ij}^{obj} indicates that a target is present in the j-th prediction box of the i-th block and 1_{ij}^{noobj} that no target is predicted in the j-th prediction box of the i-th block; λ_coord, λ_obj, λ_noobj and λ_θ are the weight terms of the respective parts of the loss function; S is the number of grid cells and B the number of rotated bounding boxes per grid cell; (x_j, y_j, w_j, h_j, θ_j) are the predicted center abscissa, center ordinate, box width, box height and heading angle of the target in the j-th detection box of the i-th block, and (x̂_j, ŷ_j, ŵ_j, ĥ_j, θ̂_j) are the corresponding sample ground-truth values; c_j is the confidence score and ĉ_j is the intersection over union of the predicted bounding box with the true bounding box; p_i(c) is the probability that the object contained in the i-th prediction box belongs to class c, and p̂_i(c) is its ground-truth value;
3-3, adjusting the network weight parameters through back propagation to reduce the loss function value;
step 3-4, repeating the step 3-2 to the step 3-3 until the maximum iteration times or the loss function value reaches the training target requirement;
and 3-5, evaluating the target rapid detection network performance and the occupied memory on the hardware platform by using the verification set.
Illustratively, λ_coord = 5, λ_obj = 1, λ_noobj = 0.5 and λ_θ = 2.5.
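With the example weights above, the weighted sum-of-squares loss can be sketched as follows; the flat per-box layout (x, y, w, h, θ, confidence, class probabilities) and the plain width/height error terms are simplifying assumptions of this sketch:

```python
import numpy as np

def detection_loss(pred, truth, obj_mask,
                   l_coord=5.0, l_obj=1.0, l_noobj=0.5, l_theta=2.5):
    """Sketch of the weighted sum-of-squares loss. `pred`/`truth` hold one
    row per box: (x, y, w, h, theta, conf, class probs...); `obj_mask`
    marks boxes responsible for a target. Heading error wraps mod 360."""
    d = pred - truth
    coord = np.sum(obj_mask[:, None] * d[:, 0:4] ** 2)      # x, y, w, h terms
    dtheta = (d[:, 4] + 180.0) % 360.0 - 180.0              # wrapped angle error
    theta = np.sum(obj_mask * dtheta ** 2)
    conf = d[:, 5] ** 2
    obj = np.sum(obj_mask * conf)                           # confidence, target boxes
    noobj = np.sum((1 - obj_mask) * conf)                   # confidence, empty boxes
    cls = np.sum(obj_mask[:, None] * d[:, 6:] ** 2)         # class-probability term
    return l_coord * coord + l_theta * theta + l_obj * obj + l_noobj * noobj + cls

pred = np.array([[0.5, 0.5, 0.2, 0.1, 90.0, 1.0, 1.0, 0.0]])
truth = pred.copy()
mask = np.array([1.0])
```

A perfect prediction gives zero loss; a 10° heading error alone contributes λ_θ · 10² = 250.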
Further, step 4, performing target detection on the optical remote sensing image to be detected by using the trained target rapid detection network model, specifically:
step 4-1, partitioning the optical remote sensing image under test into blocks, ensuring in the process that every complete target to be detected falls entirely within some block image;
step 4-2, inputting the block images into the trained target rapid detection network model, obtaining a plurality of pieces of preliminary target information, and drawing a corresponding prediction frame according to each piece of preliminary target information; the preliminary target information comprises target position, size, category and course angle information;
and 4-3, screening the prediction frame by using a non-maximum value inhibition method, wherein the preliminary target information corresponding to the screened prediction frame is the target information of the optical remote sensing image to be detected.
Exemplarily, in step 4-1 the optical remote sensing image under test is partitioned into equal square blocks, i.e. w' = h', where w' is the block-image width and h' the block-image height; step 4-2 then obtains the corresponding number of pieces of preliminary target information.
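The non-maximum suppression of step 4-3 can be sketched for axis-aligned boxes as follows; the patent's prediction boxes additionally carry a heading angle, which this minimal sketch ignores:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Non-maximum suppression over (x1, y1, x2, y2) boxes: keep the
    highest-scoring box, drop boxes overlapping it above iou_thresh,
    and repeat until no candidates remain."""
    order = np.argsort(scores)[::-1]          # indices by descending score
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = ((boxes[order[1:], 2] - boxes[order[1:], 0]) *
                  (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + area_o - inter)
        order = order[1:][iou <= iou_thresh]  # survivors for the next round
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores, 0.5)   # the second box heavily overlaps the first
```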
The method for rapidly detecting the remote sensing optical image target on line based on the artificial intelligence can cope with complex external interference, can obtain real-time detection speed and higher detection precision on an embedded platform, and can be used for detecting the remote sensing target of a mobile end such as an unmanned aerial vehicle airborne platform or a satellite platform.
Claims (8)
1. An optical remote sensing image target on-line rapid detection method based on artificial intelligence is characterized by comprising the following steps:
step 1, obtaining an original optical remote sensing image, and establishing an optical remote sensing image target data set;
step 2, constructing an image feature extraction network, and constructing a target rapid detection network model by combining a decoder;
step 3, training and evaluating a target rapid detection network model by using the optical remote sensing image target data set; the method specifically comprises the following steps:
3-1, pre-training the target rapid detection network model by using a COCO data set to obtain a pre-training model;
step 3-2, initializing a target rapid detection network parameter and a hyper-parameter by using a pre-training model, and inputting an image of a training set in the target rapid detection network model for forward propagation to calculate target prediction information and a loss function value; the prediction information of each target corresponds to a prediction frame, and the target prediction information comprises the position, the size, the category and the course angle of the target;
wherein the loss function (reconstructed here from the variable definitions that follow) is:

L = λ_coord Σ_{i=1}^{S} Σ_{j=1}^{B} 1_{ij}^{obj} [(x_j − x̂_j)² + (y_j − ŷ_j)² + (w_j − ŵ_j)² + (h_j − ĥ_j)²] + λ_θ Σ_{i=1}^{S} Σ_{j=1}^{B} 1_{ij}^{obj} (θ_j − θ̂_j)² + λ_obj Σ_{i=1}^{S} Σ_{j=1}^{B} 1_{ij}^{obj} (c_j − ĉ_j)² + λ_noobj Σ_{i=1}^{S} Σ_{j=1}^{B} 1_{ij}^{noobj} (c_j − ĉ_j)² + Σ_{i=1}^{S} 1_i^{obj} Σ_c (p_i(c) − p̂_i(c))²

In the formula, 1_{ij}^{obj} indicates that a target is present in the j-th prediction box of the i-th block and 1_{ij}^{noobj} that no target is predicted in the j-th prediction box of the i-th block; λ_coord, λ_obj, λ_noobj and λ_θ are the weight terms of the respective parts of the loss function; S is the number of grid cells and B the number of rotated bounding boxes per grid cell; (x_j, y_j, w_j, h_j, θ_j) are the predicted center abscissa, center ordinate, box width, box height and heading angle of the target in the j-th detection box of the i-th block, and (x̂_j, ŷ_j, ŵ_j, ĥ_j, θ̂_j) are the corresponding sample ground-truth values; c_j is the confidence score and ĉ_j is the intersection over union of the predicted bounding box with the true bounding box; p_i(c) is the probability that the object contained in the i-th prediction box belongs to class c, and p̂_i(c) is its ground-truth value;
3-3, adjusting the network weight parameters through back propagation to reduce the loss function value;
step 3-4, repeating the step 3-2 to the step 3-3 until the maximum iteration times or the loss function value reaches the training target requirement;
3-5, evaluating the network performance of the target rapid detection and the occupied memory of the target rapid detection on a hardware platform by using a verification set;
step 4, carrying out target detection on the optical remote sensing image to be detected by utilizing the trained target rapid detection network model; the method specifically comprises the following steps:
step 4-1, partitioning the optical remote sensing image under test into blocks, ensuring in the process that every complete target to be detected falls entirely within some block image;
step 4-2, inputting the block images into the trained target rapid detection network model, obtaining a plurality of pieces of preliminary target information, and drawing a corresponding prediction frame according to each piece of preliminary target information; the preliminary target information comprises target position, size, category and course angle information;
and 4-3, screening the prediction frame by using a non-maximum value inhibition method, wherein the preliminary target information corresponding to the screened prediction frame is the target information of the optical remote sensing image to be detected.
2. The method for rapidly detecting the optical remote sensing image target on line based on the artificial intelligence as claimed in claim 1, wherein the step 1 of obtaining the original optical remote sensing image and establishing the optical remote sensing image target data set specifically comprises:
1-1, selecting an optical remote sensing image containing an interested area from an original optical remote sensing image;
step 1-2, storing the optical remote sensing image containing the region of interest in a blocking mode to obtain an optical remote sensing image set;
step 1-3, respectively carrying out image preprocessing on each block image, and storing the images before and after processing to expand the optical remote sensing image set;
step 1-4, randomly selecting p% of block images from the expanded optical remote sensing image set as a training set, and taking the rest block images as a verification set; wherein, p% is more than 50%;
and 1-5, acquiring the position, size, category and course angle of an interested target in each block image, and forming an optical remote sensing image target data set by the data and the optical remote sensing image set.
3. The method for rapidly detecting the target of the optical remote sensing image on line based on the artificial intelligence as claimed in claim 2, wherein the regions of interest in step 1-1 comprise airports, ports and sea areas, and the targets of interest in step 1-5 comprise aircraft and ships.
4. The method for rapidly detecting the target of the optical remote sensing image on line based on the artificial intelligence as claimed in claim 3, wherein the image preprocessing of the step 1-3 comprises image geometric transformation or image contrast change or image brightness change or noise addition.
5. The method for rapidly detecting the optical remote sensing image target on line based on the artificial intelligence as claimed in claim 4, wherein the image geometric transformation comprises rotation and mirroring; the rotation, counter-clockwise or clockwise, comprises rotating by θ_1°, θ_2°, …, θ_n°, where 0° < θ_i < 360°, i = 1, 2, …, n; the mirroring comprises horizontal mirroring and vertical mirroring; and the added noise comprises salt-and-pepper noise and banded noise.
6. The method for rapidly detecting the target of the optical remote sensing image on line based on the artificial intelligence as claimed in claim 5, wherein step 1-5 of obtaining the position, size, category and heading angle of the target of interest in each block image is as follows: draw the minimum circumscribed rectangle of the target of interest and obtain the rectangle's center-point coordinates (X_c, Y_c) as the target position, its width w and height h as the target size, the corresponding target class number (one number per target class), and the corresponding target heading angle θ, which is the included angle between the target orientation and the horizontal-right direction, 0° ≤ θ ≤ 360°.
7. The method for rapidly detecting the optical remote sensing image target on line based on the artificial intelligence as claimed in claim 6, wherein the step 2 of building the image feature extraction network and building a target rapid detection network model by combining a decoder specifically comprises the following steps:
the target rapid detection network model comprises an image feature extraction network and a decoder, wherein the image feature extraction network consists of 2 convolutional layers, 7 expansion convolutional structures and 10 expansion convolutional residual error structures, and the decoder is used for predicting the position, size, category and course angle of a target;
the 2 convolutional layers comprise a first convolutional layer and a second convolutional layer, the 7 extended convolution structures comprise a first extended convolution module, a second extended convolution module, a third extended convolution module, a fourth extended convolution module, a fifth extended convolution module, a sixth extended convolution module and a seventh extended convolution module, and the 10 extended convolution residual structures comprise a first extended convolution residual module, a second extended convolution residual module, a third extended convolution residual module, a fourth extended convolution residual module, a fifth extended convolution residual module, a sixth extended convolution residual module, a seventh extended convolution residual module, an eighth extended convolution residual module, a ninth extended convolution residual module and a tenth extended convolution residual module;
the optical remote sensing images in the optical remote sensing image set serve as the input of the first convolutional layer; the feature map output after sequentially cascading the first convolutional layer, the first expansion convolution module, the second expansion convolution module, the first expansion convolution residual module, the second expansion convolution residual module, the third expansion convolution residual module, the fourth expansion convolution module, the fifth expansion convolution residual module, the sixth expansion convolution residual module, the fifth expansion convolution module, the seventh expansion convolution residual module, the sixth expansion convolution module, the eighth expansion convolution residual module, the ninth expansion convolution residual module, the tenth expansion convolution residual module, the seventh expansion convolution module and the second convolutional layer serves as one input of the decoder; the output of the seventh expansion convolution residual module is simultaneously cascaded with the eighth expansion convolution module, the output of the eighth expansion convolution module is fused with the output of the seventh expansion convolution module, and then the feature map output after sequentially cascading the eighth expansion convolution residual module, the ninth expansion convolution module and the third convolutional layer also serves as an input of the decoder; the decoder predicts the position, size, category and course angle of the target to realize rapid target detection.
8. The optical remote sensing image target online rapid detection method based on artificial intelligence of claim 7, wherein the expansion convolution module comprises an input layer, a first 1×1 convolutional layer, a first 3×3 convolutional layer, a second 1×1 convolutional layer and an output layer;
and the expansion convolution residual module, on the basis of the expansion convolution module, connects the input layer to the output layer to obtain the output layer of the expansion convolution residual module.
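The 1×1 → dilated 3×3 → 1×1 bottleneck of claim 8 can be sketched through its spatial-size arithmetic (a rough illustration; the dilation rate of 2 and the "same" padding on the 3×3 layer are assumptions the claim does not specify):

```python
def conv2d_out(size, kernel, stride=1, padding=0, dilation=1):
    """Spatial output size of a 2-D convolution on a square input."""
    effective_k = dilation * (kernel - 1) + 1   # dilated kernel footprint
    return (size + 2 * padding - effective_k) // stride + 1

def expansion_module_out(size, dilation=2):
    """1x1 -> dilated 3x3 -> 1x1 bottleneck sketch of claim 8.
    With 'same' padding on the dilated 3x3, the spatial size is preserved,
    which is what lets the residual variant add the input to the output."""
    s = conv2d_out(size, 1)                                    # first 1x1
    s = conv2d_out(s, 3, padding=dilation, dilation=dilation)  # dilated 3x3
    s = conv2d_out(s, 1)                                       # second 1x1
    return s
```

Because the module leaves the spatial size unchanged, the residual connection of the expansion convolution residual module reduces to an element-wise sum of the input and output feature maps (assuming matching channel counts).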
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910377070.1A CN110189304B (en) | 2019-05-07 | 2019-05-07 | Optical remote sensing image target on-line rapid detection method based on artificial intelligence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110189304A CN110189304A (en) | 2019-08-30 |
CN110189304B true CN110189304B (en) | 2022-08-12 |
Family
ID=67715837
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910377070.1A Active CN110189304B (en) | 2019-05-07 | 2019-05-07 | Optical remote sensing image target on-line rapid detection method based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110189304B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111352415B (en) * | 2019-10-11 | 2020-12-29 | 西安科技大学 | Coal mine snake-shaped detection robot positioning method |
CN110765951B (en) * | 2019-10-24 | 2023-03-10 | 西安电子科技大学 | Remote sensing image airplane target detection method based on bounding box correction algorithm |
CN111476756B (en) * | 2020-03-09 | 2024-05-14 | 重庆大学 | Method for identifying casting DR image loosening defect based on improved YOLOv network model |
CN111797676B (en) * | 2020-04-30 | 2022-10-28 | 南京理工大学 | High-resolution remote sensing image target on-orbit lightweight rapid detection method |
CN111833329A (en) * | 2020-07-14 | 2020-10-27 | 中国电子科技集团公司第五十四研究所 | Manual evidence judgment auxiliary method for large remote sensing image |
CN112329550A (en) * | 2020-10-16 | 2021-02-05 | 中国科学院空间应用工程与技术中心 | Weak supervision learning-based disaster-stricken building rapid positioning evaluation method and device |
CN112528948A (en) * | 2020-12-24 | 2021-03-19 | 山东仕达思生物产业有限公司 | Method and equipment for detecting gardnerella rapidly labeled and based on regional subdivision and storage medium |
CN112946684B (en) * | 2021-01-28 | 2023-08-11 | 浙江大学 | Electromagnetic remote sensing intelligent imaging system and method based on optical target information assistance |
CN112861720B (en) * | 2021-02-08 | 2024-05-14 | 西北工业大学 | Remote sensing image small sample target detection method based on prototype convolutional neural network |
CN113221775B (en) * | 2021-05-19 | 2022-04-26 | 哈尔滨工程大学 | Method for detecting target remote sensing image with single-stage arbitrary quadrilateral regression frame large length-width ratio |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016165082A1 (en) * | 2015-04-15 | 2016-10-20 | 中国科学院自动化研究所 | Image stego-detection method based on deep learning |
CN109271856B (en) * | 2018-08-03 | 2021-09-03 | 西安电子科技大学 | Optical remote sensing image target detection method based on expansion residual convolution |
- 2019-05-07: application CN201910377070.1A filed in China (CN); granted as patent CN110189304B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN110189304A (en) | 2019-08-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110189304B (en) | Optical remote sensing image target on-line rapid detection method based on artificial intelligence | |
CN111797676B (en) | High-resolution remote sensing image target on-orbit lightweight rapid detection method | |
CN108596101B (en) | Remote sensing image multi-target detection method based on convolutional neural network | |
Cao et al. | Adversarial objects against lidar-based autonomous driving systems | |
CN112884760B (en) | Intelligent detection method for multi-type diseases of near-water bridge and unmanned ship equipment | |
CN109255286B (en) | Unmanned aerial vehicle optical rapid detection and identification method based on deep learning network framework | |
CN116229295A (en) | Remote sensing image target detection method based on fusion convolution attention mechanism | |
Javadi et al. | Vehicle detection in aerial images based on 3D depth maps and deep neural networks | |
CN114842681A (en) | Airport scene flight path prediction method based on multi-head attention mechanism | |
CN114565842A (en) | Unmanned aerial vehicle real-time target detection method and system based on Nvidia Jetson embedded hardware | |
Wang et al. | Toward structural learning and enhanced YOLOv4 network for object detection in optical remote sensing images | |
CN117115686A (en) | Urban low-altitude small unmanned aerial vehicle detection method and system based on improved YOLOv7 | |
Liu et al. | Dlc-slam: A robust lidar-slam system with learning-based denoising and loop closure | |
CN114943870A (en) | Training method and device of line feature extraction model and point cloud matching method and device | |
Pei et al. | Small target detection with remote sensing images based on an improved YOLOv5 algorithm | |
Ozaki et al. | DNN-based self-attitude estimation by learning landscape information | |
CN109657679B (en) | Application satellite function type identification method | |
Moraguez et al. | Convolutional neural network for detection of residential photovoltalc systems in satellite imagery | |
Neloy et al. | Alpha-N-V2: Shortest path finder automated delivery robot with obstacle detection and avoiding system | |
CN115984443A (en) | Space satellite target image simulation method of visible light camera | |
CN115410102A (en) | SAR image airplane target detection method based on combined attention mechanism | |
Feng et al. | Lightweight detection network for arbitrary-oriented vehicles in UAV imagery via precise positional information encoding and bidirectional feature fusion | |
CN115471763A (en) | SAR image airplane target identification method based on semantic and textural feature fusion | |
CN116758363A (en) | Weight self-adaption and task decoupling rotary target detector | |
CN115115936A (en) | Remote sensing image target detection method in any direction based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||