CN114494872A - Embedded lightweight remote sensing target detection system - Google Patents
- Publication number
- CN114494872A CN114494872A CN202210081517.2A CN202210081517A CN114494872A CN 114494872 A CN114494872 A CN 114494872A CN 202210081517 A CN202210081517 A CN 202210081517A CN 114494872 A CN114494872 A CN 114494872A
- Authority
- CN
- China
- Prior art keywords
- target detection
- convolution
- model
- remote sensing
- embedded
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 73
- 238000013136 deep learning model Methods 0.000 claims abstract description 16
- 238000010606 normalization Methods 0.000 claims description 15
- 238000013507 mapping Methods 0.000 claims description 8
- 230000009466 transformation Effects 0.000 claims description 6
- 239000011159 matrix material Substances 0.000 claims description 3
- 238000000034 method Methods 0.000 description 11
- 238000010586 diagram Methods 0.000 description 9
- 238000013461 design Methods 0.000 description 8
- 238000012549 training Methods 0.000 description 8
- 238000012545 processing Methods 0.000 description 7
- 238000012360 testing method Methods 0.000 description 7
- 238000004364 calculation method Methods 0.000 description 5
- 230000008569 process Effects 0.000 description 5
- 238000013135 deep learning Methods 0.000 description 4
- 238000013528 artificial neural network Methods 0.000 description 3
- 230000006870 function Effects 0.000 description 3
- 238000012986 modification Methods 0.000 description 3
- 230000004048 modification Effects 0.000 description 3
- 238000005070 sampling Methods 0.000 description 3
- 230000004913 activation Effects 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 2
- 238000013527 convolutional neural network Methods 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 230000008034 disappearance Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000005457 optimization Methods 0.000 description 2
- 238000011897 real-time detection Methods 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 230000003213 activating effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000007796 conventional method Methods 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 238000003062 neural network model Methods 0.000 description 1
- 238000011176 pooling Methods 0.000 description 1
- 230000002265 prevention Effects 0.000 description 1
- 238000011002 quantification Methods 0.000 description 1
- 238000012706 support-vector machine Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an embedded lightweight remote sensing target detection system, comprising an embedded device and a target detection model deployed on the embedded device. The target detection model is a deep learning model, trained on a GPU platform to obtain its parameters and then deployed to the embedded device. The target detection model includes: a structural re-parameterization module for absorbing redundant residual and convolution structures in the deep learning model into a backbone network; and an improved residual structure module for matching input and output dimensions through an improved residual connection mode. Based on structural re-parameterization, the system increases the use of residual connections and improves the residual connection mode used when the input and output dimensions differ, reducing the FLOPs of the model, improving its detection accuracy while maintaining its inference speed, so that the embedded device meets the real-time requirement of remote sensing target detection under limited computing capacity and working efficiency is improved.
Description
Technical Field
The invention relates to the fields of intelligent interpretation of remote sensing images, deep learning and embedded systems, and in particular to an embedded lightweight remote sensing target detection system.
Background
Currently, with the continuous development of high-resolution remote sensing imaging technology, remote sensing image data are widely applied in fields such as security monitoring, resource exploration, disaster prevention and relief, and military reconnaissance. Remote sensing data acquisition breaks through time and regional limitations, provides large data volumes, and saves cost. At the same time, however, only a few specific targets in a large-format remote sensing image are usually of interest, so detecting objects of interest in such images is a hot topic in digital image processing. A large number of algorithms based on deep neural networks have pushed the target detection accuracy of remote sensing images to a high level. However, high-precision algorithms require large amounts of memory and have high time complexity, and are therefore highly dependent on high-performance graphics processors. In practical engineering applications of remote sensing target detection, data processing is currently concentrated at ground measurement and control stations. On the one hand, equipping the processing platform with many graphics processors consumes space and resources, so an embedded system is the more general platform; on the other hand, under the limited computing power and memory space of an embedded system, the task still requires high detection accuracy and running speed. Although traditional target detection methods have low computational cost and run fast, their detection accuracy falls far short of the task requirements. Therefore, porting deep-learning-based remote sensing target detection algorithms to embedded systems is a research area of practical value.
In image target detection, the traditional approach extracts local region features with hand-designed operators such as SIFT and HOG, and then classifies them with classifiers such as support vector machines. With the development of deep learning, methods based on convolutional neural networks have continually emerged whose detection accuracy and speed exceed those of traditional methods, gradually becoming the mainstream and frontier of research. However, remote sensing images have their own particularities compared with the more widely studied natural images. In remote sensing images, attention often falls on sensitive targets such as airplanes, oil tanks and ships, and on sensitive ground objects such as airports and ports. Consequently, problems such as large variation in target size and unbalanced class labelling of targets are particularly prominent in remote sensing target detection and remain to be solved. On the other hand, remote sensing has real-time requirements: image processing must be performed in real time on terminal equipment such as satellites and aircraft. In practice, the storage resources and computing power of such terminals are limited, so an overly complex and large model cannot be deployed and cannot meet the real-time requirement.
Disclosure of Invention
In view of the above problems, the present invention provides an embedded lightweight remote sensing target detection system, which addresses the problems that current embedded systems have limited computing capability for target detection, cannot meet real-time requirements, and have low working efficiency.
The embodiment of the invention provides an embedded lightweight remote sensing target detection system, comprising: an embedded device and a target detection model deployed on the embedded device;
the target detection model adopts a deep learning model, is trained on a GPU platform to obtain parameters and then is deployed to the embedded equipment; the target detection model includes:
the structural re-parameterization module is used for absorbing redundant residual and convolution structures in the deep learning model into a backbone network;
and the improved residual structure module is used for making the input and output dimensions the same through an improved residual connection mode.
Further, a C++-based TensorRT library is used to recompile the structure of the target detection model, convert the trained parameters, and deploy them on the embedded device for operation.
Further, the structural re-parameterization module is specifically configured to convert the residual connections of the 1 × 1 convolution and identity mapping in the deep learning model into a 3 × 3 convolution form.
Further, converting the residual connection of the 1 × 1 convolution and the identity mapping in the deep learning model into a 3 × 3 convolution form includes:
converting the 1 × 1 convolution and the identity mapping into 3 × 3 convolutions, and integrating the batch normalization operation into the convolution parameters by a linear transformation, where the batch normalization operation is

bn(M, μ, σ, γ, β) = (M − μ) · γ / σ + β

wherein bn(·) is the batch normalization operation; M is a feature map extracted from the remote sensing image; μ, σ, γ and β are the batch normalization parameters; so that

bn(M × W, μ, σ, γ, β) = M × W′ + b′, with W′ = (γ / σ) · W and b′ = β − (γ / σ) · μ;

the batch normalization is thereby absorbed into the convolution kernel.
Further, the improved residual structure module is specifically configured to, when the output dimension is smaller than the input dimension, directly truncate a feature map with the same dimension as the output and add it to the convolution result.
Further, the improved residual structure module is specifically configured to, when the output dimension is greater than the input dimension, copy and stack the input feature map along the channel dimension to obtain a feature map with the same dimension as the convolution output, and add it to the convolution output to complete the residual connection.
The technical scheme provided by the embodiment of the invention has at least the following beneficial effects:
the embodiment of the invention provides an embedded light remote sensing target detection system, which comprises: the system comprises an embedded device and a target detection model deployed on the embedded device; the target detection model adopts a deep learning model, is trained on a GPU platform to obtain parameters and then is deployed to the embedded equipment; the target detection model includes: the structure parameterization module is used for absorbing redundant residual errors and convolution structures in the deep learning model into a backbone network; and the improved residual error structure module is used for realizing the same input and output dimensionality through an improved residual error connection mode. According to the embedded light remote sensing target detection system, light design is carried out on a deep learning model, and target detection is carried out on a remote sensing image by using the embedded system. The method has the advantages that a network design concept based on depth separable convolution and structural weight parameterization is explored, the influence of floating point calculation numbers FLOPs, memory access capacity and other factors on the model reasoning speed is considered, the use of residual connection is increased based on the structural weight parameterization, the residual connection mode when the input dimension and the output dimension are different is improved, the FLOPs of the model are reduced, the model detection precision is improved, the reasoning speed of the model is kept, a lightweight target detection model with excellent comprehensive performance is obtained, the embedded device meets the real-time performance and the accuracy of remote sensing target detection under limited computing capacity, and the working efficiency is improved.
Furthermore, to meet the real-time requirement of an on-board intelligent processing system, the lightweight target detection model can be deployed on a terminal embedded system using TensorRT, realizing real-time target detection on the embedded system.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic diagram of the embedded lightweight remote sensing target detection system detecting oil tank and airplane targets according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an overall architecture of a target detection model according to an embodiment of the present invention;
fig. 3a is a schematic diagram of the residual structure in the training stage according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of the residual structure in the testing stage according to an embodiment of the present invention;
FIG. 4a is a schematic diagram of residual connection when the input and output dimensions are the same;
FIG. 4b is a schematic diagram of the residual connection mode when the input dimension is larger than the output dimension according to an embodiment of the present invention;
FIG. 4c is a schematic diagram of the residual connection mode when the input dimension is smaller than the output dimension according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a deployment process of the TensorRT according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Referring to fig. 1, an embodiment of the present invention provides an embedded lightweight remote sensing target detection system, which applies lightweight design to a deep learning model and uses an embedded system to perform target detection on remote sensing images, for example the identification of targets such as oil tanks, airplanes, ships and dams. By exploring a network design concept based on depthwise separable convolution and structural re-parameterization, and by considering the influence of factors such as floating-point operations (FLOPs) and memory access volume on inference speed, a lightweight target detection model capable of real-time detection on a terminal embedded system is designed.
This embedded lightweight remote sensing target detection system comprises an embedded device and a target detection model deployed on the embedded device; the target detection model is a deep learning model, trained on a GPU platform to obtain its parameters and then deployed to the embedded device; the target detection model includes:
the structural re-parameterization module is used for absorbing redundant residual and convolution structures in the deep learning model into a backbone network;
and the improved residual structure module is used for making the input and output dimensions the same through an improved residual connection mode.
As shown in fig. 2, the overall architecture of the complete target detection model is as follows: in the training stage, the model fully models features with a complex multi-kernel, multi-residual structure; in the testing stage, the redundant residual and convolution structures are absorbed into the backbone network by structural re-parameterization and inference is performed along a single path, improving the inference speed of the network.
In the training stage, the network module adopts a multi-branch structure, extracting features simultaneously with 3 × 3 convolution, 1 × 1 convolution and an identity transformation, and improves the feature expression capability of the model with residual connections. In the testing stage, the structure of the trained model is adjusted by structural re-parameterization: the 1 × 1 convolution and the identity transformation are absorbed into the 3 × 3 convolution with the mathematical operation unchanged, so that the test model becomes a single path of information flow. Such a structure maintains a fast inference speed. Meanwhile, the residual connection mode used when the input and output dimensions differ is modified: the 1 × 1 convolution is omitted and a stacked-feature-layer scheme is adopted, improving inference speed. For the embedded system, the model structure is recompiled using the C++-based TensorRT library and the trained parameters are converted, further improving the running speed on the embedded system and realizing real-time detection.
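The branch absorption described above can be checked numerically. Below is a minimal NumPy sketch (an illustration, not the patent's implementation; `conv2d` and the helper names are assumptions) showing that the outputs of a 3 × 3 branch, a 1 × 1 branch and an identity branch equal the output of one merged 3 × 3 convolution:

```python
import numpy as np

def conv2d(x, w):
    """Naive stride-1 convolution with zero padding 1.
    x: (C_in, H, W); w: (C_out, C_in, 3, 3) -> (C_out, H, W)."""
    c_out = w.shape[0]
    h, wd = x.shape[1], x.shape[2]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(h):
            for j in range(wd):
                out[o, i, j] = np.sum(xp[:, i:i + 3, j:j + 3] * w[o])
    return out

def pad_1x1_to_3x3(w1):
    """Embed a (C_out, C_in, 1, 1) kernel at the centre of a 3x3 kernel."""
    w3 = np.zeros((w1.shape[0], w1.shape[1], 3, 3))
    w3[:, :, 1, 1] = w1[:, :, 0, 0]
    return w3

def identity_as_3x3(channels):
    """Identity mapping written as a 3x3 convolution (needs C_out == C_in)."""
    w = np.zeros((channels, channels, 3, 3))
    for c in range(channels):
        w[c, c, 1, 1] = 1.0
    return w

rng = np.random.default_rng(0)
c, h, wd = 4, 6, 6
x = rng.standard_normal((c, h, wd))
k3 = rng.standard_normal((c, c, 3, 3))
k1 = rng.standard_normal((c, c, 1, 1))

# Training-time multi-branch output: 3x3 branch + 1x1 branch + identity.
multi = conv2d(x, k3) + conv2d(x, pad_1x1_to_3x3(k1)) + x
# Test-time single branch: one merged 3x3 kernel.
merged = k3 + pad_1x1_to_3x3(k1) + identity_as_3x3(c)
single = conv2d(x, merged)
assert np.allclose(multi, single)
```

Because convolution is linear in its kernel, the three branches collapse into one kernel sum, which is why the single-path test model computes exactly the same function as the multi-branch training model.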
In this embodiment, the target detection model is based on structural re-parameterization; it increases the use of residual connections and improves the residual connection mode used when input and output dimensions differ, reducing the FLOPs of the model, improving the accuracy of remote sensing target detection, and maintaining inference speed. In a specific implementation, for example, to meet the real-time requirement of an on-board intelligent processing system, the target detection model may be deployed on a terminal embedded system using TensorRT, realizing real-time target detection on the embedded system.
The technical scheme of the invention is concretely illustrated in the following three aspects:
1. Lightweight model design based on structural re-parameterization
In practical applications, the inference speed of a model is also influenced by factors such as memory access volume and hardware optimization. In the residual structure most popular in current convolutional neural networks, intermediate results must be held temporarily in memory while the residual is computed, so the instantaneous memory access of the model is larger than that of a single-path model. In addition, in current mainstream neural network frameworks such as PyTorch, convolution optimizations make the fragmented convolution structure of depthwise separable convolution unfavorable to parallel execution of convolution computation, reducing running speed. Since the network structure and its parameters correspond one to one, if the parameters of multiple layers can be merged, the network structure can change accordingly. At the macro level, structural re-parameterization converts the residual connections of the 1 × 1 convolution and the identity mapping into 3 × 3 convolution form and then merges the 3 branches into one, realizing a single-branch model. At the micro level, the 1 × 1 convolution and the identity mapping can each be expressed as a 3 × 3 convolution, and the batch normalization operation is then integrated into the convolution parameters through a linear transformation to enhance the generalization of the model, where the batch normalization operation is

bn(M, μ, σ, γ, β) = (M − μ) · γ / σ + β

wherein bn(·) is the batch normalization operation; M is a feature map extracted from the remote sensing image; μ, σ, γ and β are the batch normalization parameters (the per-channel running mean, standard deviation, scale and shift); so that

bn(M × W, μ, σ, γ, β) = M × W′ + b′, with W′ = (γ / σ) · W and b′ = β − (γ / σ) · μ.

After the batch normalization is absorbed into the convolution kernel, the residual branches can be fused into one path by simple addition.
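The batch-normalization absorption can also be verified numerically. A minimal sketch (illustrative names; per-output-channel BN parameters are assumed), modelling a 1 × 1 convolution as a matrix product so W′ and b′ can be checked directly:

```python
import numpy as np

def fuse_bn(w, mu, sigma, gamma, beta):
    """Fold BN parameters into convolution weights.
    w: (C_out, C_in); BN params: (C_out,) per output channel.
    Returns W' = (gamma/sigma) * W and b' = beta - (gamma/sigma) * mu."""
    scale = gamma / sigma
    return w * scale[:, None], beta - scale * mu

rng = np.random.default_rng(1)
c_in, c_out, n = 3, 5, 7
w = rng.standard_normal((c_out, c_in))
m = rng.standard_normal((n, c_in))       # n spatial positions of a feature map M
mu = rng.standard_normal(c_out)          # running mean
beta = rng.standard_normal(c_out)        # shift
gamma = rng.standard_normal(c_out)       # scale
sigma = rng.uniform(0.5, 2.0, c_out)     # standard deviation, positive

# bn(M x W): normalise the convolution output channel-wise.
conv = m @ w.T
bn_out = (conv - mu) * (gamma / sigma) + beta
# Fused form: M x W' + b'.
w_f, b_f = fuse_bn(w, mu, sigma, gamma, beta)
fused_out = m @ w_f.T + b_f
assert np.allclose(bn_out, fused_out)
```

The same per-channel scaling applies unchanged to a 3 × 3 kernel: each output channel's weights are multiplied by γ/σ and the bias becomes β − (γ/σ)μ.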
2. Improving the residual structure to increase inference speed
Since ResNet first used residual connections in deep neural networks, the structure has been widely applied in various network architectures. The residual structure preserves shallow gradients and prevents gradients from vanishing when training deep networks. Meanwhile, the ReLU activation function introduces more nonlinear transformation and improves the expressive power of features. However, the residual structure used for structural re-parameterization is a "pseudo-residual" structure lacking an identity feature layer; if the network depth is too large, vanishing gradients can still occur during gradient back-propagation.
Referring to fig. 3a, the input passes through two parallel structures, is residual-connected with the input, and then passes through a ReLU activation function to produce the output. In the inference-stage model structure shown in fig. 3b, the 1 × 1 convolution and the identity residual connection of the parallel structure are absorbed into the 3 × 3 convolution by structural re-parameterization, yielding a single-branch structure. The outermost residual connection, however, is preserved and added to the convolution result before the activation function. This structure keeps a residual connection during inference, and the ReLU nonlinearity after the residual connection preserves the network's resistance to gradient vanishing.
In addition, the improved residual connection mode also satisfies residual connection requirements when input and output dimensions differ. As shown in figs. 4a-4c, the residual is connected differently for different input and output dimensions. Fig. 4a shows the case where the input and output dimensions are the same; here the residual connection is as in most networks, using an identity mapping added to the convolved feature map. For the case where the output dimension is smaller than the input dimension (fig. 4b, common in down-sampling stages), the feature dimension of the residual branch is not changed by a 1 × 1 convolution; instead, a feature map with the same dimension as the output is truncated directly and added to the convolution result. For the case where the output dimension is larger than the input dimension (fig. 4c, common in up-sampling stages), no 1 × 1 convolution is performed; the residual connection copies and stacks the input feature maps along the channel dimension to obtain a feature map with the same dimension as the convolution output, which is then added to the convolution output. When the input and output resolutions differ after down-sampling, max pooling is first applied in the residual branch, followed by the identity mapping, truncation, or stacking of figs. 4a-4c. With this residual connection mode, the residual branch consumes no FLOPs, and running speed is increased by reducing parallel convolution computation.
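The dimension-matching rules of figs. 4b and 4c can be sketched in a few lines. A hedged NumPy illustration (the function name and the channels-first `(C, H, W)` layout are assumptions, not the patent's code):

```python
import numpy as np

def residual_shortcut(x, c_out):
    """Residual branch without a 1x1 convolution (consumes no FLOPs).
    x: (C_in, H, W). If c_out <= C_in, truncate channels (fig. 4b);
    otherwise tile copies of x along the channel axis and trim (fig. 4c)."""
    c_in = x.shape[0]
    if c_out <= c_in:
        return x[:c_out]                       # truncate feature maps
    reps = -(-c_out // c_in)                   # ceiling division
    return np.tile(x, (reps, 1, 1))[:c_out]    # stack copies, then trim

x = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)
down = residual_shortcut(x, 1)   # output dimension < input dimension
up = residual_shortcut(x, 5)     # output dimension > input dimension
assert down.shape == (1, 3, 3) and np.allclose(down, x[:1])
assert up.shape == (5, 3, 3) and np.allclose(up[2:4], x)
```

The returned map has exactly the convolution output's channel count, so it can be added to the convolution output directly, replacing the 1 × 1 projection used in conventional residual blocks.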
3. Deployment of the lightweight model on an embedded system
Current neural network models mostly rely on deep learning frameworks such as TensorFlow and PyTorch, oriented toward high-performance servers. On the one hand, these frameworks are based on the Python language, which is simpler and friendlier for development and design but less efficient at run time; on the other hand, they are developed on top of CUDA and optimized for GPU platforms, and are not adapted to the computing mode of embedded systems, so their running speed on embedded systems is poor. Therefore, the embodiment of the invention also rewrites the model structure using the TensorRT library, designed for inference optimization and quantization on embedded systems, so that the model can be deployed on an embedded platform and its inference speed is further improved.
Because the computing power of the embedded platform is limited and training cannot be performed on it, the model must be trained on a GPU in advance to obtain its parameters and then deployed to the embedded platform for inference. Using the TensorRT library, the PyTorch model for the GPU can be converted into a model for the embedded platform; the deployment process is shown in fig. 5. First, according to the designed model structure, a PyTorch model is built on the server side using the Python-based PyTorch framework, and training images are fed in for iterative training to obtain the final model parameters. Then the model is rebuilt using the C++-based TensorRT library; its structure must be consistent with the PyTorch model to ensure correct parameter loading. After the embedded terminal compiles the TensorRT model and loads the model parameters, a model engine for inference is obtained. As shown in fig. 5, feeding a test image into the inference engine performs inference on the embedded system and outputs accurate detection of the targets in the test image.
For example, the computing power and degree of intelligence of current satellite remote sensing processing systems are limited, and a lightweight target detection model can be designed using the embedded lightweight remote sensing target detection system provided by the embodiment of the invention. The embodiment designs the remote sensing target detection model in a lightweight manner: the structural re-parameterization module designs a lightweight target detection network based on the structural re-parameterization method, and the improved residual structure module analyzes the influence of residual branches on inference speed and improves the residual connection mode used when input and output dimensions differ. This reduces the FLOPs of the model, improves its detection accuracy, maintains its inference speed, enables the embedded device to meet the real-time and accuracy requirements of remote sensing target detection under limited computing capacity, and helps improve working efficiency.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (6)
1. An embedded lightweight remote sensing target detection system, comprising: an embedded device and a target detection model deployed on the embedded device;
the target detection model adopts a deep learning model, is trained on a GPU platform to obtain parameters and then is deployed to the embedded equipment; the target detection model includes:
the structural re-parameterization module is used for absorbing redundant residual and convolution structures in the deep learning model into a backbone network;
and the improved residual structure module is used for making the input and output dimensions the same through an improved residual connection mode.
2. The embedded lightweight remote sensing target detection system according to claim 1, wherein a C++-based TensorRT library is used to recompile the structure of the target detection model, convert the trained parameters, and deploy them on the embedded device for operation.
3. The embedded lightweight remote sensing target detection system of claim 2, wherein the structural re-parameterization module is specifically configured to convert the residual connections of the 1 × 1 convolution and identity mapping in the deep learning model into a 3 × 3 convolution form.
4. The embedded lightweight remote sensing target detection system according to claim 3, wherein transforming the residual connections formed by 1 × 1 convolutions and identity mappings in the deep learning model into 3 × 3 convolutions comprises:
converting each 1 × 1 convolution and identity mapping into a 3 × 3 convolution, and integrating the batch normalization operation into the convolution parameters by a linear transformation:

bn(M ∗ W, μ, σ, γ, β) = M ∗ W′ + b′

where bn(·) is the batch normalization operation; M is the feature map extracted from the remote sensing image; W is the convolution kernel; μ, σ, γ and β are the batch normalization parameters (per-channel mean, standard deviation, scale and shift); and W′ = (γ/σ)W and b′ = β − (γμ/σ) are the fused kernel and bias. The batch normalization is thereby absorbed into the convolution kernel.
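The batch-normalization folding in claim 4 can be sketched as follows (a minimal NumPy illustration under the standard fusion identity, not the patented implementation; the function name and shapes are assumptions, and `sigma` stands for the per-channel standard deviation, in practice sqrt(running_var + eps)):

```python
import numpy as np

def fuse_bn_into_conv(W, mu, sigma, gamma, beta):
    """Fold bn(x) = gamma * (x - mu) / sigma + beta into the preceding
    convolution, returning W' = (gamma / sigma) * W and
    b' = beta - gamma * mu / sigma, so that bn(M * W) == M * W' + b'.
    W has shape (out_ch, in_ch, kh, kw); mu, sigma, gamma, beta are
    per-output-channel vectors."""
    scale = gamma / sigma                      # per-channel scaling factor
    W_fused = W * scale[:, None, None, None]   # scale each output-channel kernel
    b_fused = beta - mu * scale                # fold the shift into a bias
    return W_fused, b_fused
```

At inference time the fused kernel and bias replace the conv + BN pair, removing the normalization layer entirely.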
5. The embedded lightweight remote sensing target detection system according to claim 4, wherein the improved residual structure module is specifically configured to, when the output dimension is smaller than the input dimension, truncate the input feature map to the output dimension and add it directly to the convolution result.
6. The embedded lightweight remote sensing target detection system according to claim 5, wherein the improved residual structure module is further configured to, when the output dimension is larger than the input dimension, replicate and stack the input feature map along the channel dimension until its dimension equals that of the convolution output, and then add it to the convolution output to complete the residual connection.
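The dimension-matching shortcut of claims 5 and 6 can be sketched as below (a minimal NumPy illustration; the function name is an assumption, and trimming after replication when the output dimension is not an exact multiple of the input dimension is an assumption not stated in the claims):

```python
import numpy as np

def improved_residual(x, y):
    """Residual connection between input x and convolution output y with
    mismatched channel counts: truncate x when y has fewer channels
    (claim 5), replicate x along the channel axis when y has more
    (claim 6). x and y have shape (channels, H, W)."""
    cin, cout = x.shape[0], y.shape[0]
    if cout <= cin:
        shortcut = x[:cout]                  # truncate to the output dimension
    else:
        reps = -(-cout // cin)               # ceil(cout / cin) replications
        shortcut = np.concatenate([x] * reps, axis=0)[:cout]
    return y + shortcut
```

Either branch yields a shortcut tensor with exactly the convolution output's shape, so the addition needs no extra projection convolution.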
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210081517.2A CN114494872A (en) | 2022-01-24 | 2022-01-24 | Embedded lightweight remote sensing target detection system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114494872A true CN114494872A (en) | 2022-05-13 |
Family
ID=81474487
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210081517.2A Pending CN114494872A (en) | 2022-01-24 | 2022-01-24 | Embedded lightweight remote sensing target detection system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114494872A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115512324A (en) * | 2022-10-13 | 2022-12-23 | 中国矿业大学 | Pavement disease detection method based on edge symmetric filling and large receptive field |
CN115512324B (en) * | 2022-10-13 | 2024-07-12 | 中国矿业大学 | Pavement disease detection method based on edge symmetrical filling and large receptive field |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109190491A (en) * | 2018-08-08 | 2019-01-11 | 上海海洋大学 | Residual error convolutional neural networks SAR image sea ice classification method |
CN110599401A (en) * | 2019-08-19 | 2019-12-20 | 中国科学院电子学研究所 | Remote sensing image super-resolution reconstruction method, processing device and readable storage medium |
WO2021169351A1 (en) * | 2020-02-24 | 2021-09-02 | 华为技术有限公司 | Method and apparatus for anaphora resolution, and electronic device |
CN113408423A (en) * | 2021-06-21 | 2021-09-17 | 西安工业大学 | Aquatic product target real-time detection method suitable for TX2 embedded platform |
CN113762479A (en) * | 2021-09-10 | 2021-12-07 | 深圳朴生智能科技有限公司 | Neural network optimization method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105630882A (en) | Remote sensing data deep learning based offshore pollutant identifying and tracking method | |
Xu et al. | On-board ship detection in SAR images based on L-YOLO | |
Astolfi et al. | Model reduction by moment matching: Beyond linearity a review of the last 10 years | |
CN116563726A (en) | Remote sensing image ship target detection method based on convolutional neural network | |
Yang et al. | Remote sensing image aircraft target detection based on GIoU-YOLO v3 | |
CN117115686A (en) | Urban low-altitude small unmanned aerial vehicle detection method and system based on improved YOLOv7 | |
Zhang et al. | Study on the situational awareness system of mine fire rescue using faster Ross Girshick-convolutional neural network | |
CN114743110A (en) | Multi-scale nested remote sensing image change detection method and system and computer terminal | |
Yang et al. | The evaluation of DCNN on vector-SIMD DSP | |
Yin et al. | An enhanced lightweight convolutional neural network for ship detection in maritime surveillance system | |
Yin et al. | High-order spatial interactions enhanced lightweight model for optical remote sensing image-based small ship detection | |
CN114494872A (en) | Embedded lightweight remote sensing target detection system | |
CN117454213A (en) | Multi-view data clustering method, device, equipment and medium | |
CN116662929A (en) | Training method of radar signal recognition model and radar signal recognition method | |
Yue et al. | SAR Ship detection method based on convolutional neural network and multi-layer feature fusion | |
Wang et al. | YOLO-ERF: lightweight object detector for UAV aerial images | |
Zhang et al. | Transmission tower detection algorithm based on feature-enhanced convolutional network in remote sensing image | |
CN116266274A (en) | Neural network adjusting method and corresponding device | |
Cui et al. | SDA-Net: a detector for small, densely distributed, and arbitrary-directional ships in remote sensing images | |
Liu et al. | Object Detection Algorithm Based on Improved YOLOv5 for Basketball Robot | |
Bass et al. | Machine learning in problems involved in processing satellite images | |
Chen et al. | Synthetic Aperture Radar Image Ship Detection Based on YOLO-SARshipNet | |
Pengcheng et al. | Software-Hardware Cooperative Lightweight Research of Remote Sensing Target Detection Algorithms for Space-Borne Edge Computing | |
Li et al. | Automatic reading algorithm of pointer water meter based on deep learning and double centroid method | |
Sun et al. | An FPGA-Based Balanced and High-Efficiency Two-Dimensional Data Access Technology for Real-Time Spaceborne SAR |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||