CN114821289A - Forest fire picture real-time segmentation and fire edge point monitoring algorithm - Google Patents
- Publication number
- CN114821289A CN114821289A CN202210051209.5A CN202210051209A CN114821289A CN 114821289 A CN114821289 A CN 114821289A CN 202210051209 A CN202210051209 A CN 202210051209A CN 114821289 A CN114821289 A CN 114821289A
- Authority
- CN
- China
- Prior art keywords
- fire
- attention
- picture
- calculating
- longitude
- Prior art date
- Legal status
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/048—Activation functions
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a real-time segmentation and fire edge point monitoring algorithm for forest fire pictures, which comprises the following steps in sequence: building a MobileNet network model; constructing an attention module; constructing an attention residual network; obtaining fine-tuned picture features from the residual between the obtained spatial attention features and the original features; obtaining an attention residual module; extracting picture features of forest fires; segmenting the fire region in the picture; calculating fire boundary point pixels; calculating the transverse and longitudinal distances of the aircraft; calculating the longitude and latitude coordinates of the image centre point; and calculating the longitude and latitude of the fire boundary points. The algorithm helps staff check the real-time condition of a fire and monitor the fire boundary; through fast real-time analysis of the forest fire situation, the fire can be handled effectively and the losses it causes reduced.
Description
Technical Field
The invention relates to a real-time segmentation and fire edge point monitoring algorithm for forest fire pictures. It performs real-time segmentation and fire edge point monitoring on pictures of a forest fire, in particular segmenting the picture with a deep neural network and then calculating the fire edge points from the segmented result.
Background
Forest fires are extremely destructive and dangerous, and the losses they cause sometimes cannot be recovered; severe fires lead to soil erosion, harm to wild animals, and threats to people and property. When a forest catches fire, its real-time situation must be monitored: only by knowing the real-time state of the fire and its real-time boundary can the fire be controlled effectively.
With the rapid development of deep learning in recent years, many fields have applied the technology to practical problems with great success. Convolutional neural networks (CNNs) have strong learning ability, and semantic segmentation networks are often built on CNN classification networks. LeNet, one of the earliest neural networks, can extract picture features simply and quickly, but it ignores the relationships between feature channels. An algorithm for real-time segmentation of forest fire pictures and fire edge point monitoring is therefore urgently needed.
In order to solve the above technical problems, a new technical scheme is provided.
Disclosure of Invention
The invention aims to provide a forest fire picture real-time segmentation and fire edge point monitoring algorithm to solve the problems described in the background art.
In order to achieve this purpose, the invention provides the following technical scheme: a real-time segmentation and fire edge point monitoring algorithm for forest fire pictures, comprising the following steps:
S1, building a MobileNet network model, wherein the basic unit of the network structure is the depthwise separable convolution;
S2, constructing the attention modules, namely a channel attention module and a spatial attention module;
S3, constructing an attention residual network, and obtaining fine-tuned picture features from the residual between the obtained spatial attention features and the original features;
S4, constructing a depth attention module: adding a depthwise convolution module, connecting a pointwise convolution, and finally adding the attention residual module obtained in S3;
S5, constructing a MobileCBAM-Net network model: taking the MobileNet built in S1 as the basic network architecture and the modules from S2, S3 and S4 as components, synthesizing MobileCBAM-Net and using it to extract the picture features of forest fires;
S6, inputting the fire picture into the MobileCBAM-Net network model constructed in S5, assigning the pixels of the fire region the colour red and the pixels outside the fire region the colour black, thereby segmenting the fire region in the picture;
S7, calculating the fire boundary point pixels on the picture segmented in S6: if any point around a red pixel is not red, that pixel is a boundary point pixel; otherwise it is a non-boundary point pixel;
S8, calculating the transverse and longitudinal distances of the aircraft: processing the input flight heading angle to convert it into the required positive angle, and calculating the transverse and longitudinal distances by combining the pitch angles of the aircraft and the nacelle with the flight altitude;
S9, calculating the longitude and latitude coordinates of the image centre point by combining the longitude and latitude of the aircraft with the result of S8;
S10, calculating the longitude and latitude of the fire boundary points: taking the edge point pixels from S7 and the image centre coordinates from S9, calculating the longitude and latitude of the upper-left corner, and from it calculating the longitude and latitude of the other three corner points, thereby obtaining the longitude and latitude of the boundary points.
Preferably, in S1 the depthwise separable convolution is specifically divided into two operations: depthwise convolution and pointwise convolution;
Preferably, the depthwise convolution operation differs from the standard convolution operation in that each channel uses its own convolution kernel, and one kernel convolves exactly one channel;
Preferably, in S5 the MobileCBAM-Net comprises three kinds of module: an attention module, a residual network, and the depthwise separable convolution; firstly, two attention modules are established, namely a channel attention module and a spatial attention module;
Preferably, the channel attention module passes the input multi-channel feature map through a maximum pooling layer and an average pooling layer in parallel; the two pooled results are respectively fed into a shared MLP network, the MLP outputs are added, and the adjusted channel attention features are finally obtained after an activation function;
Preferably, the input of the spatial attention module is the adjusted channel attention features; spatial feature extraction is performed by a maximum pooling followed by an average pooling layer, and an activation function is then applied to obtain the spatial attention features;
Preferably, the fine-tuned picture features are obtained from the residual between the obtained spatial attention features and the original features; the advantages of the MobileCBAM-Net model are that it takes both spatial and channel features into account while remaining a lightweight network; it meets the requirement of quickly obtaining forest fire picture segmentation results, with a speed of 1 to 2 seconds per picture.
Compared with the prior art, the invention has the following beneficial effects: the real-time picture segmentation and edge point monitoring algorithm for forest fires performs real-time fire semantic segmentation on the acquired forest fire picture with a deep neural network, and calculates the positions of the fire boundary points in real time from the segmented picture. The algorithm helps workers check the real-time condition of a fire and monitor the fire boundary. Through fast real-time analysis of the forest fire situation, the fire can be handled effectively and the losses it causes reduced.
Drawings
FIG. 1 is a diagram of the depthwise convolution operation.
FIG. 2 is an attention residual block diagram.
FIG. 3 is a channel attention block diagram.
FIG. 4 is a spatial attention block diagram.
FIG. 5 is a schematic diagram of the overall architecture of the MobileCBAM-Net network.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
After a forest fire breaks out, the fire situation must be known in time so that the fire can be controlled promptly and losses reduced.
In order to achieve the purpose of the invention, the following technical scheme is adopted:
A MobileCBAM-Net network model is built to segment pictures of forest fires in real time; after the segmented picture is obtained, the edge points of the fire are calculated. The method specifically comprises the following steps:
and S1, building a MobileNet network model, wherein the basic unit of the network structure is a deep separable convolution. The depth separable convolution can be divided into two operations depthwise convolution and pointwise convolution. The depthwise convolution operation differs from the standard convolution operation in that, as shown in fig. 1, the convolution kernel used for each channel is different, and one convolution kernel convolves each channel. poitwise convolution is a common convolution, but it adopts a convolution kernel of 1x1, and combines the above outputs, thereby greatly reducing the calculated amount and parameters of the network. And normalization processing and ReLu activation operation are carried out before each convolution operation, so that the phenomenon of gradient disappearance can be relieved, and the model is more stable.
S2, constructing the attention modules, namely a channel attention module and a spatial attention module. First the two modules are established. The channel attention module passes the input multi-channel feature map through a maximum pooling layer and an average pooling layer in parallel; the two pooled results are respectively fed into a shared MLP network, the MLP outputs are added, and the adjusted channel attention features are obtained after an activation function. The input of the spatial attention module is the adjusted channel attention features; spatial feature extraction applies a maximum pooling and then an average pooling layer, followed by an activation function, to obtain the spatial attention features.
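The two modules can be sketched as follows (a simplified NumPy illustration: the shared-MLP reduction ratio is an assumption, and the convolution that CBAM applies after the spatial pooling is omitted for brevity; the final addition corresponds to the residual fine-tuning of S3):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    # x: (H, W, C). Global max- and average-pool over space, shared MLP, sum, sigmoid.
    max_pool = x.max(axis=(0, 1))                     # (C,)
    avg_pool = x.mean(axis=(0, 1))                    # (C,)
    mlp = lambda v: np.maximum(v @ w1, 0.0) @ w2      # shared two-layer MLP with ReLU
    weights = sigmoid(mlp(max_pool) + mlp(avg_pool))  # (C,), one weight per channel
    return x * weights                                # rescale each channel

def spatial_attention(x):
    # Pool over the channel axis; a sigmoid map then reweights each spatial location.
    max_pool = x.max(axis=2)                          # (H, W)
    avg_pool = x.mean(axis=2)                         # (H, W)
    attn = sigmoid(max_pool + avg_pool)
    return x * attn[:, :, None]

x = np.random.rand(8, 8, 16)                # original features
w1 = np.random.rand(16, 4)                  # reduction ratio 4 (an assumed hyperparameter)
w2 = np.random.rand(4, 16)
refined = spatial_attention(channel_attention(x, w1, w2))
fine_tuned = x + refined                    # residual connection of S3
```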
S3, constructing the attention-fused residual network, and obtaining the fine-tuned picture features from the residual between the obtained spatial attention features and the original features; the overall attention module diagram is shown in FIG. 2.
S4, constructing the depth attention module: first a depthwise convolution module is connected, followed by normalization and activation operations, then a pointwise convolution followed by normalization, and finally the attention residual module is added.
S5, constructing the MobileCBAM-Net network model. As shown in FIG. 5, MobileCBAM-Net uses the MobileNet built in S1 as the basic network architecture to extract the picture features of the forest fire. MobileCBAM-Net comprises three kinds of module: attention modules, a residual network, and depthwise separable convolutions. The network input is a picture, which is normalized and activated after passing through a convolution layer, and then passes through 7 identical depth attention modules obtained in S4. The image is then up-sampled 4 times to obtain the feature picture, and interpolation finally restores the picture to its original size. The depthwise separable convolution provides multi-level feature information for image segmentation, but also introduces redundancy in that information; adding an attention module between the two modules effectively relieves the redundancy and lets the network focus on the feature information in the image. The advantages of the MobileCBAM-Net model are that it considers both spatial and channel features while remaining a lightweight network; it meets the requirement of quickly obtaining forest fire picture segmentation results, with a speed of 1 to 2 seconds per picture.
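The "lightweight" property rests on the depthwise separable factorization of S1, whose saving can be quantified with standard FLOP counts (a well-known property of depthwise separable convolutions; the concrete layer sizes below are illustrative, not taken from the patent):

```python
def conv_flops(h, w, k, c_in, c_out):
    # Multiply-accumulates of a standard k x k convolution over an h x w output map
    return h * w * k * k * c_in * c_out

def separable_flops(h, w, k, c_in, c_out):
    depthwise = h * w * k * k * c_in   # one k x k kernel per input channel
    pointwise = h * w * c_in * c_out   # 1x1 channel mixing
    return depthwise + pointwise

std = conv_flops(56, 56, 3, 64, 128)
sep = separable_flops(56, 56, 3, 64, 128)
ratio = sep / std   # equals 1/c_out + 1/k**2, here roughly 0.12 -- about 8x fewer FLOPs
```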
S6, inputting the fire picture into the MobileCBAM-Net network model constructed in S5, assigning the pixels of the fire region the colour red and the pixels outside the fire region the colour black, thereby segmenting the fire region in the picture.
S7, calculating the fire boundary point pixels on the picture segmented in S6: if any point around a red pixel is not red, that pixel is a boundary point pixel; otherwise it is a non-boundary point pixel.
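The boundary test of S7 can be sketched as a vectorized neighbour check on the segmentation mask (an illustrative implementation using a 4-neighbourhood; the patent does not fix the neighbourhood size):

```python
import numpy as np

def boundary_pixels(mask):
    # mask: (H, W) boolean, True where the pixel was assigned red (fire).
    # A fire pixel is a boundary pixel if at least one 4-neighbour is not fire;
    # fire pixels on the image border count as boundary as well.
    H, W = mask.shape
    padded = np.zeros((H + 2, W + 2), dtype=bool)
    padded[1:-1, 1:-1] = mask
    all_neighbours_fire = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                           padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~all_neighbours_fire

mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True                 # a 3x3 block of fire pixels
b = boundary_pixels(mask)             # the 8 outer pixels of the block are boundary
```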
S8, calculating the transverse and longitudinal distances of the aircraft: processing the input flight heading angle to convert it into the required positive angle, and calculating the transverse and longitudinal distances by combining the pitch angles of the aircraft and the nacelle with the flight altitude.
S9, calculating the longitude and latitude coordinates of the image centre point by combining the longitude and latitude of the aircraft with the result of S8.
S10, calculating the longitude and latitude of the fire boundary points: taking the edge point pixels from S7 and the image centre coordinates from S9, calculating the longitude and latitude of the upper-left corner, and from it calculating the longitude and latitude of the other three corner points, thereby obtaining the longitude and latitude of the boundary points.
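The patent states no explicit formulas for S8-S10. The sketch below illustrates one plausible small-offset geometry under strong, clearly labelled assumptions: flat terrain, a combined gimbal pitch measured down from the horizontal, and a spherical-earth metres-per-degree conversion; the function names and parameters are hypothetical.

```python
import math

EARTH_R = 6378137.0  # metres, WGS-84 equatorial radius

def ground_offset(altitude_m, pitch_down_deg, heading_deg):
    # Horizontal distance from the point below the aircraft to the point the
    # camera looks at, split into north ("longitudinal") and east ("transverse").
    d = altitude_m / math.tan(math.radians(pitch_down_deg))
    north = d * math.cos(math.radians(heading_deg))
    east = d * math.sin(math.radians(heading_deg))
    return north, east

def offset_latlon(lat_deg, lon_deg, north_m, east_m):
    # Small-offset approximation: convert metres to degrees of latitude/longitude.
    m_per_deg_lat = math.pi * EARTH_R / 180.0
    dlat = north_m / m_per_deg_lat
    dlon = east_m / (m_per_deg_lat * math.cos(math.radians(lat_deg)))
    return lat_deg + dlat, lon_deg + dlon

# 500 m altitude, camera 45 degrees below horizontal, heading due east
north, east = ground_offset(500.0, 45.0, 90.0)
lat_c, lon_c = offset_latlon(30.0, 110.0, north, east)  # image centre point (S9)
```

The longitude/latitude of each boundary pixel (S10) would then follow by scaling its pixel offset from the image centre by the ground sampling distance, under the same assumptions.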
In this technical scheme, the MobileNet network is constructed with the depthwise separable convolution as its basic unit: the input channels are convolved separately and the outputs are then combined, forming a lightweight network. The invention constructs the MobileCBAM-Net network, which fuses channel and spatial features to segment the fire picture. It takes MobileNet as the backbone network and adds an attention mechanism module and a shortcut module, being composed of attention modules, a residual network, and depthwise separable convolutions.
When the method is used, real-time fire semantic segmentation is performed on the acquired forest fire picture by the deep neural network, and the positions of the fire boundary points are calculated in real time from the segmented picture. The algorithm helps workers check the real-time condition of the fire and monitor the fire boundary. Through fast real-time analysis of the forest fire situation, the fire can be handled effectively and the losses it causes reduced.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (7)
1. A real-time segmentation and fire edge point monitoring algorithm for forest fire pictures is characterized by comprising the following steps:
S1, building a MobileNet network model, wherein the basic unit of the network structure is the depthwise separable convolution;
S2, constructing the attention modules, namely a channel attention module and a spatial attention module;
S3, constructing an attention residual network, and obtaining fine-tuned picture features from the residual between the obtained spatial attention features and the original features;
S4, constructing a depth attention module: adding a depthwise convolution module, connecting a pointwise convolution, and finally adding the attention residual module obtained in S3;
S5, constructing a MobileCBAM-Net network model: taking the MobileNet built in S1 as the basic network architecture and the modules from S2, S3 and S4 as components, synthesizing MobileCBAM-Net and using it to extract the picture features of forest fires;
S6, inputting the fire picture into the MobileCBAM-Net network model constructed in S5, assigning the pixels of the fire region the colour red and the pixels outside the fire region the colour black, thereby segmenting the fire region in the picture;
S7, calculating the fire boundary point pixels on the picture segmented in S6: if any point around a red pixel is not red, that pixel is a boundary point pixel; otherwise it is a non-boundary point pixel;
S8, calculating the transverse and longitudinal distances of the aircraft: processing the input flight heading angle to convert it into the required positive angle, and calculating the transverse and longitudinal distances by combining the pitch angles of the aircraft and the nacelle with the flight altitude;
S9, calculating the longitude and latitude coordinates of the image centre point by combining the longitude and latitude of the aircraft with the result of S8;
and S10, calculating the longitude and latitude of the fire boundary points: taking the edge point pixels from S7 and the image centre coordinates from S9, calculating the longitude and latitude of the upper-left corner, and from it calculating the longitude and latitude of the other three corner points, thereby obtaining the longitude and latitude of the boundary points.
2. The forest fire picture real-time segmentation and fire edge point monitoring algorithm as claimed in claim 1, wherein: in S1, the depthwise separable convolution is specifically divided into two operations: depthwise convolution and pointwise convolution.
3. The forest fire picture real-time segmentation and fire edge point monitoring algorithm as claimed in claim 2, wherein: the depthwise convolution operation differs from the standard convolution operation in that each channel uses its own convolution kernel, and one kernel convolves exactly one channel.
4. The forest fire picture real-time segmentation and fire edge point monitoring algorithm as claimed in claim 1, wherein: in S5, the MobileCBAM-Net comprises three kinds of module: an attention module, a residual network, and the depthwise separable convolution; first, two attention modules are established, namely a channel attention module and a spatial attention module.
5. The forest fire picture real-time segmentation and fire edge point monitoring algorithm as claimed in claim 4, wherein: the channel attention module passes the input multi-channel feature map through a maximum pooling layer and an average pooling layer in parallel; the two pooled results are respectively fed into a shared MLP network, the MLP outputs are added, and the adjusted channel attention features are finally obtained after an activation function.
6. The forest fire picture real-time segmentation and fire edge point monitoring algorithm as claimed in claim 5, wherein: the input of the spatial attention module is the adjusted channel attention features; spatial feature extraction is performed by a maximum pooling followed by an average pooling layer, and an activation function is then applied to obtain the spatial attention features.
7. The forest fire picture real-time segmentation and fire edge point monitoring algorithm as claimed in claim 6, wherein: the fine-tuned picture features are obtained from the residual between the obtained spatial attention features and the original features; the MobileCBAM-Net model takes both spatial and channel features into account while remaining a lightweight network; it meets the requirement of quickly obtaining forest fire picture segmentation results, with a speed of 1 to 2 seconds per picture.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210051209.5A CN114821289B (en) | 2022-01-17 | 2022-01-17 | Forest fire picture real-time segmentation and fire edge point monitoring algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114821289A true CN114821289A (en) | 2022-07-29 |
CN114821289B CN114821289B (en) | 2023-10-17 |
Family
ID=82527963
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210051209.5A Active CN114821289B (en) | 2022-01-17 | 2022-01-17 | Forest fire picture real-time segmentation and fire edge point monitoring algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114821289B (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160313120A1 (en) * | 2013-12-16 | 2016-10-27 | Obshestvo S Ogranichennoj Otvetstvennostyu "Disikon" | Method for determination of optimal forest video monitoring system configuration |
CN106991681A (en) * | 2017-04-11 | 2017-07-28 | 福州大学 | A kind of fire boundary vector information extract real-time and method for visualizing and system |
CN109147259A (en) * | 2018-11-20 | 2019-01-04 | 武汉理工光科股份有限公司 | A kind of remote fire detection system and method based on video image |
CN110021018A (en) * | 2019-04-12 | 2019-07-16 | 电子科技大学 | A method of forest fire footprint is extracted based on remotely-sensed data |
CN110047241A (en) * | 2019-04-27 | 2019-07-23 | 刘秀萍 | A kind of forest fire unmanned plane cruise monitoring system |
CN110599727A (en) * | 2019-09-16 | 2019-12-20 | 星泽天下(北京)科技有限公司 | Emergency command management system for forest fire |
US20200012859A1 (en) * | 2017-03-28 | 2020-01-09 | Zhejiang Dahua Technology Co., Ltd. | Methods and systems for fire detection |
CN111047565A (en) * | 2019-11-29 | 2020-04-21 | 南京恩博科技有限公司 | Method, storage medium and equipment for forest cloud image segmentation |
CN111339858A (en) * | 2020-02-17 | 2020-06-26 | 电子科技大学 | Oil and gas pipeline marker identification method based on neural network |
CN111625999A (en) * | 2020-05-29 | 2020-09-04 | 中南林业科技大学 | Forest fire early warning model and system based on deep learning technology |
CN112308092A (en) * | 2020-11-20 | 2021-02-02 | 福州大学 | Light-weight license plate detection and identification method based on multi-scale attention mechanism |
CN113112510A (en) * | 2021-04-29 | 2021-07-13 | 五邑大学 | Semantic segmentation forest fire detection method, controller and storage medium |
CN113743378A (en) * | 2021-11-03 | 2021-12-03 | 航天宏图信息技术股份有限公司 | Fire monitoring method and device based on video |
CN113887324A (en) * | 2021-09-10 | 2022-01-04 | 北京和德宇航技术有限公司 | Fire point detection method based on satellite remote sensing data |
Non-Patent Citations (4)
Title |
---|
AKASHDEEP SHARMA等: "IoT and deep learning-inspired multi-model framework for monitoring Active Fire Locations in Agricultural Activities", COMPUTERS & ELECTRICAL ENGINEERING, vol. 93, pages 1 - 19 * |
SANGHYUN WOO等: "CBAM: Convolutional Block Attention Module", PROCEEDINGS OF THE EUROPEAN CONFERENCE ON COMPUTER VISION (ECCV), 2018, pages 3 - 19 * |
XU Yanxiang et al.: "Forest fire detection system based on UAV", Computer Engineering and Design, vol. 39, no. 06, pages 1591 - 1596 *
CHEN Bohua: "Research on attention mechanisms for image classification and their application in object detection", China Master's Theses Full-text Database, Information Science and Technology, pages 138 - 1873 *
Also Published As
Publication number | Publication date |
---|---|
CN114821289B (en) | 2023-10-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111754394B (en) | Method and device for detecting object in fisheye image and storage medium | |
CN114202696A (en) | SAR target detection method and device based on context vision and storage medium | |
US20200250460A1 (en) | Head region recognition method and apparatus, and device | |
CN113192646B (en) | Target detection model construction method and device for monitoring distance between different targets | |
CN111079739A (en) | Multi-scale attention feature detection method | |
CN109389043A (en) | A kind of crowd density estimation method of unmanned plane picture | |
CN109325947A (en) | A kind of SAR image steel tower object detection method based on deep learning | |
CN108830327A (en) | A kind of crowd density estimation method | |
CN109886321B (en) | Image feature extraction method and device for fine-grained classification of icing image | |
CN109523558A (en) | A kind of portrait dividing method and system | |
Tong et al. | UAV target detection based on RetinaNet | |
CN115861915A (en) | Fire fighting access monitoring method, fire fighting access monitoring device and storage medium | |
CN116012879A (en) | Pedestrian detection method, system, equipment and medium for improving YOLOv4 network | |
CN114913604A (en) | Attitude identification method based on two-stage pooling S2E module | |
Li et al. | Vehicle object detection based on rgb-camera and radar sensor fusion | |
CN115171183A (en) | Mask face detection method based on improved yolov5 | |
CN113221842B (en) | Model training method, image recognition method, device, equipment and medium | |
CN113628172A (en) | Intelligent detection algorithm for personnel handheld weapons and smart city security system | |
CN116091709B (en) | Three-dimensional reconstruction method and device for building, electronic equipment and storage medium | |
CN114821289B (en) | Forest fire picture real-time segmentation and fire edge point monitoring algorithm | |
US11881020B1 (en) | Method for small object detection in drone scene based on deep learning | |
CN111832348B (en) | Pedestrian re-identification method based on pixel and channel attention mechanism | |
CN116844055A (en) | Lightweight SAR ship detection method and system | |
CN113902744B (en) | Image detection method, system, equipment and storage medium based on lightweight network | |
Zhan et al. | A High-precision Forest Fire Smoke Detection Approach Based on DRGNet to Remote Sensing Through Uavs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||