CN111898440B - Mountain fire detection method based on three-dimensional convolutional neural network - Google Patents
Mountain fire detection method based on three-dimensional convolutional neural network
- Publication number
- CN111898440B CN111898440B CN202010607252.6A CN202010607252A CN111898440B CN 111898440 B CN111898440 B CN 111898440B CN 202010607252 A CN202010607252 A CN 202010607252A CN 111898440 B CN111898440 B CN 111898440B
- Authority
- CN
- China
- Prior art keywords
- neural network
- convolutional neural
- video
- dimensional convolutional
- mountain fire
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Computational Linguistics (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Fire-Detection Mechanisms (AREA)
- Image Analysis (AREA)
Abstract
The invention belongs to the technical field of deep learning and specifically relates to a mountain fire detection method based on a three-dimensional convolutional neural network. Deep learning replaces hand-crafted feature engineering, so the spatio-temporal features of the image sequence are extracted automatically, greatly improving the efficiency of developing feature descriptors. Three-dimensional convolution replaces conventional two-dimensional convolution, so the network learns not only the spatial patterns of individual images but also the motion patterns across the image sequence, greatly improving the expressive and discriminative power of the video feature descriptors. Building the target detector on a 3D CNN instead of a conventional two-dimensional convolutional network greatly improves the accuracy of mountain fire detection.
Description
Technical Field
The invention belongs to the technical field of deep learning, and particularly relates to a mountain fire detection method based on a three-dimensional convolutional neural network.
Background
Forests are key to maintaining the Earth's ecological balance. Forest and mountain fires not only cause great economic losses but also severely damage the ecological environment. Traditionally, fires are spotted by observers watching from lookout towers, who then raise the alarm. This approach is costly, inefficient, and susceptible to human negligence. Therefore, designing an unmanned, intelligent mountain fire detection technology with a degree of autonomous decision-making capability is important for maintaining the normal functioning of the ecological environment and for national security.
To enable remote detection of mountain fires, techniques for detecting large-area fires with remote sensing satellites have appeared. Unfortunately, the spatial resolution of remote sensing imagery is limited by the temporal resolution and the sensing distance, is easily affected by meteorological conditions, cannot monitor a region of interest in real time, and cannot detect small fires at an early stage. Ground-based mountain fire detection makes up for these shortcomings of satellite-based detection: images are collected with visible-light or infrared cameras, and fires are detected with image processing or computer vision techniques. Early methods relied on feature engineering, extracting spatial features of the image such as color, texture and morphology, or extracting spectral features through spectral analysis; classification and localization of mountain fires were then achieved by supervised training of a classifier combined with image pyramids and sliding windows. With the rise of deep learning in recent years, attempts have been made to introduce classification or object detection techniques based on convolutional neural networks (Convolutional Neural Network, CNN) into forest fire detection. However, the early methods depend too heavily on feature engineering, and it is difficult to obtain high-performing feature descriptors suited to mountain fire detection; the subsequent two-dimensional CNN methods model only spatial features, and although they are applicable to image-level classification or detection, they still struggle to reach high accuracy on mountain fire detection, where motion features are significant.
Disclosure of Invention
To overcome the weak spatio-temporal feature modeling capability and excessive false alarm rate of existing mountain fire detection technologies, the invention provides a mountain fire detection method based on a three-dimensional convolutional neural network (3-Dimensional Convolutional Neural Network, 3DCNN). The spatio-temporal characteristics of mountain fires are modeled through deep learning to obtain high-performing video feature descriptors, on the basis of which high-accuracy mountain fire detection is realized.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a mountain fire detection method based on a three-dimensional convolutional neural network comprises the following steps:
s1, constructing a training data set: acquiring a plurality of videos with and without mountain fires in the same area, wherein the number of videos without mountain fires is more than or equal to that of videos with mountain fires, and labeling the videos with labels 1 and 0 respectively; cutting all videos into video clips with the length of 2 seconds, and marking coordinates of mountain fire areas in a first frame in the video clips to obtain a training data set;
s2, constructing a three-dimensional convolutional neural network by adopting 10 three-dimensional convolutional layers, 5 pooling layers and a Loss layer: defining a three-dimensional convolution layer as Convn, defining a pooling layer as Pooln, wherein n refers to the number of layers, and sequentially: conv1, pool1, conv2, pool2, conv3, conv4, pool3, conv5, conv6, pool4, conv7, conv8, pool5, conv9, conv10, softmax layers, wherein the convolution kernels of Conv1-Conv8 are all 3x3x3, step sizes of [1,1]The pool core size of all pool layers is 2x2, and the step size of Pool1 is [1,2]The step length of Pool2-Pool5 is [2,2]Conv9 has a convolution kernel size of 1x3x3, step size of [1,1]Conv10 has a convolution kernel size of 1x1x1, step size of [1,1]The method comprises the steps of carrying out a first treatment on the surface of the The loss function employed includes the loss L of the presence or absence of an object obj Class probability loss L class And a position size L xywh Wherein L is a loss of obj And L class The binary cross entropy is adopted, and the calculation formula is as follows:
l_i = -(y_i log x_i + (1 - y_i) log(1 - x_i)), i ∈ {0, 1, ..., N-1}
wherein x represents a predicted value, y represents a target value, and N represents a batch size;
L_xywh uses the mean square error, calculated as:
MSE_i = (x_i - y_i)^2, i ∈ {0, 1, ..., N-1}
the final total loss is the sum of L_obj, L_class and L_xywh;
s3, normalizing the training data set into a 16x224x224 video segment by adopting a mode of random space-time disturbance, random horizontal overturn and equal proportion scaling, and training the constructed three-dimensional convolutional neural network: training by adopting a random gradient descent method with a batch size of 30, gradually increasing the learning rate from 0 to 0.0005 in the first 1000 batches by adopting a learning rate preheating mode, then maintaining until 90% and 95% of the total iteration times are reached, respectively reducing the learning rate by 5 times, and obtaining a total training period of 200; obtaining a trained three-dimensional convolutional neural network;
s4, dividing the video acquired in real time into video segments, and inputting the video segments into a trained three-dimensional convolutional neural network 3DNetDet to obtain a mountain fire detection result.
The beneficial effects of the invention are as follows: (1) deep learning replaces feature engineering, so the spatio-temporal features of the image sequence are extracted automatically, greatly improving the efficiency of developing feature descriptors; (2) three-dimensional convolution replaces conventional two-dimensional convolution, so the network learns not only the spatial patterns of individual images but also the motion patterns across the image sequence, greatly improving the expressive and discriminative power of the video feature descriptors; (3) the target detector is built on a 3DCNN instead of a conventional two-dimensional convolutional network, greatly improving the accuracy of mountain fire detection.
Drawings
FIG. 1 is a mountain fire detection flow chart;
FIG. 2 is a schematic diagram of a 3DCNN based classifier;
FIG. 3 is a schematic diagram of a mountain fire detection sub-network;
fig. 4 is a schematic diagram of rectangular intersection areas and union areas.
Detailed Description
The invention will be further described with reference to the drawings and examples.
Examples
The process flow of this example is shown in Fig. 1. In this example, the mountain fire video data set is collected as follows: forest fire videos can be obtained from Internet news sites, video sites or the data release portals of large forest fire prevention projects, or dedicated personnel can, with safety ensured, simulate a forest fire and record it with a video camera. At the same time, at least an equal number of videos of the same type of region of interest in which no forest fire occurs are collected. The videos are cut into 2-second clips, and the clips without and with a mountain fire are labeled 0 and 1 respectively. If a clip contains a mountain fire, the coordinates of the fire region in the first frame of the clip are annotated, including the topmost, leftmost, bottommost and rightmost coordinates. All annotated clips together form the mountain fire video data set. In this example the proposed scheme is verified on a test set, so 80% of the clips are randomly selected as the training set and the remaining clips form the test set.
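The clip preparation and the 80/20 split described above can be sketched as follows, assuming OpenCV and NumPy are available; the file names, the fps fallback, the random seed and keeping clips in memory are illustrative assumptions rather than part of the patent.

```python
# Minimal sketch of cutting videos into 2-second clips and splitting them 80/20.
import random
import cv2
import numpy as np

def cut_into_clips(path, clip_seconds=2):
    """Cut one video into consecutive clips of `clip_seconds` seconds."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0        # fall back to 25 fps if unknown (assumption)
    frames_per_clip = int(round(fps * clip_seconds))
    clips, frames = [], []
    ok, frame = cap.read()
    while ok:
        frames.append(frame)
        if len(frames) == frames_per_clip:
            clips.append(np.stack(frames))          # (T, H, W, 3) uint8
            frames = []
        ok, frame = cap.read()
    cap.release()
    return clips

videos = [("fire_001.mp4", 1), ("no_fire_001.mp4", 0)]   # (path, label) — placeholder names
samples = [(clip, label) for path, label in videos for clip in cut_into_clips(path)]

random.seed(0)
random.shuffle(samples)
split = int(0.8 * len(samples))
train_set, test_set = samples[:split], samples[split:]
```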
A 3DCNN-based classification network is constructed and named 3DNet. The network configuration of 3DNet is shown in Table 1, where the channel order is time, height, width. All three-dimensional convolutions use 3x3x3 kernels, for a total of 8 three-dimensional convolution layers. A max-pooling layer follows each of the convolution layers Conv1, Conv2, Conv4, Conv6 and Conv8; the 5 pooling kernels are all 2x2x2; for all pooling layers except the first, the pooling stride is 2, while the first pooling layer has stride 1 in the time dimension and stride 2 in the two spatial dimensions. The last pooling layer is followed by two fully connected layers of 4096 neurons each, and finally by a Softmax layer for classification.
Table 1. Classification network 3DNet architecture
3DNet is pre-trained on the large video classification dataset Sports-1M. For each video, five 2-second clips are randomly extracted and their frames scaled to 127x171 pixels. During training, a 16x112x112 video clip is cropped from each clip by random spatio-temporal perturbation and horizontally flipped with 50% probability. Training uses stochastic gradient descent (SGD) with a batch size of 30; the initial learning rate is 0.003 and is halved every 150000 iterations, for a total of 20 epochs.
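The random spatio-temporal crop and horizontal flip used in pre-training can be sketched as below, assuming clips are stored as NumPy arrays of shape (frames, height, width, channels); the helper name, the NumPy representation and the example frame count are assumptions for illustration.

```python
# Minimal sketch of the random 16x112x112 crop with 50% horizontal flip.
import numpy as np

def random_crop_flip(clip, out_t=16, out_h=112, out_w=112, rng=np.random):
    t, h, w, _ = clip.shape
    t0 = rng.randint(0, t - out_t + 1)          # random temporal offset
    y0 = rng.randint(0, h - out_h + 1)          # random spatial offsets
    x0 = rng.randint(0, w - out_w + 1)
    crop = clip[t0:t0 + out_t, y0:y0 + out_h, x0:x0 + out_w]
    if rng.rand() < 0.5:                        # horizontal flip with 50% probability
        crop = crop[:, :, ::-1]
    return crop

# Example: a 2-second clip of 25 fps video, frames already scaled to 127x171 (assumed)
clip = np.zeros((50, 127, 171, 3), dtype=np.uint8)
sample = random_crop_flip(clip)                  # (16, 112, 112, 3)
```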
A mountain fire detection network, 3DNetDet, is built with 3DNet as the backbone. Object detection requires more detailed information than image classification, so the spatial input size of the network is doubled, to 16x224x224. All layers after the last max-pooling layer are removed and a module that performs the detection task is added. Table 2 gives the configuration of the 3DNetDet detection module. The detection module predicts the category and location of the forest fire. The Loss/Decoder layer computes the losses at training time and decodes the category probability, position and size of the mountain fire at inference time. The loss of 3DNetDet comprises three parts: an objectness loss L_obj (presence or absence of an object), a class probability loss L_class and a position-and-size loss L_xywh. L_obj and L_class use binary cross entropy, calculated as:
l_i = -(y_i log x_i + (1 - y_i) log(1 - x_i)), i ∈ {0, 1, ..., N-1}  (formula 1)
where x represents a predicted value, y a target value, and N the batch size. L_xywh uses the mean square error, calculated as:
MSE_i = (x_i - y_i)^2, i ∈ {0, 1, ..., N-1}  (formula 2)
Wherein the variables are as defined in equation 1.
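For concreteness, formulas 1 and 2 and the summation of the three loss terms can be sketched as follows, assuming a PyTorch implementation; the tensor shapes and names are illustrative, since the patent specifies only the loss types and that the total loss is their sum.

```python
# Minimal sketch of formulas 1 and 2 (PyTorch assumed).
import torch
import torch.nn.functional as F

def total_loss(pred_obj, true_obj, pred_cls, true_cls, pred_xywh, true_xywh):
    l_obj = F.binary_cross_entropy(pred_obj, true_obj)        # formula 1 (objectness)
    l_cls = F.binary_cross_entropy(pred_cls, true_cls)        # formula 1 (class probability)
    l_xywh = F.mse_loss(pred_xywh, true_xywh)                 # formula 2 (position and size)
    return l_obj + l_cls + l_xywh                             # total loss = sum of the three

N = 30  # batch size used in the embodiment
loss = total_loss(torch.rand(N), torch.randint(0, 2, (N,)).float(),
                  torch.rand(N), torch.randint(0, 2, (N,)).float(),
                  torch.rand(N, 4), torch.rand(N, 4))
```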
Table 2. 3DNetDet detection module configuration
3DNetDet is trained on the training set of the mountain fire video dataset. The backbone of 3DNetDet is initialized with the backbone parameters of the 3DCNN classification network. Each video segment in the training set is normalized to a 16x224x224 video segment by random spatio-temporal perturbation, random horizontal flipping and equal-proportion scaling. The normalized clips are additionally processed with random chroma, saturation and brightness adjustment and contrast enhancement before being fed to 3DNetDet during training. Training uses stochastic gradient descent (SGD) with a batch size of 30. With learning rate warm-up, the learning rate is increased gradually from 0 to 0.0005 over the first 1000 batches and then held constant; when 90% and 95% of the total number of iterations are reached, the learning rate is reduced by a factor of 5 each time. The total number of training epochs is 200.
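The warm-up and step-decay schedule described above can be sketched as a plain function of the iteration index; the function name and the batches-per-epoch figure in the example are assumptions for illustration.

```python
# Minimal sketch of the learning-rate schedule: linear warm-up to 0.0005 over the
# first 1000 batches, then a 5x reduction at 90% and again at 95% of all iterations.
def learning_rate(step, total_steps, base_lr=0.0005, warmup_steps=1000):
    if step < warmup_steps:
        return base_lr * step / warmup_steps        # linear warm-up from 0
    lr = base_lr
    if step >= 0.90 * total_steps:
        lr /= 5.0                                    # first reduction at 90%
    if step >= 0.95 * total_steps:
        lr /= 5.0                                    # second reduction at 95%
    return lr

# Example: 200 epochs with, say, 500 batches per epoch (the batch count is an assumption)
total_steps = 200 * 500
for step in (0, 500, 50_000, 90_001, 95_001):
    print(step, learning_rate(step, total_steps))
```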
The detection performance of 3DNetDet is tested on the test set of the mountain fire video dataset. Predictions with confidence below 0.005 are removed, detection boxes belonging to the same object are merged by non-maximum suppression, and the results are then compared with the forest fire annotations to compute precision and recall. For each retained prediction, if its maximum intersection over union (IoU) with a ground-truth box is greater than 0.5, the prediction is considered valid and the true positive (TP) counter of the corresponding class is incremented by one; otherwise the corresponding false positive (FP) counter is incremented by one. The sum of the false negative (FN) count and the TP count over the test set equals the total number of video clips labeled "1" in the test set. Precision and recall are calculated by formulas 3 and 4, respectively.
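Formulas 3 and 4 are not reproduced on this page; given the TP, FP and FN counters defined above, they correspond to the standard definitions:

precision = TP / (TP + FP)  (formula 3)

recall = TP / (TP + FN)  (formula 4)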
When 3DNetDet is deployed for mountain fire detection, a real-time video stream is acquired from a camera and split into video segments of 16 frames, with adjacent segments overlapping by 8 frames. For the prediction output by 3DNetDet, if the probability exceeds the set threshold (a typical value is 0.5), the tested video segment is considered to depict a mountain fire event; the spatial coordinates of the ignition point are then computed from the corresponding predicted box position, the installation position of the mountain fire prevention equipment (longitude, latitude and altitude), the intrinsic and extrinsic camera parameters, and the geographic information system. The forest-fire prevention system sends the 3DNetDet prediction and the spatial coordinates of the ignition point as part of the alarm information to the responsible units.
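The segmentation of the real-time stream into 16-frame segments with an 8-frame overlap can be sketched as follows; the generator-based structure and the dummy frame source are assumptions for illustration.

```python
# Minimal sketch of splitting a frame stream into 16-frame segments with 8-frame overlap.
from collections import deque

def sliding_segments(frame_stream, length=16, overlap=8):
    """Yield overlapping segments; each shares `overlap` frames with the previous one."""
    window = deque(maxlen=length)
    step = length - overlap                     # 8 new frames per segment
    since_last = 0
    for frame in frame_stream:
        window.append(frame)
        since_last += 1
        if len(window) == length and since_last >= step:
            yield list(window)
            since_last = 0

# Example with dummy frame indices; in deployment the frames come from the camera stream
for i, segment in enumerate(sliding_segments(range(64))):
    print(i, segment[0], segment[-1])           # each segment starts 8 frames after the previous
```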
Claims (1)
1. The mountain fire detection method based on the three-dimensional convolutional neural network is characterized by comprising the following steps of:
s1, constructing a training data set: acquiring a plurality of videos with and without mountain fires in the same area, wherein the number of videos without mountain fires is more than or equal to that of videos with mountain fires, and labeling the videos with labels 1 and 0 respectively; cutting all videos into video clips with the length of 2 seconds, and marking coordinates of mountain fire areas in a first frame in the video clips to obtain a training data set;
s2, constructing a three-dimensional convolutional neural network by adopting 10 three-dimensional convolutional layers, 5 pooling layers and a Loss layer: defining a three-dimensional convolution layer as Convn, defining a pooling layer as Pooln, wherein n refers to the number of layers, and sequentially: conv1, pool1, conv2, pool2, conv3, conv4, pool3, conv5, conv6, pool4, conv7, conv8, pool5, conv9, conv10, softmax layers, wherein the convolution kernels of Conv1-Conv8 are all 3x3x3, step sizes of [1,1]The pool core size of all pool layers is 2x2, and the step size of Pool1 is [1,2]The step length of Pool2-Pool5 is [2,2]Conv9 has a convolution kernel size of 1x3x3, step size of [1,1]Conv10 has a convolution kernel size of 1x1x1, step size of [1,1]The method comprises the steps of carrying out a first treatment on the surface of the The loss function employed includes the loss L of the presence or absence of an object obj Class probability loss L class And a position size L xywh Wherein L is a loss of obj And L class The binary cross entropy is adopted, and the calculation formula is as follows:
l_i = -(y_i log x_i + (1 - y_i) log(1 - x_i)), i ∈ {0, 1, ..., N-1}
wherein x represents a predicted value, y represents a target value, and N represents a batch size;
L_xywh uses the mean square error, calculated as:
MSE_i = (x_i - y_i)^2, i ∈ {0, 1, ..., N-1}
the total loss is the sum of L_obj, L_class and L_xywh;
s3, normalizing the training data set into a 16x224x224 video segment by adopting a mode of random space-time disturbance, random horizontal overturn and equal proportion scaling, and training the constructed three-dimensional convolutional neural network: training by adopting a random gradient descent method with a batch size of 30, gradually increasing the learning rate from 0 to 0.0005 in the first 1000 batches by adopting a learning rate preheating mode, then maintaining until 90% and 95% of the total iteration times are reached, respectively reducing the learning rate by 5 times, and obtaining a total training period of 200; obtaining a trained three-dimensional convolutional neural network;
s4, dividing the video acquired in real time into video segments, and inputting the video segments into a trained three-dimensional convolutional neural network to obtain a mountain fire detection result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010607252.6A CN111898440B (en) | 2020-06-30 | 2020-06-30 | Mountain fire detection method based on three-dimensional convolutional neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010607252.6A CN111898440B (en) | 2020-06-30 | 2020-06-30 | Mountain fire detection method based on three-dimensional convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111898440A CN111898440A (en) | 2020-11-06 |
CN111898440B true CN111898440B (en) | 2023-12-01 |
Family
ID=73207241
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010607252.6A Active CN111898440B (en) | 2020-06-30 | 2020-06-30 | Mountain fire detection method based on three-dimensional convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111898440B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113190031B (en) * | 2021-04-30 | 2023-03-24 | 成都思晗科技股份有限公司 | Forest fire automatic photographing and tracking method, device and system based on unmanned aerial vehicle |
CN114973584A (en) * | 2022-05-10 | 2022-08-30 | 云南电网有限责任公司电力科学研究院 | Mountain fire warning method and device, computer equipment and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106897714A (en) * | 2017-03-23 | 2017-06-27 | 北京大学深圳研究生院 | A kind of video actions detection method based on convolutional neural networks |
CN107480729A (en) * | 2017-09-05 | 2017-12-15 | 江苏电力信息技术有限公司 | A kind of transmission line forest fire detection method based on depth space-time characteristic of field |
CN108764142A (en) * | 2018-05-25 | 2018-11-06 | 北京工业大学 | Unmanned plane image forest Smoke Detection based on 3DCNN and sorting technique |
CN109389185A (en) * | 2018-11-15 | 2019-02-26 | 中国科学技术大学 | Use the video smoke recognition methods of Three dimensional convolution neural network |
CN109522819A (en) * | 2018-10-29 | 2019-03-26 | 西安交通大学 | A kind of fire image recognition methods based on deep learning |
CN109829583A (en) * | 2019-01-31 | 2019-05-31 | 成都思晗科技股份有限公司 | Mountain fire Risk Forecast Method based on probability programming technique |
CN109919993A (en) * | 2019-03-12 | 2019-06-21 | 腾讯科技(深圳)有限公司 | Parallax picture capturing method, device and equipment and control system |
CN110570615A (en) * | 2019-09-04 | 2019-12-13 | 云南电网有限责任公司带电作业分公司 | Sky-ground combined power transmission line channel forest fire trend early warning method, device and system and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3637303B1 (en) * | 2018-10-09 | 2024-02-14 | Naver Corporation | Methods for generating a base of training images, for training a cnn and for detecting a poi change in a pair of inputted poi images using said cnn |
-
2020
- 2020-06-30 CN CN202010607252.6A patent/CN111898440B/en active Active
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106897714A (en) * | 2017-03-23 | 2017-06-27 | 北京大学深圳研究生院 | A kind of video actions detection method based on convolutional neural networks |
CN107480729A (en) * | 2017-09-05 | 2017-12-15 | 江苏电力信息技术有限公司 | A kind of transmission line forest fire detection method based on depth space-time characteristic of field |
CN108764142A (en) * | 2018-05-25 | 2018-11-06 | 北京工业大学 | Unmanned plane image forest Smoke Detection based on 3DCNN and sorting technique |
CN109522819A (en) * | 2018-10-29 | 2019-03-26 | 西安交通大学 | A kind of fire image recognition methods based on deep learning |
CN109389185A (en) * | 2018-11-15 | 2019-02-26 | 中国科学技术大学 | Use the video smoke recognition methods of Three dimensional convolution neural network |
CN109829583A (en) * | 2019-01-31 | 2019-05-31 | 成都思晗科技股份有限公司 | Mountain fire Risk Forecast Method based on probability programming technique |
CN109919993A (en) * | 2019-03-12 | 2019-06-21 | 腾讯科技(深圳)有限公司 | Parallax picture capturing method, device and equipment and control system |
CN110570615A (en) * | 2019-09-04 | 2019-12-13 | 云南电网有限责任公司带电作业分公司 | Sky-ground combined power transmission line channel forest fire trend early warning method, device and system and storage medium |
Non-Patent Citations (2)
Title |
---|
Xiuqing Li; "3D parallel fully convolutional networks for real-time video wildfire smoke detection"; IEEE Transactions on Circuits and Systems for Video Technology; vol. 30, no. 1; pp. 89-103 *
Fu Guangyuan; Gu Hongyang; Wang Hongqiao; "Spectral-spatial joint classification of hyperspectral images based on convolutional neural networks"; Science Technology and Engineering (no. 21); pp. 273-279 *
Also Published As
Publication number | Publication date |
---|---|
CN111898440A (en) | 2020-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112287816B (en) | Dangerous work area accident automatic detection and alarm method based on deep learning | |
Zhu et al. | Msnet: A multilevel instance segmentation network for natural disaster damage assessment in aerial videos | |
CN107818571A (en) | Ship automatic tracking method and system based on deep learning network and average drifting | |
CN113963301A (en) | Space-time feature fused video fire and smoke detection method and system | |
CN110827505A (en) | Smoke segmentation method based on deep learning | |
CN111898440B (en) | Mountain fire detection method based on three-dimensional convolutional neural network | |
CN110751018A (en) | Group pedestrian re-identification method based on mixed attention mechanism | |
CN113362374A (en) | High-altitude parabolic detection method and system based on target tracking network | |
CN113408351B (en) | Pedestrian re-recognition method for generating confrontation network based on attitude guidance | |
Qiang et al. | Forest fire smoke detection under complex backgrounds using TRPCA and TSVB | |
CN115171047A (en) | Fire image detection method based on lightweight long-short distance attention transformer network | |
CN112862150A (en) | Forest fire early warning method based on image and video multi-model | |
CN105279485A (en) | Detection method for monitoring abnormal behavior of target under laser night vision | |
CN115690564A (en) | Outdoor fire smoke image detection method based on Recursive BIFPN network | |
CN113537226A (en) | Smoke detection method based on deep learning | |
CN116994209A (en) | Image data processing system and method based on artificial intelligence | |
Luo | Research on fire detection based on YOLOv5 | |
CN112907138A (en) | Power grid scene early warning classification method and system from local perception to overall perception | |
CN116824462A (en) | Forest intelligent fireproof method based on video satellite | |
CN116612413A (en) | Parking lot smoke detection method and device based on improved YOLOv5 and data enhancement and storage medium | |
CN115171006B (en) | Detection method for automatically identifying person entering electric power dangerous area based on deep learning | |
Supangkat et al. | Moving Image Interpretation Models to Support City Analysis | |
CN110852174A (en) | Early smoke detection method based on video monitoring | |
CN116188442A (en) | High-precision forest smoke and fire detection method suitable for any scene | |
CN115995051A (en) | Substation equipment fault period identification method based on minimum residual error square sum method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||