CN110059723B - Robust smoke detection method based on integrated deep convolutional neural network - Google Patents
- Publication number
- CN110059723B (application CN201910206672.0A)
- Authority
- CN
- China
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
A robust smoke detection method based on an ensemble of deep convolutional neural networks, belonging to the fields of image recognition and artificial intelligence. The invention combines deep convolutional neural networks of different structures into an ensemble classifier by means of ensemble learning. The method can detect smoke from factory chimneys, flares, and various other target scenes. Timely smoke detection supports pollution control in industrial settings and can also serve public-safety applications such as forest-fire early warning. Applied to waste-gas treatment systems and related fields, the method markedly improves accuracy over existing approaches and avoids their extensive parameter-tuning work. The invention enables accurate real-time control of waste-gas generation and discharge and gives early warning of smoke, significantly reducing emissions of toxic and harmful gases while greatly saving labor.
Description
Technical Field
The invention integrates several deep convolutional neural networks of different structures to build a smoke detection model for grayscale images: taking a grayscale image as input, it detects smoke in the image. Applied to waste-gas treatment and smoke detection, the method accurately controls the production and discharge of waste gas in real time, markedly reducing emissions of toxic and harmful gases, lowering energy consumption, saving human resources, and improving production efficiency. This smoke detection method, based on deep convolutional neural networks and ensemble learning, belongs to the fields of image recognition and artificial intelligence.
Background
In recent years, China has been committed to protecting the environment, saving energy, reducing emissions, and curbing air pollution. The new Ambient Air Quality Standard and the emission standards for industries such as thermal power, steel, cement, chemicals, and non-power coal-fired boilers are the strongest policies driving enterprises to implement smoke treatment projects. However, traditional industries such as thermal power and chemicals still account for a large share of China's economy, the discharge of waste gas remains high, and methods for detecting the waste gas emitted from flares, chimneys, and similar sources are lacking, so air pollution still troubles many enterprises.
Traditional smoke detection relies mainly on manual observation or sensors. Because human resources are limited and costly, manual observation cannot monitor smoke quickly and effectively over long periods. Smoke sensors based on sampling smoke particles or relative humidity, on the other hand, can exhibit severe time lag under environmental changes and cannot fully cover the detection area. Overall, existing smoke detection methods struggle to meet practical requirements.
In recent years, image recognition with convolutional neural networks has advanced greatly; in particular, with the improved computing power of modern computers, deep convolutional neural networks achieve accurate recognition by learning effective features of the target from large numbers of samples. However, because there is no standard for a network's specific parameter settings, networks with different structures and parameters often differ widely in performance, and the combinatorial diversity of possible settings makes it impractical to determine the best structure by testing every candidate. To address these problems, the invention integrates neural networks of several different structures, which both provides classifier diversity, improving accuracy, and avoids the difficulty of determining an optimal network structure through repeated trials.
Disclosure of Invention
The invention integrates several deep convolutional neural networks of different structures to build a smoke detection model for grayscale images: taking a grayscale image of the detection target as input, it detects whether smoke is present in the image. Compared with existing methods, accuracy is markedly improved and repeated parameter tuning is avoided, so smoke in the input image is detected accurately in real time, creating the conditions for accurate real-time control of the combustion process and the discharge of waste gas.
the invention adopts the following technical scheme and implementation steps:
1. A robust smoke detection method based on an integrated deep convolutional neural network comprises the following steps:
The method detects whether smoke is present in a target scene, taking a grayscale image of the scene as input, and is characterized by comprising the following steps:
(1) Design and train several deep convolutional sub-networks. The networks take grayscale images as input, and the sub-networks must reflect structural diversity; three basic neural network structures are designed in this patent: DN11, DN8, and DN5, each generating 10 different networks, for 30 structurally different sub-networks in total.
The structure of DN11 is as follows: layers 1 through 4 of the network are convolutional layers with kernel size U, V kernels per layer, convolution stride 1, a ReLU activation function, all-zero padding of the input feature map, and batch normalization on each layer; layer 5 is a pooling layer with a 3×3 pooling window and max pooling, all-zero padding of the input feature map, and a stride of 2 both horizontally and vertically; layers 6 through 8 are convolutional layers with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization on each layer; layer 9 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 10 through 13 are convolutional layers with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization on each layer; layer 14 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 15 and 16 are fully connected layers with 2048 neurons each and a dropout probability of 0.5; finally, a softmax function outputs the classification result: smoke or no smoke.
The structure of DN8 is as follows: layers 1 through 3 of the network are convolutional layers with kernel size U, V kernels per layer, convolution stride 1, a ReLU activation function, all-zero padding of the input feature map, and batch normalization on each layer; layer 4 is a pooling layer with a 3×3 pooling window and max pooling, all-zero padding of the input feature map, and a stride of 2 both horizontally and vertically; layers 5 and 6 are convolutional layers with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization on each layer; layer 7 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 8 through 10 are convolutional layers with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization on each layer; layer 11 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 12 and 13 are fully connected layers with 2048 neurons each and a dropout probability of 0.5; finally, a softmax function outputs the classification result: smoke or no smoke.
The structure of DN5 is as follows: layers 1 and 2 of the network are convolutional layers with kernel size U, V kernels per layer, convolution stride 1, a ReLU activation function, all-zero padding of the input feature map, and batch normalization on each layer; layer 3 is a pooling layer with a 3×3 pooling window and max pooling, all-zero padding of the input feature map, and a stride of 2 both horizontally and vertically; layer 4 is a convolutional layer with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization; layer 5 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 6 and 7 are convolutional layers with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization on each layer; layer 8 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 9 and 10 are fully connected layers with 2048 neurons each and a dropout probability of 0.5; finally, a softmax function outputs the classification result: smoke or no smoke.
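As one hedged illustration, the layer sequences above can be written down as framework-agnostic specifications. The sketch below encodes DN5 (the shortest of the three) as a parameterized list; the function name, field names, and tuple layout are our own assumptions, not taken from the patent.

```python
# Hypothetical, framework-agnostic sketch of the DN5 structure described above.
# Each tuple is (layer_type, settings): U is the convolution kernel size and V
# the number of kernels per convolutional layer.
def dn5_spec(U, V):
    conv = ("conv", {"kernel": U, "filters": V, "stride": 1,
                     "activation": "relu", "padding": "zero", "batchnorm": True})
    return [
        conv, conv,                                   # layers 1-2: convolution
        ("maxpool", {"window": 3, "stride": 2, "padding": "zero"}),  # layer 3
        conv,                                         # layer 4: convolution
        ("maxpool", {"window": 2, "stride": 2}),      # layer 5
        conv, conv,                                   # layers 6-7: convolution
        ("maxpool", {"window": 2, "stride": 2}),      # layer 8
        ("dense", {"units": 2048, "dropout": 0.5}),   # layers 9-10: fully connected
        ("dense", {"units": 2048, "dropout": 0.5}),
        ("softmax", {"classes": 2}),                  # output: smoke / no smoke
    ]

spec = dn5_spec(U=3, V=32)
print(len(spec))  # 11 entries: the 10 numbered layers plus the softmax output
```

DN11 and DN8 would be built the same way, with the longer convolution runs listed above.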
In the above three basic neural network structures, different sub-networks are obtained by assigning different values to U and V. The method sets U to 3, 5, 7, 9, or 11 and V to 32 or 16; the resulting combinations of U and V values generate 10 different sub-networks for each of the three structures DN11, DN8, and DN5, giving 30 sub-neural networks in total.
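A minimal sketch of enumerating those 30 configurations; the base names and the tuple layout used here are illustrative choices, not given in the patent.

```python
from itertools import product

# Kernel sizes U in {3, 5, 7, 9, 11} crossed with kernel counts V in {32, 16}
# give 10 (U, V) pairs, applied to each of the three base structures.
U_VALUES = [3, 5, 7, 9, 11]
V_VALUES = [32, 16]
BASES = ["DN11", "DN8", "DN5"]

configs = [(base, U, V) for base in BASES for U, V in product(U_VALUES, V_VALUES)]
print(len(configs))  # 30 sub-network configurations
```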
(2) Integrating and pruning the 30 generated sub-neural networks, wherein the method comprises the following specific steps:
Firstly, the sub-network with the best accuracy among the 30 is selected and placed in the ensemble classifier. Then the next sub-network to join the ensemble is found iteratively from the 30 networks as the one that gives the ensemble the best accuracy on the validation set. These steps are repeated q times, and each sub-network may be selected at most N times, with N set to 2, yielding an ensemble classifier containing q sub-networks.
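The greedy selection step above can be sketched as follows, using mock validation predictions in place of trained networks. Majority voting over members is our assumption, since the patent does not state how member outputs are combined, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock validation predictions: 30 sub-networks (rows) on 200 images (columns),
# each mock network agreeing with the ground truth on roughly 80% of images.
n_nets, n_val = 30, 200
y_true = rng.integers(0, 2, n_val)
preds = np.where(rng.random((n_nets, n_val)) < 0.8, y_true, 1 - y_true)

def ensemble_acc(members):
    votes = preds[members].sum(axis=0)            # count of "smoke" votes
    return float(np.mean((votes * 2 > len(members)) == y_true))

def greedy_select(q, max_repeat=2):
    # Start with the single most accurate network, then q-1 times add whichever
    # network (chosen at most max_repeat times) maximizes ensemble accuracy.
    members = [int(np.argmax([ensemble_acc([i]) for i in range(n_nets)]))]
    for _ in range(q - 1):
        best_i, best_a = None, -1.0
        for i in range(n_nets):
            if members.count(i) >= max_repeat:
                continue
            a = ensemble_acc(members + [i])
            if a > best_a:
                best_i, best_a = i, a
        members.append(best_i)
    return members

ensemble = greedy_select(q=7)
print(len(ensemble), round(ensemble_acc(ensemble), 3))
```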
A set of all unselected sub-networks is established; each sub-network in the ensemble classifier is then exchanged in turn with each unselected sub-network, and if the exchange improves the accuracy of the ensemble classifier it is kept, otherwise the ensemble is left unchanged.
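The exchange step can be sketched in the same style. The block is standalone, again using mock predictions in place of trained networks and assuming majority voting, which the patent leaves unspecified.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mock validation predictions for 30 sub-networks on 200 images; each mock
# network agrees with the ground truth on roughly 80% of images.
n_nets, n_val = 30, 200
y_true = rng.integers(0, 2, n_val)
preds = np.where(rng.random((n_nets, n_val)) < 0.8, y_true, 1 - y_true)

def ensemble_acc(members):
    votes = preds[members].sum(axis=0)
    return float(np.mean((votes * 2 > len(members)) == y_true))

def swap_refine(members):
    # Tentatively swap each member with each unselected network, keeping a
    # swap only when it improves validation accuracy.
    members = list(members)
    unselected = [i for i in range(n_nets) if i not in members]
    for pos in range(len(members)):
        for cand in list(unselected):
            trial = members.copy()
            trial[pos] = cand
            if ensemble_acc(trial) > ensemble_acc(members):
                unselected.remove(cand)           # the swapped-out member
                unselected.append(members[pos])   # becomes unselected again
                members = trial
                break
    return members

initial = [0, 1, 2, 3, 4]
refined = swap_refine(initial)
print(round(ensemble_acc(initial), 3), "->", round(ensemble_acc(refined), 3))
```

Because a swap is accepted only when accuracy strictly improves, the refined ensemble is never worse than the initial one on the validation set.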
Taking q over the values from 5 to 35, steps (1) and (2) are repeated for each value to find the q with the highest accuracy, and the ensemble classifier with the highest accuracy is taken as the final ensemble classifier.
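The outer search over the ensemble size q can be sketched as a simple loop. Here build_ensemble is a hypothetical stand-in for steps (1) and (2): it returns a mock validation accuracy rigged to peak at q = 15, the value the detailed description later reports as best, purely so the loop is runnable.

```python
# Sweep the ensemble size q and keep the most accurate result.
def build_ensemble(q):
    # stand-in for greedy selection + swap refinement at size q
    return 0.98 - 0.001 * abs(q - 15)

best_q = max(range(5, 36), key=build_ensemble)
print(best_q)  # 15
```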
This results in an integrated classifier.
The main features of the invention are:
(1) neural networks of different structures capture the characteristics of an image from different angles, so the invention integrates several structurally different sub-networks to achieve higher accuracy;
(2) whereas existing neural-network algorithms require a large amount of parameter tuning, the invention avoids repeated tuning by integrating several networks with different parameters, reducing the workload.
drawings
FIG. 1 is a block diagram of the present invention.
Detailed Description
The invention integrates several deep convolutional neural networks of different structures to build a smoke detection model for grayscale images: taking a grayscale image of the detection target as input, it detects whether smoke is present in the image. Compared with existing methods, accuracy is markedly improved and repeated parameter tuning is avoided, so smoke in the input image is detected accurately in real time, creating the conditions for accurate real-time control of the combustion process and the discharge of waste gas.
the invention adopts the following technical scheme and implementation steps:
1. Design and train several deep convolutional sub-networks. The networks take grayscale images as input, and the sub-networks must reflect structural diversity; three basic neural network structures are designed in this patent: DN11, DN8, and DN5, each generating 10 different networks, for 30 structurally different sub-networks in total.
The structure of DN11 is as follows: layers 1 through 4 of the network are convolutional layers with kernel size U, V kernels per layer, convolution stride 1, a ReLU activation function, all-zero padding of the input feature map, and batch normalization on each layer; layer 5 is a pooling layer with a 3×3 pooling window and max pooling, all-zero padding of the input feature map, and a stride of 2 both horizontally and vertically; layers 6 through 8 are convolutional layers with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization on each layer; layer 9 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 10 through 13 are convolutional layers with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization on each layer; layer 14 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 15 and 16 are fully connected layers with 2048 neurons each and a dropout probability of 0.5; finally, a softmax function outputs the classification result: smoke or no smoke.
The structure of DN8 is as follows: layers 1 through 3 of the network are convolutional layers with kernel size U, V kernels per layer, convolution stride 1, a ReLU activation function, all-zero padding of the input feature map, and batch normalization on each layer; layer 4 is a pooling layer with a 3×3 pooling window and max pooling, all-zero padding of the input feature map, and a stride of 2 both horizontally and vertically; layers 5 and 6 are convolutional layers with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization on each layer; layer 7 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 8 through 10 are convolutional layers with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization on each layer; layer 11 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 12 and 13 are fully connected layers with 2048 neurons each and a dropout probability of 0.5; finally, a softmax function outputs the classification result: smoke or no smoke.
The structure of DN5 is as follows: layers 1 and 2 of the network are convolutional layers with kernel size U, V kernels per layer, convolution stride 1, a ReLU activation function, all-zero padding of the input feature map, and batch normalization on each layer; layer 3 is a pooling layer with a 3×3 pooling window and max pooling, all-zero padding of the input feature map, and a stride of 2 both horizontally and vertically; layer 4 is a convolutional layer with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization; layer 5 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 6 and 7 are convolutional layers with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization on each layer; layer 8 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 9 and 10 are fully connected layers with 2048 neurons each and a dropout probability of 0.5; finally, a softmax function outputs the classification result: smoke or no smoke.
In the above three basic neural network structures, different sub-networks are obtained by assigning different values to U and V. The method sets U to 3, 5, 7, 9, or 11 and V to 32 or 16; the resulting combinations of U and V values generate 10 different sub-networks for each of the three structures DN11, DN8, and DN5, giving 30 sub-neural networks in total. All of the networks are trained on 9016 smoke images and 8363 smoke-free images.
2. Integrating and pruning the 30 generated sub-neural networks, wherein the method comprises the following specific steps:
(1) Firstly, the sub-network with the best accuracy among the 30 is selected and placed in the ensemble classifier. Then the next sub-network to join the ensemble is found iteratively from the 30 networks as the one that gives the ensemble the best accuracy on the validation set. These steps are repeated q times, and each sub-network may be selected at most N times, with N set to 2, yielding an ensemble classifier containing q sub-networks. The validation set consists of 8804 smoke images and 8511 smoke-free images.
(2) A set of all unselected sub-networks is established; each sub-network in the ensemble classifier is then exchanged in turn with each unselected sub-network, and if the exchange improves the accuracy of the ensemble classifier it is kept, otherwise the ensemble is left unchanged.
(3) Taking q over the values from 5 to 35, steps (1) and (2) are repeated for each value; the value of q achieving the highest accuracy is found to be 15, and the ensemble classifier with the highest accuracy is taken as the final ensemble classifier.
An ensemble classifier is thus obtained; in testing, it reaches an accuracy of 98.71% on a test set of 1240 smoke images and 1648 smoke-free images.
Claims (1)
1. A robust smoke detection method based on an integrated deep convolutional neural network is characterized by comprising the following steps:
the first step is as follows: designing and training a plurality of deep sub-convolution neural networks;
the second step is that: building a learner of an integrated neural network, pruning and removing a negative sub-neural network;
in the first step:
a plurality of deep convolutional sub-networks are designed and trained, wherein the input of the networks is grayscale images and the sub-networks must reflect structural diversity; the method designs three basic neural network structures, DN11, DN8, and DN5, and generates 10 different neural networks from each, obtaining 30 sub-neural networks of different structures in total;
the structure of DN11 is as follows: layers 1 through 4 of the network are convolutional layers with kernel size U, V kernels per layer, convolution stride 1, a ReLU activation function, all-zero padding of the input feature map, and batch normalization on each layer; layer 5 is a pooling layer with a 3×3 pooling window and max pooling, all-zero padding of the input feature map, and a stride of 2 both horizontally and vertically; layers 6 through 8 are convolutional layers with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization on each layer; layer 9 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 10 through 13 are convolutional layers with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization on each layer; layer 14 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 15 and 16 are fully connected layers with 2048 neurons each and a dropout probability of 0.5; finally, a softmax function outputs the classification result, the classification result being smoke or no smoke;
the structure of DN8 is as follows: layers 1 through 3 of the network are convolutional layers with kernel size U, V kernels per layer, convolution stride 1, a ReLU activation function, all-zero padding of the input feature map, and batch normalization on each layer; layer 4 is a pooling layer with a 3×3 pooling window and max pooling, all-zero padding of the input feature map, and a stride of 2 both horizontally and vertically; layers 5 and 6 are convolutional layers with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization on each layer; layer 7 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 8 through 10 are convolutional layers with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization on each layer; layer 11 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 12 and 13 are fully connected layers with 2048 neurons each and a dropout probability of 0.5; finally, a softmax function outputs the classification result, the classification result being smoke or no smoke;
the structure of DN5 is as follows: layers 1 and 2 of the network are convolutional layers with kernel size U, V kernels per layer, convolution stride 1, a ReLU activation function, all-zero padding of the input feature map, and batch normalization on each layer; layer 3 is a pooling layer with a 3×3 pooling window and max pooling, all-zero padding of the input feature map, and a stride of 2 both horizontally and vertically; layer 4 is a convolutional layer with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization; layer 5 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 6 and 7 are convolutional layers with kernel size U, V kernels, stride 1, ReLU activation, all-zero padding, and batch normalization on each layer; layer 8 is a pooling layer with a 2×2 pooling window and max pooling, with a stride of 2 in both directions; layers 9 and 10 are fully connected layers with 2048 neurons each and a dropout probability of 0.5; finally, a softmax function outputs the classification result, the classification result being smoke or no smoke;
In the above three basic neural network structures, U takes the values 3, 5, 7, 9, and 11, and V takes the values 32 and 16; permuting and combining the different values of U and V yields 10 different sub-networks for each of the DN11, DN8, and DN5 structures, giving a total of 30 sub-neural networks;
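The 30 sub-networks arise from the Cartesian product of kernel sizes and kernel counts applied to each of the three base structures, which can be enumerated directly:

```python
from itertools import product

kernel_sizes = [3, 5, 7, 9, 11]   # values of U
kernel_counts = [32, 16]          # values of V
base_structures = ["DN11", "DN8", "DN5"]

# Each base structure is instantiated once per (U, V) combination.
sub_networks = [(net, u, v)
                for net in base_structures
                for u, v in product(kernel_sizes, kernel_counts)]

print(len(sub_networks))  # 3 structures x 5 kernel sizes x 2 counts = 30
```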
The 30 generated sub-neural networks are integrated and pruned; the specific steps are as follows:
(1) First, the sub-network with the best accuracy among the 30 sub-networks is selected and placed into the integrated classifier; then, the next sub-network whose addition enables the integrated classifier to achieve the best accuracy on the validation set is found from the 30 networks by iteration. This step is repeated q times in total, with each sub-network selectable at most N times, where N is set to 2, so that an integrated classifier containing q sub-networks is obtained;
(2) A set of all unselected sub-networks is established; the sub-networks in the integrated classifier are then iteratively and sequentially exchanged with the unselected sub-networks: if an exchange improves the accuracy of the integrated classifier, it is kept; otherwise the classifier remains unchanged;
(3) With q taking values from 5 to 35, steps (1) and (2) are repeated for each value of q, the value of q that achieves the highest accuracy is found, and the integrated classifier with the highest accuracy is taken as the final integrated classifier;
This results in the final integrated classifier.
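The selection-and-swap procedure of steps (1) and (2) can be sketched as greedy forward selection with bounded repetition, followed by a hill-climbing swap pass. Majority voting over sub-network predictions is assumed here as the combination rule, since this passage does not state how the sub-network outputs are merged; the sub-network names and toy data are purely illustrative.

```python
from collections import Counter

def ensemble_accuracy(members, preds, labels):
    """Majority-vote accuracy of the chosen members on the validation set."""
    correct = 0
    for i, y in enumerate(labels):
        votes = Counter(preds[m][i] for m in members)
        if votes.most_common(1)[0][0] == y:
            correct += 1
    return correct / len(labels)

def build_ensemble(preds, labels, q, max_repeats=2):
    """Greedy forward selection (step 1) followed by a swap pass (step 2).

    preds  : dict mapping sub-network id -> predicted labels on validation set
    labels : true validation labels
    q      : number of ensemble members; each sub-network may appear at most
             max_repeats times (N = 2 in the text).
    """
    members = []
    for _ in range(q):
        best, best_acc = None, -1.0
        for m in preds:
            if members.count(m) >= max_repeats:
                continue
            acc = ensemble_accuracy(members + [m], preds, labels)
            if acc > best_acc:
                best, best_acc = m, acc
        members.append(best)
    # Swap pass: try replacing each member with an unselected sub-network
    # and keep the exchange only if validation accuracy improves.
    for i in range(len(members)):
        for cand in preds:
            if cand in members:
                continue
            trial = members[:i] + [cand] + members[i + 1:]
            if ensemble_accuracy(trial, preds, labels) > ensemble_accuracy(members, preds, labels):
                members = trial
    return members

# Toy example with three hypothetical sub-networks on four samples.
labels = [1, 0, 1, 1]
preds = {"A": [1, 0, 0, 1], "B": [1, 1, 1, 1], "C": [0, 0, 1, 1]}
members = build_ensemble(preds, labels, q=3)
print(members, ensemble_accuracy(members, preds, labels))
```

Step (3) would simply wrap `build_ensemble` in a loop over q and keep the value with the best validation accuracy.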
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910206672.0A CN110059723B (en) | 2019-03-19 | 2019-03-19 | Robust smoke detection method based on integrated deep convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110059723A CN110059723A (en) | 2019-07-26 |
CN110059723B true CN110059723B (en) | 2021-01-05 |
Family
ID=67317210
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110956611A (en) * | 2019-11-01 | 2020-04-03 | 武汉纺织大学 | Smoke detection method integrated with convolutional neural network |
CN111008612A (en) * | 2019-12-24 | 2020-04-14 | 标旗(武汉)信息技术有限公司 | Production frequency statistical method, system and storage medium |
CN112801187B (en) * | 2021-01-29 | 2023-01-31 | 广东省科学院智能制造研究所 | Hyperspectral data analysis method and system based on attention mechanism and ensemble learning |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012041333A1 (en) * | 2010-09-30 | 2012-04-05 | Visiopharm A/S | Automated imaging, detection and grading of objects in cytological samples |
CN106228150A (en) * | 2016-08-05 | 2016-12-14 | 南京工程学院 | Smog detection method based on video image |
CN107749067A (en) * | 2017-09-13 | 2018-03-02 | 华侨大学 | Fire hazard smoke detecting method based on kinetic characteristic and convolutional neural networks |
JP2018101416A (en) * | 2016-12-21 | 2018-06-28 | ホーチキ株式会社 | Fire monitoring system |
CN109271906A (en) * | 2018-09-03 | 2019-01-25 | 五邑大学 | A kind of smog detection method and its device based on depth convolutional neural networks |
CN109376695A (en) * | 2018-11-26 | 2019-02-22 | 北京工业大学 | A kind of smog detection method based on depth hybrid neural networks |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109086803B (en) * | 2018-07-11 | 2022-10-14 | 南京邮电大学 | Deep learning and personalized factor-based haze visibility detection system and method |
Non-Patent Citations (2)
Title |
---|
Video fire smoke detection using motion and color features; YU Chun-yu et al.; Fire Technology; Dec. 2010; full text * |
Video dynamic smoke detection based on cascaded convolutional neural networks; CHEN Jun-zhou et al.; Journal of University of Electronic Science and Technology of China; Nov. 2016; Vol. 45, No. 6; full text * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110059723B (en) | Robust smoke detection method based on integrated deep convolutional neural network | |
CN108064047B (en) | Water quality sensor network optimization deployment method based on particle swarm | |
CN109376695B (en) | Smoke detection method based on deep hybrid neural network | |
CN110909483A (en) | Point source atmospheric pollutant emission list verification method based on gridding data | |
CN107944173B (en) | Dioxin soft measurement system based on selective integrated least square support vector machine | |
CN113239991B (en) | Flame image oxygen concentration prediction method based on regression generation countermeasure network | |
CN109922478B (en) | Water quality sensor network optimization deployment method based on improved cuckoo algorithm | |
CN111144609A (en) | Boiler exhaust emission prediction model establishing method, prediction method and device | |
WO2021159585A1 (en) | Dioxin emission concentration prediction method | |
CN105116730B (en) | Hydrogen-fuel engine electronic spark advance angle and optimizing system and its optimization method based on Particle Group Fuzzy Neural Network | |
CN112464544A (en) | Method for constructing model for predicting dioxin emission concentration in urban solid waste incineration process | |
Tan et al. | NOx emission model for coal-fired boilers using principle component analysis and support vector regression | |
Ho et al. | Measurement of biological aerosol with a fluorescent aerodynamic particle sizer (FLAPS): correlation of optical data with biological data | |
CN112967764B (en) | Multi-technology coupled pollutant source analysis method and device | |
CN110595972A (en) | Analysis method of PM2.5 concentration value and influence factor | |
CN114912855B (en) | Method and system for evaluating waste gas treatment effect | |
CN114462717A (en) | Small sample gas concentration prediction method based on improved GAN and LSTM | |
CN114266461A (en) | MSWI process dioxin emission risk early warning method based on visual distribution GAN | |
CN107944205B (en) | Water area characteristic model establishing method based on Gaussian smoke plume model | |
CN110988263A (en) | Vehicle exhaust concentration estimation method based on improved Stacking model | |
CN112241800B (en) | Method for calculating VOCs pollutant emission amount of coke oven | |
Sonko et al. | The study of population morbidity based on the spatial diffuse models in old industrial region of Krivbass | |
CN109059870B (en) | Boiler atmosphere pollutant emission monitoring system and inspection method based on unmanned aerial vehicle aerial image | |
CN101644699B (en) | Fresh fuel online identification method | |
CN111462835B (en) | Dioxin emission concentration soft measurement method based on depth forest regression algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||