CN112906491A - Forest fire detection method based on multi-mode fusion technology - Google Patents

Forest fire detection method based on multi-mode fusion technology Download PDF

Info

Publication number
CN112906491A
CN112906491A
Authority
CN
China
Prior art keywords
model
resnet
forest fire
fusion technology
detection method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110111091.6A
Other languages
Chinese (zh)
Inventor
潘晓光
张娜
陈亮
张雅娜
马文芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanxi Sanyouhe Smart Information Technology Co Ltd
Original Assignee
Shanxi Sanyouhe Smart Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanxi Sanyouhe Smart Information Technology Co Ltd filed Critical Shanxi Sanyouhe Smart Information Technology Co Ltd
Priority to CN202110111091.6A
Publication of CN112906491A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B17/00Fire alarms; Alarms responsive to explosion
    • G08B17/005Fire alarms; Alarms responsive to explosion for forest fires, e.g. detecting fires spread over a large or outdoors area
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B17/00Fire alarms; Alarms responsive to explosion
    • G08B17/12Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B17/00Fire alarms; Alarms responsive to explosion
    • G08B17/12Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • G08B17/125Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions by using a video camera to detect fire or smoke


Abstract

The invention relates to the technical field of artificial intelligence image processing, in particular to a forest fire detection method based on a multi-mode fusion technology. S1, pre-training the convolutional neural network ResNet-152 on ImageNet to obtain a pre-trained ResNet-152 model; S2, extracting multispectral image features with the pre-trained ResNet-152 model; S3, extracting thermal infrared image features with the pre-trained ResNet-152 model; S4, fusing the features of the two modalities, decomposing the fused features by diagonal decomposition of the feature matrix to obtain the diagonal elements, and flattening the diagonal elements to obtain the final fused features; S5, carrying out deconvolution and activation operations to generate the final fire-passing area prediction map; S6, constructing a loss function, wherein the model adopts binary cross entropy as the loss function to be minimized; and S7, training and testing the model and optimizing the network parameters. The invention introduces a multi-mode fusion technology to improve the model's recognition of forest fires. The invention is mainly applied to forest fire detection.

Description

Forest fire detection method based on multi-mode fusion technology
Technical Field
The invention relates to the technical field of artificial intelligence image processing, in particular to a forest fire detection method based on a multi-mode fusion technology.
Background
Remote sensing images are macroscopic and timely, and have great application potential in the field of disaster management. Forest fires are a typical natural disaster; monitoring them with remote sensing technology is of great significance to the ecological environment and to human life and production.
Existing forest fire detection techniques mainly use the spectral information of remote sensing images to construct forest vegetation indices, with thermal infrared information as an auxiliary cue, and jointly detect areas with abnormal vegetation indices as the basis for judging forest fires. However, the vegetation indices these methods rely on are hand-designed features, so the detection results remain noisy and typically require extensive post-processing. Moreover, when combining thermal infrared information, traditional methods must first invert it into surface temperature information before images can be combined to assist in judging the fire-passing area. This inversion is complicated and requires multiple interactive processing steps, which makes it difficult to automate the model and, consequently, to achieve automatic monitoring of forest fires.
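The vegetation-index approach described above can be illustrated with a short sketch. This is not part of the invention; it only shows, under the common definition NDVI = (NIR − Red)/(NIR + Red), how a burned pixel's index drops and can be flagged as anomalous. The band values and the 0.2 threshold below are hypothetical.

```python
import numpy as np

# Hypothetical red and near-infrared reflectance bands (2 x 2 pixels)
red = np.array([[0.10, 0.12],
                [0.40, 0.11]])
nir = np.array([[0.50, 0.55],
                [0.42, 0.52]])

# NDVI = (NIR - Red) / (NIR + Red); healthy vegetation scores high
ndvi = (nir - red) / (nir + red)

# A burned pixel (row 1, col 0) falls below a hand-chosen threshold
anomaly = ndvi < 0.2
```

As the Background notes, the threshold itself is hand-designed, which is exactly the source of noise the invention aims to remove.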
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides the forest fire detection method based on the multi-mode fusion technology.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
A forest fire detection method based on a multi-mode fusion technology comprises the following steps:
S1, pre-training the convolutional neural network ResNet-152 on ImageNet to obtain a pre-trained ResNet-152 model;
S2, extracting multispectral image features by utilizing the pre-trained ResNet-152 model;
S3, extracting thermal infrared image features by utilizing the pre-trained ResNet-152 model;
S4, fusing the features of the two modalities, decomposing the fused features by a matrix diagonal decomposition method to obtain diagonal elements, and flattening the diagonal elements to obtain the final fused features;
S5, carrying out deconvolution and activation operations to generate the final fire-passing area prediction map;
S6, constructing a loss function, wherein the model adopts binary cross entropy as the loss function to be minimized;
and S7, training and testing the model and optimizing the network parameters.
In steps S2 and S3, the multispectral image is recorded as I_M and the thermal infrared image as I_T. The two modalities are input into the pre-trained ResNet-152 network to extract the features F_M and F_T respectively, which can be expressed as: (F_M, F_T) = ResNet152(I_M, I_T).
In steps S2 and S3, ResNet-152 is used to extract the image features: the output of the convolution layer immediately before the last output layer of ResNet-152 is taken as the final image features, of size 7 × 7 × 2048.
In step S4, the extracted features F_M and F_T of the two modalities are multiplied as vectors to obtain a feature matrix. A diagonal decomposition algorithm decomposes this feature matrix into a diagonal matrix Λ and eigenvector matrices U and V; the elements of the diagonal matrix are taken out as the final fused key feature vector. This step can be expressed as: F_M ⊙ F_T = U · Λ · V.
In step S4, since the result of multiplying the features of the two modalities is a tensor, a hyper-diagonal decomposition algorithm is adopted.
In step S5, the fused feature vector is input into 6 deconvolution and nonlinear activation operation layers, and then two convolution layers are connected to obtain a final fire-passing region prediction map.
In step S6, a binary cross entropy is established between the image of the overfire region and the real image as a loss function to be optimized.
In step S7, the MODIS fire product data is processed as a data set for model training and testing, and used to train model parameters and test model accuracy.
Compared with the prior art, the invention has the beneficial effects that:
the method has better model robustness, because the manually designed features can not extract high-level features in the image, such as texture, saturation, fine granularity and other information, the extracted features have noise, and the deep learning model can extract the high-level features in the image, so that the fire area can be better identified by combining the high-level features, and the noise in the image is reduced; fusion of multi-mode information can be automatically realized; and a multi-mode fusion technology is used for directly fusing thermal infrared information and visible light information to realize automatic identification of the fire passing area. The invention introduces a multi-mode fusion technology, can improve the recognition of the model to the forest fire and realize the automatic detection of the forest fire.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic processing diagram of the system of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1 and fig. 2, a method for detecting forest fires based on a multi-modal fusion technology includes the following steps:
S1, pre-training the convolutional neural network ResNet-152 on ImageNet to obtain a pre-trained ResNet-152 model;
S2, extracting multispectral image features by utilizing the pre-trained ResNet-152 model: the input image (of size 224 × 224 × 7) is passed into the ResNet-152 network pre-trained on ImageNet, and the output of the last fully connected layer (a 1 × 2048 feature vector) is taken as the multispectral image features.
S3, extracting thermal infrared image features by utilizing the pre-trained ResNet-152 model: the input image (of size 224 × 224 × 1) is passed into the ResNet-152 network pre-trained on ImageNet, and the output of the last fully connected layer (a 1 × 2048 feature vector) is taken as the thermal infrared image features.
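Steps S2 and S3 can be sketched as follows. This is an illustration, not the patent's code: the patent used TensorFlow 1.14, while the sketch uses the TF 2.x Keras API; `weights=None` is used here to avoid downloading the ImageNet weights, whereas the patent's pre-training corresponds to `weights="imagenet"` (which also fixes the input to 3 channels, so the 7-band multispectral input would additionally need a channel adaptation).

```python
import numpy as np
import tensorflow as tf

# ResNet-152 backbone; pooling="avg" collapses the final 7x7x2048 map
# into the 1x2048 feature vector the patent describes.
# weights=None avoids the ImageNet download in this sketch.
backbone = tf.keras.applications.ResNet152(
    include_top=False,
    weights=None,
    input_shape=(224, 224, 3),
    pooling="avg",
)

image = np.random.rand(1, 224, 224, 3).astype("float32")  # stand-in input
features = backbone(image, training=False).numpy()        # shape (1, 2048)
```

The same backbone is applied to each modality separately, yielding F_M and F_T.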
S4, fusing the characteristics of the two modes, decomposing the fused characteristics by using a matrix diagonal decomposition method to obtain diagonal elements, and flattening the diagonal elements to obtain final fused characteristics;
s5, carrying out deconvolution and activation operations to generate a final fire passing area prediction graph;
S6, constructing a loss function, wherein the model adopts binary cross entropy as the loss function to be minimized;
and S7, training and testing the model and optimizing the network parameters.
In steps S2 and S3, the multispectral image is recorded as I_M and the thermal infrared image as I_T. The two modalities are input into the pre-trained ResNet-152 network to extract the features F_M and F_T respectively, which can be expressed as: (F_M, F_T) = ResNet152(I_M, I_T).
Preferably, in steps S2 and S3, ResNet-152 is used to extract the image features: the output of the convolution layer immediately before the last output layer of ResNet-152 is taken as the final image features, of size 7 × 7 × 2048.
Preferably, in step S4, the extracted features F_M and F_T of the two modalities are multiplied as vectors to obtain a feature matrix; a diagonal decomposition algorithm decomposes this feature matrix into a diagonal matrix Λ and eigenvector matrices U and V, and the elements of the diagonal matrix are taken out as the final fused key feature vector. This step can be expressed as: F_M ⊙ F_T = U · Λ · V.
Preferably, in step S4, since the result of multiplying the features of the two modalities is a tensor, a hyper-diagonal decomposition algorithm is adopted.
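A minimal numpy sketch of step S4 follows, using SVD as a stand-in for the "diagonal decomposition" the patent names. Two assumptions are made for illustration: the feature dimension is reduced from 2048 to 64 for speed, and the matrix (not the hyper-diagonal tensor) variant is shown. Note that the outer product of two vectors is rank-1, so in this toy form only the first diagonal element is nonzero.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64                        # 2048 in the patent; reduced here for speed
f_m = rng.standard_normal(d)  # multispectral feature F_M
f_t = rng.standard_normal(d)  # thermal infrared feature F_T

# Multiply the two feature vectors into a feature matrix: F_M (outer) F_T
fused_matrix = np.outer(f_m, f_t)

# Diagonal decomposition (here: SVD): F_M . F_T = U . Lambda . V
U, diag, Vt = np.linalg.svd(fused_matrix)

# Flatten the diagonal elements of Lambda into the final fused feature vector
fused_feature = diag.ravel()
```

The flattened diagonal is what step S5 would feed into the deconvolution stack.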
Preferably, in step S5, the fused feature vector is input into 6 deconvolution and nonlinear activation layers, followed by two convolution layers and a resize operation, to obtain the final predicted fire-passing region: a mask image (224 × 224 × 1) of the same size as the input image.
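The patent fixes neither kernel sizes nor strides for the deconvolution layers. As an illustration of how the transposed-convolution size arithmetic can reach the 224 × 224 output, the sketch below assumes the fused vector is reshaped back to a 7 × 7 spatial map and uses a hypothetical kernel 4, stride 2, padding 1 configuration, which doubles the size five times; a sixth, stride-1 layer would leave the size unchanged.

```python
def deconv_out(size: int, kernel: int = 4, stride: int = 2, pad: int = 1) -> int:
    # Standard transposed-convolution output-size formula
    return (size - 1) * stride - 2 * pad + kernel

size = 7  # spatial side of a 7 x 7 feature map
for _ in range(5):
    size = deconv_out(size)  # 7 -> 14 -> 28 -> 56 -> 112 -> 224
```

Other kernel/stride combinations reach 224 as well; the resize operation mentioned above can absorb any remaining mismatch.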
Preferably, in step S6, a binary cross entropy is established between the image of the overfire region and the real image as a loss function to be optimized.
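The binary cross entropy of step S6 can be written out directly. This sketch treats the predicted mask and ground truth as flat arrays of per-pixel burn probabilities; the clipping constant is a common numerical-stability convention, not something stated in the patent.

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean binary cross entropy between a predicted burn mask and ground truth."""
    p = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return float(-np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p)))

y_true = np.array([1.0, 0.0, 1.0, 0.0])  # ground-truth burned / unburned pixels
y_pred = np.array([0.9, 0.1, 0.8, 0.2])  # predicted burn probabilities
loss = binary_cross_entropy(y_true, y_pred)
```

Minimizing this quantity over the training set is what step S7's optimization performs.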
Preferably, in step S7, the MODIS fire product data is processed as a data set for model training and testing, and used for training model parameters and testing model accuracy.
The invention is implemented in the Python language on a machine with an Intel Core i5-8600 3.1 GHz CPU, a GTX 1050 Ti 4 GB GPU, 8 GB of memory, and the Windows 10 operating system. The model involves deep learning; the deep learning framework adopted in the experiments is TensorFlow 1.14. The data used in the experiments are MODIS fire product data.
The sizes of the input multispectral and thermal infrared images are 224 × 224 × 3 and 224 × 224 × 1 pixels respectively. The images of the two modalities are input into the ResNet-152 network to obtain 1 × 2048 image features each. The two image features are then fused and hyper-diagonally decomposed to obtain a 1 × 2048 fused feature. Finally, after deconvolution and activation followed by two convolution layers, the prediction result for the fire-passing area is output at the same size as the input image, 224 × 224 × 1 pixels. During training, the batch size is set to 4, the number of epochs to 100, and dropout to 0.5 to prevent overfitting; the optimizer is AdamOptimizer, with the learning rate set to 0.0001.
To verify the effectiveness of the method, it is compared with a traditional forest fire detection method, which computes the NDVI vegetation index and searches the image for abnormal fire-passing areas. That model is simple and relies mainly on vegetation indices; the model provided by the invention combines multi-modal information, reduces noise, and realizes automatic detection of forest fires. The accuracy comparison of the final models is shown in the following table:
[Accuracy comparison table not reproduced in the source: proposed model versus the NDVI method.]
As can be seen from the table, compared with the NDVI model, the model provided by the invention achieves higher detection accuracy for forest fires and, thanks to the introduction of multi-modal information, can detect forest fires automatically while effectively reducing the noise produced by the traditional method.
Although only the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art, and all changes are encompassed in the scope of the present invention.

Claims (8)

1. A forest fire detection method based on a multi-mode fusion technology is characterized by comprising the following steps:
S1, pre-training the convolutional neural network ResNet-152 on ImageNet to obtain a pre-trained ResNet-152 model;
S2, extracting multispectral image features by utilizing the pre-trained ResNet-152 model;
S3, extracting thermal infrared image features by utilizing the pre-trained ResNet-152 model;
S4, fusing the features of the two modalities, decomposing the fused features by a matrix diagonal decomposition method to obtain diagonal elements, and flattening the diagonal elements to obtain the final fused features;
S5, carrying out deconvolution and activation operations to generate the final fire-passing area prediction map;
S6, constructing a loss function, wherein the model adopts binary cross entropy as the loss function to be minimized;
and S7, training and testing the model and optimizing the network parameters.
2. The forest fire detection method based on the multi-modal fusion technology as claimed in claim 1, wherein: in steps S2 and S3, the multispectral image is recorded as I_M and the thermal infrared image as I_T; the two modalities are input into the pre-trained ResNet-152 network to extract the features F_M and F_T respectively, which can be expressed as: (F_M, F_T) = ResNet152(I_M, I_T).
3. The forest fire detection method based on the multi-modal fusion technology as claimed in claim 1, wherein: in steps S2 and S3, ResNet-152 is used to extract the image features, and the output of the convolution layer immediately before the last output layer of ResNet-152 is taken as the final image features, of size 7 × 7 × 2048.
4. The forest fire detection method based on the multi-modal fusion technology as claimed in claim 2, wherein: in step S4, the extracted features F_M and F_T of the two modalities are multiplied as vectors to obtain a feature matrix; a diagonal decomposition algorithm decomposes this feature matrix into a diagonal matrix Λ and eigenvector matrices U and V, and the elements of the diagonal matrix are taken out as the final fused key feature vector. This step can be expressed as: F_M ⊙ F_T = U · Λ · V.
5. The forest fire detection method based on the multi-modal fusion technology as claimed in claim 4, wherein: in step S4, since the result of multiplying the features of the two modalities is a tensor, a hyper-diagonal decomposition algorithm is adopted.
6. The forest fire detection method based on the multi-modal fusion technology as claimed in claim 1, wherein: in step S5, the fused feature vector is input into 6 deconvolution and nonlinear activation operation layers, and then two convolution layers are connected to obtain a final fire-passing region prediction map.
7. The forest fire detection method based on the multi-modal fusion technology as claimed in claim 1, wherein: in step S6, a binary cross entropy is established between the image of the overfire region and the real image as a loss function to be optimized.
8. The forest fire detection method based on the multi-modal fusion technology as claimed in claim 1, wherein: in step S7, the MODIS fire product data is processed as a data set for model training and testing, and used to train model parameters and test model accuracy.
CN202110111091.6A 2021-01-26 2021-01-26 Forest fire detection method based on multi-mode fusion technology Pending CN112906491A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110111091.6A CN112906491A (en) 2021-01-26 2021-01-26 Forest fire detection method based on multi-mode fusion technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110111091.6A CN112906491A (en) 2021-01-26 2021-01-26 Forest fire detection method based on multi-mode fusion technology

Publications (1)

Publication Number Publication Date
CN112906491A true CN112906491A (en) 2021-06-04

Family

ID=76118837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110111091.6A Pending CN112906491A (en) 2021-01-26 2021-01-26 Forest fire detection method based on multi-mode fusion technology

Country Status (1)

Country Link
CN (1) CN112906491A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023000949A1 (en) * 2021-07-19 2023-01-26 清华大学 Detection method and device for video monitoring fire
CN116883785A (en) * 2023-07-17 2023-10-13 中国科学院地理科学与资源研究所 Forest carbon density data set extraction method
CN117010532A (en) * 2023-10-07 2023-11-07 电子科技大学 Comprehensive pipe gallery fire trend prediction method based on multi-mode deep learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106683038A (en) * 2016-11-17 2017-05-17 云南电网有限责任公司电力科学研究院 Method and device for generating fire situation map
CN108664906A (en) * 2018-04-27 2018-10-16 温州大学激光与光电智能制造研究院 The detection method of content in a kind of fire scenario based on convolutional network
CN111079572A (en) * 2019-11-29 2020-04-28 南京恩博科技有限公司 Forest smoke and fire detection method based on video understanding, storage medium and equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106683038A (en) * 2016-11-17 2017-05-17 云南电网有限责任公司电力科学研究院 Method and device for generating fire situation map
CN108664906A (en) * 2018-04-27 2018-10-16 温州大学激光与光电智能制造研究院 The detection method of content in a kind of fire scenario based on convolutional network
CN111079572A (en) * 2019-11-29 2020-04-28 南京恩博科技有限公司 Forest smoke and fire detection method based on video understanding, storage medium and equipment

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
周浪 et al.: "Research on forest fire recognition based on the Sparse-DenseNet model", Journal of Beijing Forestry University (《北京林业大学学报》), vol. 42, no. 10, 31 October 2020, pages 36-44 *
李光鑫: "Research on infrared and visible light image fusion technology", China Doctoral Dissertations Full-text Database (Information Science and Technology), 15 November 2008 *
金萌萌: "Research on feature fusion methods for multi-source images", China Master's Theses Full-text Database (Information Science and Technology), 15 March 2015 *
陈广秋: "PCNN-based infrared and visible light image fusion using singular value decomposition", Chinese Journal of Liquid Crystals and Displays (《液晶与显示》), 28 February 2015 *
陈欣: "Fire smoke detection method based on deep neural networks", China Master's Theses Full-text Database (Information Science and Technology), 15 January 2010, page 3 *
黄俊: "Research on visible light and infrared image stitching algorithms based on a UAV platform", China Master's Theses Full-text Database (Information Science and Technology), 15 December 2013, pages 1-1 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023000949A1 (en) * 2021-07-19 2023-01-26 清华大学 Detection method and device for video monitoring fire
CN116883785A (en) * 2023-07-17 2023-10-13 中国科学院地理科学与资源研究所 Forest carbon density data set extraction method
CN116883785B (en) * 2023-07-17 2024-03-12 中国科学院地理科学与资源研究所 Forest carbon density data set extraction method
CN117010532A (en) * 2023-10-07 2023-11-07 电子科技大学 Comprehensive pipe gallery fire trend prediction method based on multi-mode deep learning
CN117010532B (en) * 2023-10-07 2024-02-02 电子科技大学 Comprehensive pipe gallery fire trend prediction method based on multi-mode deep learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210604