CN111754404A - Remote sensing image space-time fusion method based on multi-scale mechanism and attention mechanism


Info

Publication number: CN111754404A (application CN202010560118.5A; granted as CN111754404B)
Authority: CN (China)
Prior art keywords: feature, attention mechanism, image, fusion, images
Prior art date: 2020-06-18
Legal status: Granted
Application number: CN202010560118.5A
Other languages: Chinese (zh)
Other versions: CN111754404B (en)
Inventors: 李伟生 (Li Weisheng), 张夏嫣 (Zhang Xiayan)
Current Assignee: Hefei Minglong Electronic Technology Co., Ltd.
Original Assignee: Chongqing University of Posts and Telecommunications
Priority date / Filing date: 2020-06-18
Application filed by: Chongqing University of Posts and Telecommunications
Priority application: CN202010560118.5A
Publication of CN111754404A: 2020-10-09
Application granted; publication of CN111754404B: 2022-07-01
Legal status: Active

Classifications

    • G06T3/4046: Scaling of whole images or parts thereof, e.g. expanding or contracting, using neural networks (G Physics > G06 Computing; calculating or counting > G06T Image data processing or generation, in general > G06T3/00 Geometric image transformations in the plane of the image > G06T3/40 Scaling)
    • G06N3/045: Combinations of networks (G06N Computing arrangements based on specific computational models > G06N3/00 Computing arrangements based on biological models > G06N3/02 Neural networks > G06N3/04 Architecture, e.g. interconnection topology)
    • G06N3/048: Activation functions (same branch as G06N3/04)
    • G06N3/08: Learning methods (G06N3/02 Neural networks)
    • G06T3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution (G06T3/40 Scaling)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a remote sensing image space-time fusion method based on a multi-scale mechanism and an attention mechanism, comprising the following steps: S1, inputting the high-temporal, low-spatial-resolution images and the low-temporal, high-spatial-resolution image into two different parallel convolutional neural networks and extracting each network's feature maps at different scales; S2, selecting feature maps at three scales, fusing them scale by scale, resampling the three fused maps to a uniform scale, and merging them into a single feature map; S3, inputting the fused feature map into an attention mechanism that assigns different weights to the features and channels of the feature map; and S4, reconstructing the image with fully connected layers using the weighted feature map to obtain an image with both high spatial and high temporal resolution. The invention improves the accuracy of remote sensing image space-time fusion and has advantages in time efficiency.

Description

Remote sensing image space-time fusion method based on multi-scale mechanism and attention mechanism
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a remote sensing image space-time fusion method based on a multi-scale mechanism and an attention mechanism.
Background
Limited by current hardware technology and budgets, available satellite images involve a trade-off between temporal and spatial resolution. For example, the Moderate Resolution Imaging Spectroradiometer (MODIS) can acquire images of the same region every day or half day, but at a spatial resolution of 250 to 1000 meters, while the U.S. Landsat satellites acquire images with a spatial resolution of 10 to 30 meters but revisit the same site only every 16 days. Practical applications require satellite images with both high temporal and high spatial resolution, and spatio-temporal fusion addresses this gap. Spatio-temporal fusion algorithms for remote sensing images are widely applied to farmland monitoring, disaster prediction and similar tasks. Existing algorithms can be divided into five major categories: weighted-function-based, Bayesian-based, unmixing-based, hybrid, and learning-based.
The Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) is the most representative weighted-function-based algorithm, and many subsequent spatio-temporal fusion algorithms build on it. Bayesian-based and unmixing-based algorithms have since diversified as well, and beyond single-category methods there are hybrid approaches such as the Flexible Spatiotemporal Data Fusion (FSDAF) method. In recent years, learning-based algorithms have developed rapidly; they can be further divided into spatio-temporal fusion algorithms based on dictionary-pair learning and those based on machine learning. Convolutional neural networks in particular are widely applied to the spatio-temporal fusion of remote sensing images, and many methods have demonstrated their superiority.
Although existing spatio-temporal fusion methods are diverse, several problems remain: fusion accuracy is low in highly heterogeneous regions; most algorithms use two pairs of images for spatio-temporal fusion, which places heavy demands on the data set; and the fused images produced by convolutional-neural-network-based algorithms are usually too smooth.
Disclosure of Invention
The invention aims to establish a network framework that performs spatio-temporal fusion with only a single pair of remote sensing images while achieving higher fusion accuracy. The framework is based on a convolutional neural network and addresses the over-smoothing of result images produced by convolutional-neural-network-based methods. To this end, a remote sensing image space-time fusion method based on a multi-scale mechanism and an attention mechanism is provided. The technical scheme of the invention is as follows:
a remote sensing image space-time fusion method based on a multi-scale mechanism and an attention mechanism comprises the following steps:
S1, inputting the high-temporal, low-spatial-resolution images at times t0 and t1 and the low-temporal, high-spatial-resolution image at time t0 into first and second convolutional neural networks, respectively;
S2, fusing the feature maps of the two types of images of step S1 at three different scales;
S3, inputting the fusion result into the attention mechanism, comprising the following sub-steps:
S3.1, weighting the input feature map along the channel dimension, giving a different weight to each channel;
S3.2, after the channel-wise weighting, weighting the spatial feature information of the feature map;
S4, inputting the weighted feature map into the fully connected layers and reconstructing the image, finally obtaining the high-spatial-resolution image at time t1.
Further, the step S1 specifically includes the following sub-steps:
S1.1, differencing the high-temporal, low-spatial-resolution images (of size a × a) at times t0 and t1 to obtain their residual image, and inputting the residual image into the first convolutional neural network;
S1.2, inputting the low-temporal, high-spatial-resolution image at time t0 (of size 16a × 16a) into the second convolutional neural network;
S1.3, the first and second convolutional neural networks are parallel structures, and their convolutional layers extract feature maps of the two types of images at three different scales (4a × 4a, 8a × 8a and 16a × 16a).
Further, the residual image M12 of the low-spatial-resolution image M1 at time t0 and the low-spatial-resolution image M2 at time t1 in step S1 is computed as:
M12 = M2 - M1
Further, the first convolutional neural network of step S1 consists of convolutional layers and deconvolution layers and up-samples the input image, while the second convolutional neural network consists of convolutional layers and pooling layers and down-samples the input image. Because the two types of input images differ in scale, this structure projects the two types of remote sensing images to the same scales, which makes it easier to extract richer feature information from the images.
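To make the two-branch structure concrete, here is a minimal PyTorch sketch. It is an illustration, not the patent's implementation: the class names (CoarseBranch, FineBranch), the band count, channel width, layer counts and kernel sizes are all our assumptions; only the up-sampling-by-deconvolution and down-sampling-by-pooling arrangement and the three output scales (4a, 8a, 16a) come from the text.

```python
import torch
import torch.nn as nn

class CoarseBranch(nn.Module):
    """First network: up-samples the a x a residual image M12 with
    stride-2 transposed convolutions and keeps the 4a/8a/16a maps."""
    def __init__(self, in_ch=6, width=32):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU())
        # each transposed convolution doubles the size: a -> 2a -> 4a -> 8a -> 16a
        self.up = nn.ModuleList(
            nn.Sequential(nn.ConvTranspose2d(width, width, 4, stride=2, padding=1), nn.ReLU())
            for _ in range(4)
        )

    def forward(self, m12):
        x, feats = self.head(m12), []
        for i, layer in enumerate(self.up):
            x = layer(x)
            if i >= 1:                 # discard the 2a map; keep 4a, 8a, 16a
                feats.append(x)
        return feats                   # [4a, 8a, 16a]

class FineBranch(nn.Module):
    """Second network: down-samples the 16a x 16a image L1 with
    convolutions and 2x2 max pooling to the same three scales."""
    def __init__(self, in_ch=6, width=32):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.convs = nn.ModuleList(
            nn.Sequential(nn.Conv2d(width, width, 3, padding=1), nn.ReLU())
            for _ in range(2)
        )

    def forward(self, l1):
        x = self.head(l1)              # 16a scale
        feats = [x]
        for conv in self.convs:        # 16a -> 8a -> 4a
            x = conv(self.pool(x))
            feats.append(x)
        return feats[::-1]             # reorder to [4a, 8a, 16a]
```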
Further, the step S2 specifically includes the following steps:
S2.1, selecting three output feature maps from each of the two parallel networks, each containing the image's feature information at a different scale;
S2.2, fusing the two types of feature maps at each common scale to obtain three fused feature maps;
S2.3, resampling the three fused feature maps to a uniform scale and fusing them to obtain the final fusion result.
Further, the three feature maps of the low-spatial-resolution image in step S2 are denoted fi(M12), i = 1, 2, 3, and those of the high-spatial-resolution image L1 are denoted fi(L1). For equal values of i, the feature maps of the two image types share the same scale; the same-scale feature maps are fused to obtain Fi, the Fi are then resampled to a common scale, and the Fi are fused to obtain the final fusion result F. The fusion process is:
Fi = fi(M12) + fi(L1)
F = F1 + F2 + F3
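The fusion step can be sketched directly from these formulas. In the sketch below, fuse_multiscale is a hypothetical helper; the choice of bilinear interpolation to the 16a scale is our assumption, since the patent only states that the fused maps are resampled to a uniform scale.

```python
import torch.nn.functional as TF

def fuse_multiscale(coarse_feats, fine_feats):
    """coarse_feats, fine_feats: lists of tensors at the 4a, 8a, 16a scales."""
    target = fine_feats[-1].shape[-2:]      # the 16a x 16a scale
    fused = 0
    for fc, ff in zip(coarse_feats, fine_feats):
        fi = fc + ff                        # Fi = fi(M12) + fi(L1)
        fi = TF.interpolate(fi, size=target, mode="bilinear", align_corners=False)
        fused = fused + fi                  # F = F1 + F2 + F3
    return fused
```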
Further, step S3 inputs the fusion result into the attention mechanism and comprises the following sub-steps:
S3.1, weighting the input feature map along the channel dimension with a channel attention mechanism: the channels of the feature map are rescaled through a convolutional layer, and a Sigmoid function then assigns a different weight to each channel;
S3.2, after the channel-wise weighting, weighting the feature information on the feature map, such as houses, rivers and farmland, with a spatial attention mechanism; this step also uses a Sigmoid function to assign the weights, and the weight values are adjusted continuously during network iteration until the network converges.
Further, the attention mechanism of step S3 consists of a channel attention mechanism and a spatial attention mechanism and is intended to focus learning on key areas such as rivers, houses and farmland. The channel attention mechanism rescales the channels of the feature map and uses a Sigmoid function to assign a different weight to each channel, strengthening attention to the key channels; the spatial attention mechanism weights the feature information on the feature map and focuses on its important regions. The weights on the feature map are updated step by step during network iteration, and once the network converges, the weights assigned by the Sigmoid function are finally determined.
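One possible reading of this channel-then-spatial attention is sketched below: a squeeze-and-excitation style channel branch followed by a single-channel sigmoid mask. The reduction ratio, the 7 x 7 kernel and the class name ChannelSpatialAttention are assumptions; the patent only fixes the channel-then-spatial order and the use of Sigmoid weighting.

```python
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, ch, reduction=8):
        super().__init__()
        # channel branch: pool to 1x1, rescale channels with 1x1 convolutions,
        # then produce one Sigmoid weight per channel (step S3.1)
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )
        # spatial branch: one Sigmoid weight per pixel (step S3.2)
        self.spatial = nn.Sequential(nn.Conv2d(ch, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)    # weight each channel
        x = x * self.spatial(x)    # weight each spatial location
        return x
```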
Further, in step S4 the image is reconstructed through two fully connected layers that control the number of channels of the output feature map; the final result image has the same number of channels as the input image and is the predicted high-spatial-resolution image at time t1.
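The two fully connected reconstruction layers can be sketched as 1 x 1 convolutions, a common per-pixel equivalent of dense layers in image networks; reading them this way, and the band count, are our assumptions.

```python
import torch.nn as nn

class Reconstruct(nn.Module):
    """Two per-pixel dense layers, written as 1x1 convolutions."""
    def __init__(self, in_ch=32, out_ch=6):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 1), nn.ReLU(),
            nn.Conv2d(in_ch, out_ch, 1),   # out_ch matches the input band count
        )

    def forward(self, x):
        return self.fc(x)
```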
The invention has the following advantages and beneficial effects:
the method is based on a convolutional neural network, and firstly, a double-branch network is used for extracting the characteristic information of two types of remote sensing images. Because the ground feature information in the remote sensing image usually has multiplicative size, and the feature information may be different on different scales, a multi-scale mechanism is added in the dual-branch network, the dual-branch network respectively extracts feature maps of the image on three different scales, and then respectively fuses the feature maps, and the obtained fused image has richer space detail information and time change information. Besides enhancing the extraction of the features of the network on different scales, the network structure of the invention adds an attention mechanism behind a multi-scale mechanism; because the remote sensing image contains a great deal of feature information, but not every feature plays the same important role in scientific research, an attention mechanism is added to focus important features on a feature map, and the fusion precision and the visual effect are further improved.
These two network structures greatly alleviate the over-smoothing of fusion results, and the resulting fused images have better visual quality and higher accuracy. Meanwhile, having the network learn the residual image directly reduces its computational load and helps it learn the temporal change information of the remote sensing images, yielding more accurate results. In addition, the invention uses only one pair of remote sensing images for spatio-temporal fusion, greatly reducing the algorithm's demands on the data set.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of the present invention;
FIG. 2 is a comparison of fusion results with other mainstream algorithms.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
the method comprises the following specific steps:
Step S1: difference the high-temporal, low-spatial-resolution images at times t0 and t1 to obtain the residual image M12 and input it into the first convolutional neural network, whose deconvolution layers up-sample M12 to obtain its feature maps at three scales. Input the low-temporal, high-spatial-resolution image L1 at time t0 into the other convolutional neural network, whose pooling layers down-sample L1 to obtain its feature maps at three scales;
Step S2: fuse the feature maps of M12 and L1 at each of the three scales to obtain three fused feature maps, project them to the same scale, and fuse the three processed feature maps into one fused feature map containing the feature information of the two types of images at the three scales;
Step S3: input the fused feature map into the attention mechanism; the channel attention mechanism assigns a weight to each channel of the feature map by rescaling its channels, and the spatial attention mechanism then assigns different weights to the feature information on the feature map, yielding a feature map weighted in both the channel and spatial dimensions;
Step S4: input the weighted feature map into the fully connected layers and reconstruct the image; the two fully connected layers control the channels of the feature map, finally producing the fusion result.
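Wiring the earlier sketches together gives a hypothetical end-to-end forward pass for steps S1 to S4; the FusionNet class reuses the CoarseBranch, FineBranch, ChannelSpatialAttention and Reconstruct sketches and the fuse_multiscale helper from above, and the band count and width remain assumptions.

```python
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, bands=6, width=32):
        super().__init__()
        self.coarse = CoarseBranch(bands, width)
        self.fine = FineBranch(bands, width)
        self.attn = ChannelSpatialAttention(width)
        self.recon = Reconstruct(width, bands)

    def forward(self, m1, m2, l1):
        m12 = m2 - m1                                              # step S1: residual image
        fused = fuse_multiscale(self.coarse(m12), self.fine(l1))  # step S2: multi-scale fusion
        return self.recon(self.attn(fused))                       # steps S3 and S4

# usage sketch: m1, m2 are a x a coarse images, l1 is the 16a x 16a fine image
# net = FusionNet(); pred_l2 = net(m1, m2, l1)   # predicted L2 at time t1
```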
To evaluate the performance of the invention, a classical data set was selected for experiments, and the results were compared with three classical spatio-temporal fusion algorithms: STARFM, a weighted-function-based algorithm; FSDAF, a hybrid algorithm; and the Deep Convolutional Spatio-Temporal Fusion Network (DCSTFN), which, like the present invention, is a convolutional-neural-network-based algorithm. Both DCSTFN and the present invention use one pair of images for spatio-temporal fusion.
FIG. 2 shows the experimental results of each method. Compared with DCSTFN, the result image of the present invention clearly alleviates over-smoothing; the FSDAF result shows severe loss of detail, the STARFM result shows spectral distortion, and the fusion result of the present algorithm is closer to the reference image.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (9)

1. A remote sensing image space-time fusion method based on a multi-scale mechanism and an attention mechanism is characterized by comprising the following steps:
S1, inputting the high-temporal, low-spatial-resolution images at times t0 and t1 and the low-temporal, high-spatial-resolution image at time t0 into first and second convolutional neural networks, respectively;
S2, fusing the feature maps of the two types of images of step S1 at three different scales;
S3, inputting the fusion result into the attention mechanism, comprising the following sub-steps:
S3.1, weighting the input feature map along the channel dimension, giving a different weight to each channel;
S3.2, after the channel-wise weighting, weighting the spatial feature information of the feature map;
S4, inputting the weighted feature map into the fully connected layers and reconstructing the image, finally obtaining the high-spatial-resolution image at time t1.
2. The method for space-time fusion of remote sensing images based on the multi-scale mechanism and the attention mechanism as claimed in claim 1, wherein the step S1 specifically comprises the following sub-steps:
S1.1, differencing the high-temporal, low-spatial-resolution images (of size a × a) at times t0 and t1 to obtain their residual image, and inputting the residual image into the first convolutional neural network;
S1.2, inputting the low-temporal, high-spatial-resolution image at time t0 (of size 16a × 16a) into the second convolutional neural network;
S1.3, the first and second convolutional neural networks being parallel structures whose convolutional layers extract feature maps of the two types of images at three different scales (4a × 4a, 8a × 8a and 16a × 16a).
3. The method for space-time fusion of remote sensing images based on the multi-scale mechanism and the attention mechanism as claimed in claim 2, wherein the residual image M12 of the low-spatial-resolution image M1 at time t0 and the low-spatial-resolution image M2 at time t1 in step S1 is computed as:
M12 = M2 - M1
4. The method for space-time fusion of remote sensing images based on the multi-scale mechanism and the attention mechanism as claimed in claim 2 or 3, wherein the first convolutional neural network of step S1 consists of convolutional layers and deconvolution layers and up-samples the input image, while the second convolutional neural network consists of convolutional layers and pooling layers and down-samples the input image; because the two types of input images differ in scale, this structure projects the two types of remote sensing images to the same scales, facilitating the extraction of richer feature information from the images.
5. The method for space-time fusion of remote sensing images based on the multi-scale mechanism and the attention mechanism as claimed in claim 4, wherein the step S2 specifically comprises the following steps:
S2.1, selecting three output feature maps from each of the two parallel networks, each containing the image's feature information at a different scale;
S2.2, fusing the two types of feature maps at each common scale to obtain three fused feature maps;
S2.3, resampling the three fused feature maps to a uniform scale and fusing them to obtain the final fusion result.
6. The method for space-time fusion of remote sensing images based on the multi-scale mechanism and the attention mechanism as claimed in claim 5, wherein the three feature maps of the low-spatial-resolution image in step S2 are denoted fi(M12), i = 1, 2, 3, and those of the high-spatial-resolution image L1 are denoted fi(L1); for equal values of i, the feature maps of the two image types share the same scale; the same-scale feature maps are fused to obtain Fi, the Fi are then resampled to a common scale, and the Fi are fused to obtain the final fusion result F, the fusion process being:
Fi = fi(M12) + fi(L1)
F = F1 + F2 + F3
7. The method for space-time fusion of remote sensing images based on the multi-scale mechanism and the attention mechanism, wherein step S3 inputs the fusion result into the attention mechanism and comprises the following sub-steps:
S3.1, weighting the input feature map along the channel dimension with a channel attention mechanism: the channels of the feature map are rescaled through a convolutional layer, and a Sigmoid function then assigns a different weight to each channel;
S3.2, after the channel-wise weighting, weighting the feature information on the feature map, such as houses, rivers and farmland, with a spatial attention mechanism; this step also uses a Sigmoid function to assign the weights, and the weight values are adjusted continuously during network iteration until the network converges.
8. The method for space-time fusion of remote sensing images based on the multi-scale mechanism and the attention mechanism as claimed in claim 7, wherein the attention mechanism of step S3 consists of a channel attention mechanism and a spatial attention mechanism, so that key areas including rivers, houses and farmland are learned with emphasis; the channel attention mechanism rescales the channels of the feature map and uses a Sigmoid function to assign a different weight to each channel, strengthening attention to the key channels, while the spatial attention mechanism weights the feature information on the feature map and focuses on its important regions; the weights on the feature map are updated step by step during network iteration, and once the network converges, the weights assigned by the Sigmoid function are finally determined.
9. The method for space-time fusion of remote sensing images based on the multi-scale mechanism and the attention mechanism as claimed in claim 8, wherein in step S4 the image is reconstructed through two fully connected layers that control the number of channels of the output feature map, and the final result image has the same number of channels as the input image, i.e. the predicted high-spatial-resolution image at time t1.
CN202010560118.5A 2020-06-18 2020-06-18 Remote sensing image space-time fusion method based on multi-scale mechanism and attention mechanism Active CN111754404B (en)

Priority Applications (1)

Application Number: CN202010560118.5A (granted as CN111754404B); Priority Date: 2020-06-18; Filing Date: 2020-06-18; Title: Remote sensing image space-time fusion method based on multi-scale mechanism and attention mechanism

Applications Claiming Priority (1)

Application Number: CN202010560118.5A (granted as CN111754404B); Priority Date: 2020-06-18; Filing Date: 2020-06-18; Title: Remote sensing image space-time fusion method based on multi-scale mechanism and attention mechanism

Publications (2)

Publication Number  Publication Date
CN111754404A (en)  2020-10-09
CN111754404B (en)  2022-07-01

Family

ID=72675491

Family Applications (1)

Application Number: CN202010560118.5A (Active; granted as CN111754404B); Priority Date: 2020-06-18; Filing Date: 2020-06-18; Title: Remote sensing image space-time fusion method based on multi-scale mechanism and attention mechanism

Country Status (1)

Country Link
CN (1) CN111754404B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019136110A1 (en) * 2018-01-05 2019-07-11 Careband Incorporated Wearable electronic device and system for tracking location and identifying changes in salient indicators of patient health
CN108983599A (en) * 2018-08-07 2018-12-11 重庆邮电大学 A kind of adaptive process monitoring method of multi-parameter fusion under car networking
CN109584161A (en) * 2018-11-29 2019-04-05 四川大学 The Remote sensed image super-resolution reconstruction method of convolutional neural networks based on channel attention
CN110728224A (en) * 2019-10-08 2020-01-24 西安电子科技大学 Remote sensing image classification method based on attention mechanism depth Contourlet network
CN111192200A (en) * 2020-01-02 2020-05-22 南京邮电大学 Image super-resolution reconstruction method based on fusion attention mechanism residual error network
AU2020100200A4 (en) * 2020-02-08 2020-06-11 Huang, Shuying DR Content-guide Residual Network for Image Super-Resolution

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WEISHENG LI et al.: "DMNet: A Network Architecture Using Dilated Convolution and Multiscale Mechanisms for Spatiotemporal Fusion of Remote Sensing Images", IEEE Sensors Journal *
HE Bingqian et al.: "Human action recognition model based on an improved deep neural network", Application Research of Computers *
ZHAN Ziwei: "Research on target tracking methods based on convolutional neural networks", China Excellent Doctoral and Master's Theses Full-text Database (Master's) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112818843A (en) * 2021-01-29 2021-05-18 山东大学 Video behavior identification method and system based on channel attention guide time modeling
CN113012044A (en) * 2021-02-19 2021-06-22 北京师范大学 Remote sensing image space-time fusion method and system based on deep learning
CN112862688A (en) * 2021-03-08 2021-05-28 西华大学 Cross-scale attention network-based image super-resolution reconstruction model and method
CN112862688B (en) * 2021-03-08 2021-11-23 西华大学 Image super-resolution reconstruction system and method based on cross-scale attention network
CN113128586A (en) * 2021-04-16 2021-07-16 重庆邮电大学 Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image
CN113128586B (en) * 2021-04-16 2022-08-23 重庆邮电大学 Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image
CN114708511A (en) * 2022-06-01 2022-07-05 成都信息工程大学 Remote sensing image target detection method based on multi-scale feature fusion and feature enhancement
CN114708511B (en) * 2022-06-01 2022-08-16 成都信息工程大学 Remote sensing image target detection method based on multi-scale feature fusion and feature enhancement

Also Published As

Publication number Publication date
CN111754404B (en) 2022-07-01

Similar Documents

Publication Publication Date Title
CN111754404B (en) Remote sensing image space-time fusion method based on multi-scale mechanism and attention mechanism
CN112507898B (en) Multi-modal dynamic gesture recognition method based on lightweight 3D residual error network and TCN
CN114529825B (en) Target detection model, method and application for fire fighting access occupied target detection
CN111369440B (en) Model training and image super-resolution processing method, device, terminal and storage medium
CN110782395B (en) Image processing method and device, electronic equipment and computer readable storage medium
Wang et al. TRC‐YOLO: A real‐time detection method for lightweight targets based on mobile devices
CN114549913B (en) Semantic segmentation method and device, computer equipment and storage medium
CN115409855B (en) Image processing method, device, electronic equipment and storage medium
Chen et al. U-net like deep autoencoders for deblurring atmospheric turbulence
CN114519667A (en) Image super-resolution reconstruction method and system
CN116071279A (en) Image processing method, device, computer equipment and storage medium
CN116645598A (en) Remote sensing image semantic segmentation method based on channel attention feature fusion
CN113393435B (en) Video saliency detection method based on dynamic context sensing filter network
CN112116700B (en) Monocular view-based three-dimensional reconstruction method and device
CN113962861A (en) Image reconstruction method and device, electronic equipment and computer readable medium
CN113628115A (en) Image reconstruction processing method and device, electronic equipment and storage medium
CN111223046B (en) Image super-resolution reconstruction method and device
US20240029420A1 (en) System, devices and/or processes for application of kernel coefficients
CN111505738A (en) Method and equipment for predicting meteorological factors in numerical weather forecast
CN116630152A (en) Image resolution reconstruction method and device, storage medium and electronic equipment
Pan et al. New image super-resolution scheme based on residual error restoration by neural networks
CN113361445B (en) Attention mechanism-based document binarization processing method and system
CN115272906A (en) Video background portrait segmentation model and algorithm based on point rendering
CN112613544A (en) Target detection method, device, electronic equipment and computer readable medium
Zhang et al. Single infrared remote sensing image super-resolution via supervised deep learning

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
TR01: Transfer of patent right (effective date of registration: 2024-06-26)
    Patentee before: Chongqing University of Posts and Telecommunications, 400065 Chongwen Road, Nanshan Street, Nan'an District, Chongqing, China
    Patentee after: Hefei Minglong Electronic Technology Co., Ltd., 230000 B-1015, Wo Yuan Garden, 81 Ganquan Road, Shushan District, Hefei, Anhui, China