CN113128586A - Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image - Google Patents

Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image Download PDF

Info

Publication number
CN113128586A
CN113128586A CN202110412317.6A CN202110412317A CN113128586A CN 113128586 A CN113128586 A CN 113128586A CN 202110412317 A CN202110412317 A CN 202110412317A CN 113128586 A CN113128586 A CN 113128586A
Authority
CN
China
Prior art keywords
resolution
images
image
convolution
spatial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110412317.6A
Other languages
Chinese (zh)
Other versions
CN113128586B (en
Inventor
李伟生
杨超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202110412317.6A priority Critical patent/CN113128586B/en
Publication of CN113128586A publication Critical patent/CN113128586A/en
Application granted granted Critical
Publication of CN113128586B publication Critical patent/CN113128586B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction

Abstract

The invention discloses a space-time fusion method based on a multi-scale mechanism and a series expansion convolution remote sensing image, which comprises the following steps: s1, inputting the three images with high time resolution and low spatial resolution into a mapping network, and extracting features through multi-scale perception and series expansion convolution in the mapping convolution network to obtain three transition images with similar resolution to the images with high time resolution and low time resolution; s2, inputting the transition image and the image with high space and low time resolution into the reconstructed difference image, and obtaining two difference images with high space and low time resolution through the collaborative training of multiple networks; s3, the two differential images and the two images with high space and low time resolution are weighted and fused, and a high space and high time resolution image is obtained through reconstruction. The method improves the accuracy of the remote sensing image space-time algorithm, and solves the problem that the fusion of the reconstruction fusion result high-frequency space detail and the spectrum information is inaccurate in the traditional remote sensing space-time algorithm.

Description

Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a multi-scale mechanism and series expansion convolution remote sensing image space-time fusion method based on multi-network collaborative training.
Background
The space-time fusion algorithm of the remote sensing image belongs to the field of remote sensing image fusion, and is widely applied to the fields of farmland monitoring, disaster prediction and the like. The time-space fusion of the remote sensing images aims to solve the contradiction of the remote sensing images on time and space resolution, and the remote sensing images with high time and high space resolution can be obtained through the time-space fusion. Existing remote sensing image space-time fusion algorithms can be divided into five major categories, namely, weighted function-based algorithms (Weight function-based), Bayesian-based algorithms (Bayesian-based), Unmixing-based algorithms (Unmixing-based), Hybrid algorithms (Hybrid) and Learning-based algorithms (Learning-based).
A space-time Adaptive reflection Fusion model (STARFM) is the most representative algorithm based on a weighting function, and many of the following space-time Fusion algorithms are proposed based on STARFM. Algorithms based on bayesian and unmixing are also gradually diversified later, and besides using a single kind of algorithm, there are some methods using a mixed algorithm, such as Flexible spatial temporal Data Fusion (FSDAF). In recent years, learning-based algorithms have been developed vigorously, and they can be further classified into dictionary pair learning-based spatio-temporal fusion algorithms and machine learning-based spatio-temporal fusion algorithms. The Sparse-representation-based spatio-temporal Fusion Model (SPSTFM) opens the way of the spatio-temporal Fusion algorithm based on dictionary pair learning, and has better processing capability on regions with higher heterogeneity. The space-time Fusion algorithm (STFDCNN) based on the Convolutional Neural network further improves the Fusion precision, which proves the applicability of the Convolutional Neural network in the space-time Fusion field, and then the space-time Fusion method based on the Convolutional Neural network is endless.
Although existing spatiotemporal fusion methods are diverse, many problems still exist, such as: in the regions with higher heterogeneity, the fusion precision of the algorithm is not high; fused images obtained by an algorithm based on a convolutional neural network are usually too smooth; the retention effect of the spectral information is not good. The multi-scale mechanism and the tandem expansion convolution mechanism are commonly used in the field of video frame super-resolution, and few spatio-temporal fusion methods are cited. The multi-scale mechanism can fully extract the characteristic information from a plurality of scale perception characteristic graphs, and is beneficial to solving the problem of poor retention effect of the spectral information of the common space-time fusion method; the series expansion convolution mechanism can extract the edge information of the characteristic diagram, and is beneficial to solving the problems that the fused image is too smooth and the spatial detail is seriously lost in the common space-time fusion method. The existing space-time fusion algorithm does not use a mechanism of excessive network collaborative training, in a multi-network model, the network training effect is influenced by a plurality of network output results, so that the network convergence effect is possibly poor.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. A space-time fusion method based on a multi-scale mechanism and a series expansion convolution remote sensing image is provided. The technical scheme of the invention is as follows:
a space-time fusion method based on a multi-scale mechanism and a series expansion convolution remote sensing image comprises the following steps:
s1, combining three images C with high time and low space resolutioniInputting the image into a mapping convolution network, wherein i is 1, 2 and 3, and extracting features through multi-scale perception and series expansion convolution in the mapping convolution network to obtain three transition images with similar resolution to the images with high spatial resolution and low temporal resolution;
s2, inputting the transition image and the images with high spatial resolution and low temporal resolution into the reconstructed difference image, and obtaining two difference images with high spatial resolution and low temporal resolution through multi-network collaborative training;
s3, performing weighted fusion on the two high-spatial resolution difference images and the two high-spatial and low-temporal resolution images, and reconstructing to obtain a high-spatial and high-temporal resolution image F1
Further, the mapping convolution network of step S1 is composed of a convolution layer, a multi-scale sensing module, and a series expansion convolution module, where the multi-scale sensing module is used to respectively sense multiple scales of the input feature map, and then superimposes the feature map into a new multi-dimensional feature map; the series expansion convolution module can extract richer characteristic information of the image by expanding the receptive field of the convolution layer, and the process of obtaining the image with the transition resolution ratio is as follows:
Figure BDA0003024358400000021
wherein T isiImage representing a transitional resolution, M0Mapping function, phi, representing a mapped convolutional network0A training weight parameter representing the mapping function.
Further, the step S1 specifically includes the following sub-steps,
s1.1, putting the input high-time and low-spatial-resolution images at three moments into a multi-scale sensing module to obtain a characteristic diagram under multi-scale sensing of the images;
s1.2, inputting the multi-scale perceived feature map into a series expansion convolution to obtain a dimension reduction feature map;
s1.3, converting the dimension reduction characteristic graphs obtained from the three images with low spatial resolution into three images T with transitional resolution through convolution operationi(i=1,2,3)。
Further, the reconstructed differential convolutional network and the co-training convolutional network of step S2 are respectively composed of eight layers of basic convolutional layers and six layers of basic convolutional layers, where the task of reconstructing the differential convolutional network is more complex, and therefore the network setup also has two more layers of basic convolutional layers than the co-training convolutional network. Assisting in reconstructing the differential convolution network to complete training according to time correlation through the output of the cooperative training convolution network, and outputting two high-spatial-resolution differential images FT01And FT12The process is as follows:
Tij=Ti-Tj
Figure BDA0003024358400000031
Figure BDA0003024358400000032
wherein T isijRepresenting the transition resolution image T at the i-th momentiTransition resolution image T with j timejDifference image therebetween, F0And F2High spatial, low temporal resolution images, M, representing time 0 and time 2, respectively1Mapping function, phi, representing the reconstructed sealed convolutional network1Represents the mapping function M1Training weight parameters of (1).
Further, the step S2 specifically includes the following sub-steps,
s2.1, inputting the three images with the transition resolution and the two images with high space and low time resolution into a reconstruction difference convolution network, and obtaining two differential images F with high space resolution according to the structural correlation of a time sequenceT01And FT12
S2.2, performing collaborative training by adopting known information according to the time correlation of the time sequence, and enabling two known high-space low-time resolution images F0And F2Outputting a high-resolution differential image F in an input cooperative training convolutional networkT02Using high resolution difference images FT02Helping to reconstruct a differential convolution network to complete training;
s2.3, obtaining two high-spatial-resolution difference images F through the trained reconstructed difference imagesT01And FT12
Further, the step S3 obtains an image F with high spatial and temporal resolution through weighted fusion reconstruction1The fusion process is as follows:
Figure BDA0003024358400000041
wherein ω is0And ω2Respectively as F0And F2Combining the result pair obtained after the high spatial resolution difference image to finally fuse and reconstruct the knotFruit F1The contribution weight of (1).
Two weight parameters are calculated:
Cij=Ci-Cj
Figure BDA0003024358400000042
Figure BDA0003024358400000043
wherein C isijRepresenting a high temporal, low spatial resolution image C at time iiAnd j time high time, low spatial resolution image CjDifference image between vC01And vC12Respectively represent C01And C12K is a set constant, so that the condition that the denominator is 0 is avoided.
The invention has the following advantages and beneficial effects:
the invention is based on a convolutional neural network, and uses a cooperative working mechanism of a plurality of networks, a multi-scale mechanism and a series expansion convolution mechanism. The multi-scale mechanism and the tandem expansion convolution mechanism are commonly used in the field of video frame super-resolution, and few spatio-temporal fusion methods are cited. The multi-scale mechanism can fully extract the characteristic information from a plurality of scale perception characteristic graphs, and is beneficial to solving the problem of poor retention effect of the spectral information of the common space-time fusion method; the series expansion convolution mechanism can extract the edge information of the characteristic diagram, and is beneficial to solving the problems that the fused image is too smooth and the spatial detail is seriously lost in the common space-time fusion method. The existing space-time fusion algorithm does not use a mechanism of excessive network collaborative training, in a multi-network model, the network training effect is influenced by a plurality of network output results, so that the network convergence effect is possibly poor. Through the special network, a fusion result with higher accuracy can be obtained, and in the invention, space-time fusion is carried out by using two pairs of images, so that more known information can be fully utilized, and a better reconstruction fusion effect can be obtained.
Drawings
FIG. 1 is a flow chart of a spatial-temporal fusion method based on a multi-scale mechanism and a series expansion convolution remote sensing image in a preferred embodiment;
figure 2 is a graph comparing results with other mainstream algorithms. (a) A reference image; (b) STARFM; (c) ESTARFM; (d) FSDAF; (e) StfNet; (f) DCSTFN; (g) EDCSTFN; (h) the invention relates to a method for preparing a high-temperature-resistant ceramic material.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
FIG. 1 is a flow chart of a spatiotemporal fusion method based on a multi-scale mechanism and a series expansion convolution remote sensing image according to a preferred embodiment of the invention;
the method comprises the following specific steps:
s1, inputting the three images with high time resolution and low spatial resolution into a mapping network, extracting features through multi-scale perception and series expansion convolution, and obtaining three transition images with similar resolution to the images with high time resolution and low spatial resolution;
s2, inputting the transition image and the image with high space and low time resolution into the reconstructed difference image, and obtaining two difference images with high space and low time resolution through the collaborative training of multiple networks;
s3, the two differential images and the two images with high space and low time resolution are weighted and fused, and a high space and high time resolution image is obtained through reconstruction.
In order to evaluate the performance of the invention, a classical data set is selected for experiment, and the experimental result is compared with other seven classical space-time fusion algorithms. Where STARFM and ESATRFM are weighted function based algorithms, FSDAF is a hybrid algorithm, StfNet, DCSTFN, edctfn and the present invention convolutional neural network based algorithms.
Fig. 2 shows the experimental results of each method, and it can be clearly seen that the result image of the invention greatly alleviates the problem of the image being too smooth compared with other algorithms. And the STARFM results show severe spectral distortion while the FSDAF results show loss of detail, compared to the fusion results of the present algorithm which are closer to the reference image.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (6)

1. A space-time fusion method based on a multi-scale mechanism and a series expansion convolution remote sensing image is characterized by comprising the following steps:
s1, combining three images C with high time and low space resolutioniInputting the image into a mapping convolution network, wherein i is 1, 2 and 3, and extracting features through multi-scale perception and series expansion convolution in the mapping convolution network to obtain three transition images with similar resolution to the images with high spatial resolution and low temporal resolution;
s2, inputting the transition image and the images with high spatial resolution and low temporal resolution into the reconstructed difference image, and obtaining two difference images with high spatial resolution and low temporal resolution through multi-network collaborative training;
s3, performing weighted fusion on the two high-spatial resolution difference images and the two high-spatial and low-temporal resolution images, and reconstructing to obtain a high-spatial and high-temporal resolution image F1
2. The spatio-temporal fusion method based on the multi-scale mechanism and the series expansion convolution remote sensing image as claimed in claim 1, wherein the mapping convolution network of the step S1 is composed of convolution layers, a multi-scale perception module and series expansion convolution modules, wherein the multi-scale perception module is used for respectively perceiving a plurality of scales of the input feature map and then superposing the feature map into a new multi-dimensional feature map; the series expansion convolution module can extract richer characteristic information of the image by expanding the receptive field of the convolution layer, and the process of obtaining the image with the transition resolution ratio is as follows:
Figure FDA0003024358390000011
wherein T isiImage representing a transitional resolution, M0Mapping function, phi, representing a mapped convolutional network0A training weight parameter representing the mapping function.
3. The spatio-temporal fusion method based on the multi-scale mechanism and the series expansion convolution remote sensing image as claimed in claim 2, characterized in that the step S1 specifically comprises the following sub-steps,
s1.1, putting the input high-time and low-spatial-resolution images at three moments into a multi-scale sensing module to obtain a characteristic diagram under multi-scale sensing of the images;
s1.2, inputting the multi-scale perceived feature map into a series expansion convolution to obtain a dimension reduction feature map;
s1.3, converting the dimension reduction characteristic graphs obtained from the three images with low spatial resolution into three images T with transitional resolution through convolution operationi(i=1,2,3)。
4. The method for spatial-temporal fusion of remote sensing images based on multi-scale mechanism and series-connection expansion convolution according to claim 2 or 3, characterized in that the reconstructed differential convolution network and the collaborative training convolution network of step S2 are respectively composed of eight layers of basic convolution layers and six layers of basic convolution layers, wherein the task of reconstructing the differential convolution network is more complex, so the network setup is also more than two layers of basic convolution layers than the collaborative training convolution network. Assisting in reconstructing the differential convolution network to complete training according to time correlation through the output of the cooperative training convolution network, and outputting two high-spatial-resolution differential images FT01And FT12The process is as follows:
Tij=Ti-Tj
Figure FDA0003024358390000021
Figure FDA0003024358390000022
wherein T isijRepresenting the transition resolution image T at the i-th momentiTransition resolution image T with j timejDifference image therebetween, F0And F2High spatial, low temporal resolution images, M, representing time 0 and time 2, respectively1Mapping function, phi, representing the reconstructed sealed convolutional network1Represents the mapping function M1Training weight parameters of (1).
5. The spatio-temporal fusion method based on the multi-scale mechanism and the series expansion convolution remote sensing image as claimed in claim 4, characterized in that the step S2 specifically comprises the following sub-steps,
s2.1, inputting the three images with the transition resolution and the two images with high space and low time resolution into a reconstruction difference convolution network, and obtaining two differential images F with high space resolution according to the structural correlation of a time sequenceT01And FT12
S2.2, performing collaborative training by adopting known information according to the time correlation of the time sequence, and enabling two known high-space low-time resolution images F0And F2Outputting a high-resolution differential image F in an input cooperative training convolutional networkT02Using high resolution difference images FT02Helping to reconstruct a differential convolution network to complete training;
s2.3, obtaining two high-spatial-resolution difference images F through the trained reconstructed difference imagesT01And FT12
6. The method for spatial-temporal fusion of remote sensing images based on multi-scale mechanism and series expansion convolution as claimed in claim 5, wherein said step S3 is implemented by weighted fusion reconstruction to obtain a high spatial and high temporal resolution image F1The fusion process is as follows:
Figure FDA0003024358390000031
wherein ω is0And ω2Respectively as F0And F2Combining the result pair obtained after the high spatial resolution difference image to finally fuse the reconstruction result F1The contribution weight of (1).
Two weight parameters are calculated:
Cij=Ci-Cj
Figure FDA0003024358390000032
Figure FDA0003024358390000033
wherein C isijRepresenting a high temporal, low spatial resolution image C at time iiHigh temporal, low spatial resolution with time jImage CjDifference image between vC01And vc12Respectively represent C01And C12K is a set constant, so that the condition that the denominator is 0 is avoided.
CN202110412317.6A 2021-04-16 2021-04-16 Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image Active CN113128586B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110412317.6A CN113128586B (en) 2021-04-16 2021-04-16 Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110412317.6A CN113128586B (en) 2021-04-16 2021-04-16 Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image

Publications (2)

Publication Number Publication Date
CN113128586A true CN113128586A (en) 2021-07-16
CN113128586B CN113128586B (en) 2022-08-23

Family

ID=76777414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110412317.6A Active CN113128586B (en) 2021-04-16 2021-04-16 Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image

Country Status (1)

Country Link
CN (1) CN113128586B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310883A (en) * 2023-05-17 2023-06-23 山东建筑大学 Agricultural disaster prediction method based on remote sensing image space-time fusion and related equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140064636A1 (en) * 2007-11-29 2014-03-06 Sri International Multi-scale adaptive fusion with contrast normalization
CN110263732A (en) * 2019-06-24 2019-09-20 京东方科技集团股份有限公司 Multiscale target detection method and device
CN110705457A (en) * 2019-09-29 2020-01-17 核工业北京地质研究院 Remote sensing image building change detection method
CN111080567A (en) * 2019-12-12 2020-04-28 长沙理工大学 Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network
CN111754404A (en) * 2020-06-18 2020-10-09 重庆邮电大学 Remote sensing image space-time fusion method based on multi-scale mechanism and attention mechanism
CN112184554A (en) * 2020-10-13 2021-01-05 重庆邮电大学 Remote sensing image fusion method based on residual mixed expansion convolution
CN112329685A (en) * 2020-11-16 2021-02-05 常州大学 Method for detecting crowd abnormal behaviors through fusion type convolutional neural network
CN112529828A (en) * 2020-12-25 2021-03-19 西北大学 Reference data non-sensitive remote sensing image space-time fusion model construction method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140064636A1 (en) * 2007-11-29 2014-03-06 Sri International Multi-scale adaptive fusion with contrast normalization
CN110263732A (en) * 2019-06-24 2019-09-20 京东方科技集团股份有限公司 Multiscale target detection method and device
CN110705457A (en) * 2019-09-29 2020-01-17 核工业北京地质研究院 Remote sensing image building change detection method
CN111080567A (en) * 2019-12-12 2020-04-28 长沙理工大学 Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network
CN111754404A (en) * 2020-06-18 2020-10-09 重庆邮电大学 Remote sensing image space-time fusion method based on multi-scale mechanism and attention mechanism
CN112184554A (en) * 2020-10-13 2021-01-05 重庆邮电大学 Remote sensing image fusion method based on residual mixed expansion convolution
CN112329685A (en) * 2020-11-16 2021-02-05 常州大学 Method for detecting crowd abnormal behaviors through fusion type convolutional neural network
CN112529828A (en) * 2020-12-25 2021-03-19 西北大学 Reference data non-sensitive remote sensing image space-time fusion model construction method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WEISHENG LI: "DMNet: A Network Architecture Using Dilated Convolution and Multiscale Mechanisms for Spatiotemporal Fusion of Remote Sensing Images", 《IEEE SENSORS JOURNAL》 *
李昌洁: "条件生成对抗遥感图像时空融合", 《中国图象图形学报》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310883A (en) * 2023-05-17 2023-06-23 山东建筑大学 Agricultural disaster prediction method based on remote sensing image space-time fusion and related equipment
CN116310883B (en) * 2023-05-17 2023-10-20 山东建筑大学 Agricultural disaster prediction method based on remote sensing image space-time fusion and related equipment

Also Published As

Publication number Publication date
CN113128586B (en) 2022-08-23

Similar Documents

Publication Publication Date Title
CN111311490B (en) Video super-resolution reconstruction method based on multi-frame fusion optical flow
CN106910161B (en) Single image super-resolution reconstruction method based on deep convolutional neural network
CN112734646B (en) Image super-resolution reconstruction method based on feature channel division
CN112634137A (en) Hyperspectral and full-color image fusion method based on AE extraction of multi-scale spatial spectrum features
CN112819910B (en) Hyperspectral image reconstruction method based on double-ghost attention machine mechanism network
CN113283444B (en) Heterogeneous image migration method based on generation countermeasure network
CN113177882B (en) Single-frame image super-resolution processing method based on diffusion model
CN114004847B (en) Medical image segmentation method based on graph reversible neural network
CN113240683B (en) Attention mechanism-based lightweight semantic segmentation model construction method
CN112950480A (en) Super-resolution reconstruction method integrating multiple receptive fields and dense residual attention
CN112561799A (en) Infrared image super-resolution reconstruction method
CN114418850A (en) Super-resolution reconstruction method with reference image and fusion image convolution
CN113128586B (en) Spatial-temporal fusion method based on multi-scale mechanism and series expansion convolution remote sensing image
WO2020001046A1 (en) Video prediction method based on adaptive hierarchical kinematic modeling
CN112734645B (en) Lightweight image super-resolution reconstruction method based on feature distillation multiplexing
CN114119356A (en) Method for converting thermal infrared image into visible light color image based on cycleGAN
CN117474781A (en) High spectrum and multispectral image fusion method based on attention mechanism
CN113255585A (en) Face video heart rate estimation method based on color space learning
CN112767277A (en) Depth feature sequencing deblurring method based on reference image
CN110689510B (en) Sparse representation-based image fusion method introducing dictionary information
CN114429424B (en) Remote sensing image super-resolution reconstruction method suitable for uncertain degradation modes
CN116563103A (en) Remote sensing image space-time fusion method based on self-adaptive neural network
CN113066033B (en) Multi-stage denoising system and method for color image
CN114022362A (en) Image super-resolution method based on pyramid attention mechanism and symmetric network
CN114757825A (en) Infrared image super-resolution reconstruction method and device based on feature separation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant