CN113762288A - Multispectral image fusion method based on interactive feature embedding - Google Patents

Multispectral image fusion method based on interactive feature embedding

Info

Publication number
CN113762288A
CN113762288A (application) · CN113762288B (grant) · application number CN202111106858.2A
Authority
CN
China
Prior art keywords: fusion, image, convolution, self, network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111106858.2A
Other languages
Chinese (zh)
Other versions
CN113762288B (en)
Inventor
Zhao Fan (赵凡)
Zhao Wenda (赵文达)
Wu Xue (吴雪)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Normal University
Original Assignee
Liaoning Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Normal University filed Critical Liaoning Normal University
Priority to CN202111106858.2A
Publication of CN113762288A
Application granted
Publication of CN113762288B
Active legal status
Anticipated expiration legal status


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract



The invention provides a multispectral image fusion method based on interactive feature embedding, which belongs to the field of computer vision. The method comprises the following steps: collect multispectral image pairs and preprocess them, including adjusting height and width and extracting image pairs with a sliding window, to obtain the network training dataset; design an interactive feature embedding multispectral image fusion network based on self-supervised learning; design a loss function to supervise training of the network model; during testing, input a multispectral image pair and the network outputs the final fusion result. The invention can effectively improve the feature extraction ability of the network and benefits the retention of important information in the fusion result.


Description

Multispectral image fusion method based on interactive feature embedding
Technical Field
The invention belongs to the field of computer vision, and particularly relates to multispectral image fusion based on interactive feature embedding.
Background
Multispectral image fusion integrates the image characteristics of the same scene captured by multispectral detectors so as to describe scene information more comprehensively and accurately. Multispectral image fusion is part of the image fusion task and has wide application in many areas, such as scene monitoring [1], target recognition, geological exploration, and military use.
Deep learning techniques play an important role in image fusion. Existing deep-learning-based image fusion methods fall mainly into two types: fusion methods based on adversarial networks and fusion methods based on non-adversarial networks. Adversarial fusion methods aim to fuse the main features of the source images by designing a loss function for the adversarial training process. However, this type of method has the following limitations: the network is difficult to optimize, and it is difficult to design a loss function that covers all the important information of the source images. In non-adversarial fusion methods, feature extraction is usually realized in an unsupervised manner, and its quality is difficult to guarantee. Therefore, whether with adversarial learning driven by loss-function design or with unsupervised learning, ignoring any important information in the source images (such as gradient, edge, texture, intensity, and contrast) will cause important features to be lost from the fusion result.
Therefore, the feature extraction capability of the network plays a key role in multi-source image fusion. To improve this capability, the invention provides an interactive feature embedding multispectral image fusion network based on self-supervised learning, which breaks through the technical bottleneck of comprehensively extracting source-image features in existing fusion networks and is significant for promoting deeper application of multispectral images in other fields.
Disclosure of Invention
The invention aims to improve the network feature extraction capability and provides a multispectral image fusion method based on interactive feature embedding.
The technical scheme of the invention is as follows:
a multispectral image fusion method based on interactive feature embedding comprises the following steps:
Step one: make a multispectral image fusion dataset
1) Acquire a multispectral image dataset: source image I_1 and source image I_2;
2) Adjust the multispectral source images I_1 and I_2 from step 1) to a consistent height and width;
3) For the equally sized source images I_1 and I_2 from step 2), slide a window of fixed size and stride from top to bottom and left to right to extract image patches;
4) Flip and mirror the image pairs obtained in step 3) to enlarge the training dataset (a sketch of steps 3 and 4 follows this list);
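As a concrete illustration of steps 3) and 4), a minimal Python sketch of the sliding-window patch extraction and flip/mirror augmentation follows. The patch size and stride are placeholders: the patent fixes them but does not disclose their values.

```python
import numpy as np

def make_training_pairs(i1, i2, patch=64, stride=32):
    """Extract aligned patch pairs with a sliding window (step 3)
    and enlarge the set by flipping and mirroring (step 4).
    i1, i2: registered source images of identical height and width
    (step 2).  patch and stride are illustrative values only."""
    assert i1.shape == i2.shape
    h, w = i1.shape[:2]
    pairs = []
    for top in range(0, h - patch + 1, stride):        # top to bottom
        for left in range(0, w - patch + 1, stride):   # left to right
            p1 = i1[top:top + patch, left:left + patch]
            p2 = i2[top:top + patch, left:left + patch]
            # keep the original patch plus flipped and mirrored copies
            for op in (lambda a: a, np.flipud, np.fliplr):
                pairs.append((op(p1).copy(), op(p2).copy()))
    return pairs
```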
Step two: design an interactive feature embedding multispectral image fusion network with self-supervised learning to realize multispectral image fusion
1) Design a self-supervised feature extraction module comprising two structurally identical branches. Each branch consists of multiple convolutional layers, and the convolution kernel parameters of each layer are 3 * 3 * f, where f is the number of convolution kernels. The hierarchical features extracted by the convolutional layers are denoted F'_m and F''_m, where m indexes the layer and ranges over {1, 2, ..., M}. The two branches take as input the source images I_1 and I_2 of width W and height H and output the source image reconstructions Î_1 and Î_2. The loss function L_1 of this module is expressed as:

L_1 = sum_{n=1}^{2} MSE(Î_n, I_n)  (1)

where MSE denotes the mean squared error, I_n is a source image (I_1 or I_2), and Î_n is its corresponding reconstruction.
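A minimal PyTorch sketch of one extraction branch and the loss in equation (1) follows, using the M = 3, 64/128/256-kernel configuration of the embodiment described later. The single-channel input, the ReLU activations, and the one-layer reconstruction head are assumptions the patent does not state.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExtractionBranch(nn.Module):
    """One branch of the self-supervised feature extraction module:
    M conv layers with 3x3 kernels, returning the hierarchical
    features F'_1..F'_M and a reconstruction of the input."""
    def __init__(self, widths=(64, 128, 256), in_ch=1):
        super().__init__()
        self.convs = nn.ModuleList()
        for w in widths:
            self.convs.append(nn.Conv2d(in_ch, w, 3, padding=1))
            in_ch = w
        self.recon = nn.Conv2d(in_ch, 1, 3, padding=1)  # assumed head

    def forward(self, x):
        feats = []                      # F'_1 ... F'_M
        for conv in self.convs:
            x = F.relu(conv(x))
            feats.append(x)
        return feats, self.recon(x)    # hierarchical features, Î

def loss_l1(rec1, i1, rec2, i2):
    """Equation (1): sum of MSEs between each source image and its
    reconstruction."""
    return F.mse_loss(rec1, i1) + F.mse_loss(rec2, i2)
```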
2) Design an interactive feature embedding module composed of multiple convolutional layers, where the convolution kernel parameters of each layer are 3 * 3 * f and f is the number of convolution kernels. The hierarchical features extracted by its convolutional layers are denoted F_m. The first-layer feature is obtained by convolving the source images I_1 and I_2; the hierarchical features F_m of the second through M-th layers are obtained by convolution from the hierarchical features F'_m and F''_m extracted by the self-supervised feature extraction module, which, read from the definitions below, can be written as:

F_m = C_2(Cat(C_4(F'_m), C_4(F''_m))), M ≥ m ≥ 2  (2)

where C_2 denotes 2 convolution operations, C_4 denotes 4 convolution operations, and Cat denotes the concat operation. From this formula it can be observed that the hierarchical features F_m of the intermediate layers are derived from the hierarchical features F'_m and F''_m extracted by the self-supervised feature extraction module, which ensures that F_m shares low-, mid-, and high-level features with F'_m and F''_m, thereby serving the fusion task.

On the other hand, the hierarchical features F'_m and F''_m extracted by the self-supervised feature extraction module are also derived from the hierarchical features F_m, obtained from F_m after a convolution operation:

F'_m, F''_m = C(F_m), M ≥ m ≥ 1  (3)

Since the features F'_m and F''_m used to reconstruct the source images come from F_m, F_m is guaranteed to contain the main features of the source images, which further serves the fusion task.
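One intermediate stage of the embedding module might look as follows under the reading of equation (2) given above; since the original formula survives only as an image, the exact wiring, like the ReLU activations, is an assumption.

```python
import torch
import torch.nn as nn

def conv_stack(ch_in, ch_out, n):
    """n 3x3 convolutions; the first changes the channel count."""
    layers = [nn.Conv2d(ch_in, ch_out, 3, padding=1), nn.ReLU()]
    for _ in range(n - 1):
        layers += [nn.Conv2d(ch_out, ch_out, 3, padding=1), nn.ReLU()]
    return nn.Sequential(*layers)

class EmbedStage(nn.Module):
    """F_m = C2(Cat(C4(F'_m), C4(F''_m))): four convolutions on each
    branch feature, concatenation, then two convolutions."""
    def __init__(self, ch_branch, ch_out):
        super().__init__()
        self.c4_a = conv_stack(ch_branch, ch_branch, 4)
        self.c4_b = conv_stack(ch_branch, ch_branch, 4)
        self.c2 = conv_stack(2 * ch_branch, ch_out, 2)

    def forward(self, f_a, f_b):
        cat = torch.cat([self.c4_a(f_a), self.c4_b(f_b)], dim=1)
        return self.c2(cat)
```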
3) Output the fusion result. The fusion result I_f is obtained by weighting the source images with the final output W of the interactive feature embedding module:

I_f = I_1 * W + I_2 * (1 - W)  (4)

where W is a weight map obtained from F_M by a convolution operation:

W = C_4(F_M)  (5)

where C_4 denotes four convolution operations;
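A sketch of equations (4) and (5) follows. The sigmoid that bounds the weight map to [0, 1] is an assumption, since the patent does not name the activation of the final layer; the one-kernel fourth convolution matches the embodiment below.

```python
import torch
import torch.nn as nn

def make_weight_head(ch):
    """C4 in equation (5): four 3x3 convolutions mapping F_M to a
    one-channel weight map."""
    return nn.Sequential(
        nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(ch, 1, 3, padding=1),
    )

def fuse(i1, i2, f_M, head):
    """Equation (4): I_f = I_1 * W + I_2 * (1 - W)."""
    w = torch.sigmoid(head(f_M))  # assumed activation
    return i1 * w + i2 * (1.0 - w)
```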
Step three: network training, where the network training process is the process of optimizing a loss function. The loss function of the proposed self-supervised interactive feature embedding multispectral image fusion network consists of two parts: the self-supervised training loss L_1 and the fusion loss L_f. Network training is the process of minimizing the loss function L:

L = L_1 + L_f  (6)

Specifically, L_f is an SSIM-based loss function;
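The patent states only that L_f is SSIM-based, so the following sketch pairs the reconstruction loss of equation (1) with one plausible choice of fusion loss: the mean of (1 - SSIM) against both inputs, with SSIM computed over a uniform window rather than the Gaussian window of the original SSIM formulation.

```python
import torch.nn.functional as F

def ssim(x, y, win=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Mean SSIM over a uniform win x win window."""
    pad = win // 2
    mu_x = F.avg_pool2d(x, win, 1, pad)
    mu_y = F.avg_pool2d(y, win, 1, pad)
    var_x = F.avg_pool2d(x * x, win, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, pad) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, pad) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return s.mean()

def total_loss(rec1, i1, rec2, i2, fused):
    """Equation (6): L = L1 + Lf, with an assumed form for Lf."""
    l1 = F.mse_loss(rec1, i1) + F.mse_loss(rec2, i2)
    lf = 1 - 0.5 * (ssim(fused, i1) + ssim(fused, i2))
    return l1 + lf
```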
Step four: testing stage. Input two multispectral images I_1 and I_2 of width W and height H; the network outputs the corresponding reconstructions Î_1 and Î_2 and the final fusion result I_f.
The invention has the following beneficial effects compared with the prior art: it provides a self-supervised multispectral image fusion method whose self-supervision mechanism can effectively improve the feature extraction capability of the network. The proposed interactive feature embedding structure serves as a bridge connecting the image fusion and reconstruction tasks and gradually embeds the key information acquired by self-supervised learning into the fusion task, thereby ultimately improving fusion performance.
Drawings
FIG. 1 is a schematic diagram of the basic structure of the process of the present invention.
Fig. 2 is a schematic diagram of the fusion result of the present embodiment.
Detailed Description
The specific embodiment of the multispectral image fusion method based on interactive feature embedding is explained in detail as follows:
Step one: make the multispectral image fusion dataset, specifically:
1) Acquire a multispectral image dataset: source image I_1 and source image I_2;
2) Adjust the multispectral source images I_1 and I_2 from step 1) to a consistent height and width;
3) For the equally sized source images I_1 and I_2 from step 2), slide a window of fixed size and stride from top to bottom and left to right to extract image patches;
4) Flip and mirror the image pairs obtained in step 3) to enlarge the training dataset;
Step two: as shown in fig. 1, design the self-supervised interactive feature embedding multispectral image fusion network to realize multispectral image fusion, including:
1) Design of the self-supervised feature extraction module. As shown in fig. 1, the module comprises two structurally identical branches. In this embodiment, each branch is composed of M (M = 3) convolutional layers, each with convolution kernel parameters 3 * 3 * f (f is the number of convolution kernels). The first layer has 64 convolution kernels, the second 128, and the third 256. The hierarchical features extracted by the convolutional layers are denoted F'_m and F''_m (m indexes the layer, ranging over {1, 2, 3}). The two branches take as input the source images I_1 and I_2 of width W and height H and output the source image reconstructions Î_1 and Î_2. The loss function L_1 of this module is expressed as:

L_1 = sum_{n=1}^{2} MSE(Î_n, I_n)  (1)

where MSE denotes the mean squared error, I_n is a source image (I_1 or I_2), and Î_n is its corresponding reconstruction.
2) Design of the interactive feature embedding module. As shown in fig. 1, in this embodiment the module is composed of M + 1 (M = 3) convolutional layers, each with convolution kernel parameters 3 * 3 * f (f is the number of convolution kernels). The first layer has 64 convolution kernels, the second 128, the third 256, and the fourth 1. The hierarchical features extracted by its convolutional layers are denoted F_m. The first-layer feature F_1 is obtained by convolving the source images I_1 and I_2; the hierarchical features F_m of the second through M-th layers are obtained by convolution from the hierarchical features F'_m and F''_m extracted by the self-supervised feature extraction module, as in equation (2):

F_m = C_2(Cat(C_4(F'_m), C_4(F''_m))), M ≥ m ≥ 2  (2)

where C_2 denotes 2 convolution operations, C_4 denotes 4 convolution operations, and Cat denotes the concat operation. From this formula it can be observed that the hierarchical features F_m of the intermediate layers are derived from the hierarchical features F'_m and F''_m extracted by the self-supervised feature extraction module, which ensures that F_m shares low-, mid-, and high-level features with F'_m and F''_m to serve the fusion task.

On the other hand, the hierarchical features F'_m and F''_m extracted by the self-supervised feature extraction module are also derived from the hierarchical features F_m, obtained from F_m after a convolution operation:

F'_m, F''_m = C(F_m), M ≥ m ≥ 1  (3)

Since the features F'_m and F''_m used to reconstruct the source images come from F_m, F_m is guaranteed to contain the main features of the source images, which further serves the fusion task. The interactive feature embedding mechanism can thus make full use of the self-supervision mechanism and prevent important features from being lost in the fusion result.
3) Output of the fusion result. As shown in fig. 1, the fusion result I_f is obtained by weighting the source images with the final output W of the interactive feature embedding module:

I_f = I_1 * W + I_2 * (1 - W)  (4)

where W is a weight map obtained from F_M by a convolution operation:

W = C_4(F_M)  (5)

where C_4 denotes four convolution operations.
Step three: and (5) network training. The network training process is a process that optimizes a loss function. The interactive feature embedded multispectral image fusion network loss function provided by the invention consists of two parts: loss of self-supervised training, i.e. L1(shown in formula 1); loss of fusion, i.e. Lf. Network training is the process of minimizing the loss function L,
L=L1+Lf (6)
in particular, LfIs a loss function based on SSIM.
The parameters in the network training process are set as follows:
base_lr: 1e-4 (learning rate)
momentum: 0.9 (momentum)
weight_decay: 5e-3 (weight decay)
batch_size: 1 (batch size)
solver_mode: GPU (this example trains on a GPU)
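The Caffe-style solver fields above suggest momentum SGD; a PyTorch equivalent might look as follows. The optimizer type and the shape of the model's forward pass (returning both reconstructions and the fused image) are assumptions.

```python
import torch

def make_optimizer(model):
    """Momentum SGD with the solver settings listed above."""
    return torch.optim.SGD(model.parameters(), lr=1e-4,
                           momentum=0.9, weight_decay=5e-3)

def train_step(model, optimizer, i1, i2, loss_fn):
    """One iteration at batch size 1; tensors follow the model's
    device (GPU when available)."""
    device = next(model.parameters()).device
    i1, i2 = i1.to(device), i2.to(device)
    optimizer.zero_grad()
    rec1, rec2, fused = model(i1, i2)  # assumed output triple
    loss = loss_fn(rec1, i1, rec2, i2, fused)
    loss.backward()
    optimizer.step()
    return loss.item()
```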
Step four: and (5) a testing stage. Inputting two multispectral images I with width W and height H1、I2The model of the invention outputs its corresponding reconstructed result
Figure BDA0003272662360000051
And final fusion result If. As shown in fig. 2, compared to other fusion methodsThe fusion result obtained by the method can better retain the main characteristics in the source image, including the brightness characteristic and the texture characteristic.

Claims (1)

1. A multispectral image fusion method based on interactive feature embedding, characterized in that the steps are as follows:

Step one: make a multispectral image fusion dataset
1) Acquire a multispectral image dataset: source image I_1 and source image I_2;
2) Adjust the multispectral source images I_1 and I_2 from step 1) to a consistent height and width;
3) For the equally sized source images I_1 and I_2 from step 2), slide a window of fixed size and stride from top to bottom and left to right to extract image patches;
4) Flip and mirror the image pairs obtained in step 3) to enlarge the training dataset;

Step two: design a self-supervised interactive feature embedding multispectral image fusion network to realize multispectral image fusion
1) Design a self-supervised feature extraction module comprising two structurally identical branches; each branch consists of multiple convolutional layers, and the convolution kernel parameters of each layer are 3*3*f, where f is the number of convolution kernels; the hierarchical features extracted by the convolutional layers are denoted F'_m and F''_m, where m indexes the layer and ranges over {1, 2, ..., M}; the two branches take as input the source images I_1 and I_2 of width W and height H and output the source image reconstructions Î_1 and Î_2; the loss function L_1 of this module is expressed as:

L_1 = sum_{n=1}^{2} MSE(Î_n, I_n)  (1)

where MSE denotes the mean squared error, I_n is a source image (I_1 or I_2), and Î_n is its corresponding reconstruction;

2) Design an interactive feature embedding module composed of multiple convolutional layers, where the convolution kernel parameters of each layer are 3*3*f and f is the number of convolution kernels; the hierarchical features extracted by its convolutional layers are denoted F_m; the first-layer feature is obtained by convolving the source images I_1 and I_2, and the hierarchical features F_m of the second through M-th layers are obtained by convolution from the hierarchical features F'_m and F''_m extracted by the self-supervised feature extraction module:

F_m = C_2(Cat(C_4(F'_m), C_4(F''_m))), M ≥ m ≥ 2  (2)

where C_2 denotes 2 convolution operations, C_4 denotes 4 convolution operations, and Cat denotes the concat operation; from this formula it can be observed that the hierarchical features F_m of the intermediate layers are derived from the hierarchical features F'_m and F''_m extracted by the self-supervised feature extraction module, which ensures that F_m shares low-, mid-, and high-level features with F'_m and F''_m, thereby serving the fusion task;

on the other hand, the hierarchical features F'_m and F''_m extracted by the self-supervised feature extraction module are also derived from the hierarchical features F_m, obtained from F_m after a convolution operation:

F'_m, F''_m = C(F_m), M ≥ m ≥ 1  (3)

since the features F'_m and F''_m used to reconstruct the source images come from F_m, F_m is guaranteed to contain the main features of the source images, which further serves the fusion task;

3) Output the fusion result; the fusion result I_f is obtained by weighting the source images with the final output W of the interactive feature embedding module:

I_f = I_1 * W + I_2 * (1 - W)  (4)

where W is a weight map obtained from F_M by a convolution operation:

W = C_4(F_M)  (5)

where C_4 denotes four convolution operations;

Step three: network training, where the network training process is the process of optimizing a loss function; the loss function of the proposed self-supervised interactive feature embedding multispectral image fusion network consists of two parts: the self-supervised training loss L_1 and the fusion loss L_f; network training is the process of minimizing the loss function L:

L = L_1 + L_f  (6)

specifically, L_f is an SSIM-based loss function;

Step four: testing stage; input two multispectral images I_1 and I_2 of width W and height H, and output the corresponding reconstructions Î_1 and Î_2 and the final fusion result I_f.
CN202111106858.2A 2021-09-22 2021-09-22 Multispectral image fusion method based on interactive feature embedding Active CN113762288B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111106858.2A CN113762288B (en) 2021-09-22 2021-09-22 Multispectral image fusion method based on interactive feature embedding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111106858.2A CN113762288B (en) 2021-09-22 2021-09-22 Multispectral image fusion method based on interactive feature embedding

Publications (2)

Publication Number Publication Date
CN113762288A (en) 2021-12-07
CN113762288B (en) 2022-11-29

Family

ID=78796650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111106858.2A Active CN113762288B (en) 2021-09-22 2021-09-22 Multispectral image fusion method based on interactive feature embedding

Country Status (1)

Country Link
CN (1) CN113762288B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886345A (en) * 2019-02-27 2019-06-14 清华大学 Self-supervised learning model training method and device based on relational reasoning
US20210027417A1 (en) * 2019-07-22 2021-01-28 Raytheon Company Machine learned registration and multi-modal regression
KR20210112869A (en) * 2020-03-06 2021-09-15 세종대학교산학협력단 Single-shot adaptive fusion method and apparatus for robust multispectral object detection
CN112465733A (en) * 2020-08-31 2021-03-09 长沙理工大学 Remote sensing image fusion method, device, medium and equipment based on semi-supervised learning
CN113095249A (en) * 2021-04-19 2021-07-09 大连理工大学 Robust multi-mode remote sensing image target detection method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Mingkai Zheng et al.: "ReSSL: Relational Self-Supervised Learning with Weak Augmentation", Advances in Neural Information Processing Systems 34 (NeurIPS 2021) *
Tian Songwang et al.: "Self-supervised fusion method for multi-band images based on multiple discriminators" (基于多判别器的多波段图像自监督融合方法), Computer Science (《计算机科学》) *

Also Published As

Publication number Publication date
CN113762288B (en) 2022-11-29

Similar Documents

Publication Publication Date Title
CN112465827B (en) Contour perception multi-organ segmentation network construction method based on class-by-class convolution operation
CN111709903B (en) Infrared and visible light image fusion method
CN111047515A (en) Cavity convolution neural network image super-resolution reconstruction method based on attention mechanism
CN110119780A (en) Based on the hyperspectral image super-resolution reconstruction method for generating confrontation network
CN113962893A (en) Face image restoration method based on multi-scale local self-attention generation countermeasure network
CN110097528A (en) A kind of image interfusion method based on joint convolution autoencoder network
CN109949214A (en) An image style transfer method and system
CN110490219B (en) A Method for Seismic Data Reconstruction Based on Texture Constrained U-net Network
CN112580670B (en) Hyperspectral-spatial-spectral combined feature extraction method based on transfer learning
CN108830818A (en) A kind of quick multi-focus image fusing method
CN106097253B (en) A single image super-resolution reconstruction method based on block rotation and sharpness
CN115511767B (en) Self-supervised learning multi-modal image fusion method and application thereof
CN109035267A (en) A kind of image object based on deep learning takes method
CN113706407B (en) Infrared and visible light image fusion method based on separation and characterization
CN118710507A (en) Underwater image enhancement method based on Mamba hybrid architecture based on space-frequency fusion
CN116824525B (en) Image information extraction method based on traffic road image
CN116363036A (en) Infrared and visible light image fusion method based on visual enhancement
Yang et al. MSE-Net: generative image inpainting with multi-scale encoder
CN114494828A (en) Grape disease identification method and device, electronic equipment and storage medium
CN118762009B (en) Colonoscopy polyp image detection method based on Mamba and YOLOv8
Shao et al. SRWGANTV: image super-resolution through wasserstein generative adversarial networks with total variational regularization
CN114022362A (en) Image super-resolution method based on pyramid attention mechanism and symmetric network
CN116342392B (en) Single remote sensing image super-resolution method based on deep learning
CN113762288A (en) Multispectral image fusion method based on interactive feature embedding
CN118154555A (en) Plant multi-organ CT image phenotype analysis method based on label efficient learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Zhao Fan, Zhao Wenda, Wu Xue, Liu Yu, Zhang Yiming
Inventor before: Zhao Fan, Zhao Wenda, Wu Xue
GR01 Patent grant