CN113762288B - Multispectral image fusion method based on interactive feature embedding - Google Patents
- Publication number: CN113762288B (application CN202111106858.2A)
- Authority: CN (China)
- Prior art keywords: fusion, image, convolution, feature, source image
- Prior art date: 2021-09-22
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention provides a multispectral image fusion method based on interactive feature embedding, which belongs to the field of computer vision and comprises the following steps: collecting multispectral image pairs and preprocessing them, including height and width adjustment, sliding-window patch extraction and data augmentation by flipping and mirroring, to build a network training dataset; designing an interactive feature embedding multispectral image fusion network based on self-supervised learning; designing a loss function to supervise network model training; in the testing stage, a multispectral image pair is input and the network outputs the final image fusion result. The invention effectively improves the feature extraction capability of the network and helps retain important information in the fusion result.
Description
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a multispectral image fusion method based on interactive feature embedding.
Background
Multispectral image fusion integrates image features of the same scene captured by multispectral detectors so as to describe scene information more comprehensively and accurately. As a branch of the image fusion task, multispectral image fusion has wide applications in many areas, such as scene monitoring [1], target identification, geological exploration, and military applications.
Deep learning techniques play an important role in image fusion. Existing deep-learning-based image fusion methods fall mainly into two categories: fusion methods based on adversarial networks and fusion methods based on non-adversarial networks. Adversarial fusion methods aim to fuse the main features of the source images by designing a loss function for the adversarial training process. However, such methods have the following limitations: the network is difficult to optimize, and it is difficult to design a loss function that covers all the important information of the source images. In non-adversarial fusion methods, feature extraction is usually performed in an unsupervised manner, so the quality of the extracted features is hard to guarantee. Therefore, whether for adversarial learning based on loss function design or for unsupervised learning, ignoring any important information in the source images (such as gradient, edge, texture, intensity and contrast) will cause important features to be lost from the fusion result.
Therefore, the feature extraction capability of the network plays a key role in multi-source image fusion. To improve this capability, the invention provides an interactive feature embedding multispectral image fusion network based on self-supervised learning, which breaks through the technical bottleneck of comprehensively extracting source image features in existing fusion networks and is of great significance for promoting deeper applications of multispectral images in other fields.
Disclosure of Invention
The invention aims to improve the feature extraction capability of the network, and provides a multispectral image fusion method based on interactive feature embedding.
The technical scheme of the invention is as follows:
a multispectral image fusion method based on interactive feature embedding comprises the following steps:
Step one: making a multispectral image fusion dataset
1) Acquiring a multispectral image dataset containing a source image I_1 and a source image I_2;
2) Adjusting the multispectral source images I_1, I_2 from step 1) to the same height and width;
3) Cropping image patches from the equally sized source images I_1, I_2 of step 2) by sliding a window of fixed size and stride from left to right and from top to bottom;
4) Flipping and mirroring the image pairs obtained in step 3) to enlarge the training dataset;
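As a non-limiting illustration of Step one, the following Python sketch shows one possible preprocessing pipeline (size adjustment, sliding-window cropping, flipping and mirroring). The patch size, stride, grayscale conversion and the use of PIL/NumPy are assumptions of the sketch and are not specified by this description.

```python
# Hedged sketch of Step one (dataset preparation); patch size and stride are assumed values.
import numpy as np
from PIL import Image

def make_training_patches(path1, path2, patch=120, stride=14):
    """Resize a multispectral image pair to the same size, crop sliding-window
    patches, and enlarge the set by flipping and mirroring (assumed settings)."""
    img1 = Image.open(path1).convert("L")
    img2 = Image.open(path2).convert("L")
    # 2) adjust both source images to the same height and width
    w, h = img1.size
    img2 = img2.resize((w, h))
    a1 = np.asarray(img1, np.float32) / 255.0
    a2 = np.asarray(img2, np.float32) / 255.0

    pairs = []
    # 3) slide a fixed-size window left-to-right, top-to-bottom
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            p1 = a1[y:y + patch, x:x + patch]
            p2 = a2[y:y + patch, x:x + patch]
            # 4) enlarge the training set by flipping and mirroring
            for op in (lambda t: t, np.flipud, np.fliplr):
                pairs.append((op(p1).copy(), op(p2).copy()))
    return pairs
```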
Step two: designing an interactive feature embedding multispectral image fusion network for self-supervised learning to realize multispectral image fusion
1) Designing a self-supervised feature extraction module, wherein the module comprises two branches with the same structure; each branch consists of several convolution layers, and the convolution kernel parameter of each layer is 3 x f, wherein f is the number of convolution kernels; the hierarchical features extracted by the convolution layers are denoted F'_m, F''_m, where m denotes the m-th layer and ranges over {1, 2, ..., M}; the two branches take as input the source images I_1, I_2 of width W and height H, and output the source image reconstruction results Î_1, Î_2. The loss function L_1 of the module is expressed as:

L_1 = Σ_{n=1}^{2} MSE(I_n, Î_n)   (1)

where MSE denotes the mean square error, I_n denotes the source images I_1, I_2, and Î_n denotes the corresponding reconstruction results Î_1 and Î_2.
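As a non-limiting illustration, the following PyTorch sketch shows one possible form of a self-supervised feature extraction branch and the loss L_1 of equation (1). The channel widths (64/128/256) follow the embodiment described below; the activation functions and the single-convolution reconstruction head are assumptions of the sketch.

```python
# Hedged PyTorch sketch of one self-supervised feature extraction branch;
# channel widths follow the embodiment (64/128/256), everything else is assumed.
import torch
import torch.nn as nn

class EncoderBranch(nn.Module):
    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        layers, in_ch = [], 1
        for out_ch in channels:                          # 3x3 convolution layers
            layers.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True)))
            in_ch = out_ch
        self.blocks = nn.ModuleList(layers)
        self.recon = nn.Conv2d(in_ch, 1, 3, padding=1)   # assumed reconstruction head

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)                              # hierarchical features F'_1 .. F'_M
        return feats, self.recon(x)                      # features and reconstruction Î

# self-supervised loss L_1: mean-squared reconstruction error for both branches
def loss_l1(i1, i2, r1, r2):
    mse = nn.functional.mse_loss
    return mse(r1, i1) + mse(r2, i2)
```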
2) Designing an interactive feature embedding module, wherein the module consists of several convolution layers, and the convolution kernel parameter of each layer is 3 x f, wherein f is the number of convolution kernels; the hierarchical features extracted by the convolution layers are denoted F_m; the hierarchical feature of the first layer is obtained by convolving the source images I_1, I_2, and the hierarchical features F_m of the second to M-th layers are obtained through a convolution operation from the hierarchical features F'_m, F''_m extracted by the self-supervised feature extraction module (equation (2)), wherein C_2 denotes 2 convolution operations, C_4 denotes 4 convolution operations, and cat denotes the concat operation. It can be observed from equation (2) that the hierarchical features F_m of the intermediate layers are derived from the hierarchical features F'_m, F''_m extracted by the self-supervised feature extraction module, which ensures that F_m shares low-, mid- and high-level features with F'_m, F''_m and thereby serves the fusion task;
On the other hand, the hierarchical features F'_m, F''_m extracted by the self-supervised feature extraction module are in turn derived from the hierarchical features F_m, being obtained from F_m by a convolution operation, expressed as:

F'_m, F''_m = C(F_m),  M ≥ m ≥ 1   (3)

Given that the features F'_m, F''_m used for reconstructing the source images come from F_m, this also ensures that F_m contains the main features of the source images and thereby serves the fusion task;
3) Outputting the fusion result; the fusion result I_f is obtained as the weighted sum of the source images, using the weight map W produced by the final output of the interactive feature embedding module:

I_f = I_1 * W + I_2 * (1 - W)   (4)

where the weight map W is obtained from F_M by a convolution operation:

W = C_4(F_M)   (5)

where C_4 denotes the four convolution operations;
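As a non-limiting illustration of equations (4) and (5), the following PyTorch sketch shows one possible weight head C_4 and the weighted fusion. The internal channel widths and the sigmoid used to keep W in [0, 1] are assumptions of the sketch.

```python
# Hedged sketch of the fusion output (equations (4)-(5)); the layers inside the
# four convolution operations C_4 are assumed, not specified by the description.
import torch
import torch.nn as nn

class WeightHead(nn.Module):
    """Maps the last embedding feature F_M to a one-channel weight map W."""
    def __init__(self, in_ch=256, mid=64):
        super().__init__()
        self.c4 = nn.Sequential(                       # four stacked 3x3 convolutions (assumed widths)
            nn.Conv2d(in_ch, mid, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(mid, 1, 3, padding=1), nn.Sigmoid())  # sigmoid keeps W in [0, 1] (assumption)

    def forward(self, f_m):
        return self.c4(f_m)

def fuse(i1, i2, w):
    return i1 * w + i2 * (1.0 - w)                     # I_f = I_1*W + I_2*(1-W)
```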
Step three: network training, wherein the network training process is the process of optimizing a loss function; the loss function of the proposed self-supervised interactive feature embedding multispectral image fusion network consists of two parts: the self-supervised training loss L_1 and the fusion loss L_f; network training is the process of minimizing the loss function L,

L = L_1 + L_f   (6)

where, specifically, L_f is an SSIM-based loss function;
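As a non-limiting illustration of equation (6), the following sketch combines the reconstruction loss L_1 with an SSIM-based fusion loss L_f. The description only states that L_f is SSIM-based; the particular form below (one minus a simplified single-window SSIM against each source image) is an assumption of the sketch.

```python
# Hedged sketch of the total training loss L = L_1 + L_f (equation (6)).
# The exact SSIM-based form of L_f is assumed, not taken from the description.
import torch

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified single-window SSIM over the whole image (illustration only)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def total_loss(i1, i2, r1, r2, i_f):
    l1 = torch.nn.functional.mse_loss(r1, i1) + torch.nn.functional.mse_loss(r2, i2)
    l_f = (1 - ssim_global(i_f, i1)) + (1 - ssim_global(i_f, i2))  # assumed SSIM-based term
    return l1 + l_f
```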
Step four: testing stage; two multispectral images I_1, I_2 of width W and height H are input, and the corresponding reconstruction results Î_1, Î_2 and the final fusion result I_f are output.
The invention has the following beneficial effects compared with the prior art: the invention provides a self-supervised multispectral image fusion method that effectively improves the feature extraction capability of the network through a self-supervision mechanism. The invention further provides an interactive feature embedding structure that serves as a bridge connecting the image fusion and reconstruction tasks, gradually embeds the key information acquired by self-supervised learning into the fusion task, and ultimately improves fusion performance.
Drawings
FIG. 1 is a schematic diagram of the basic structure of the process of the present invention.
FIG. 2 is a schematic diagram of the fusion result of the present embodiment.
Detailed Description
A specific embodiment of the multispectral image fusion method based on interactive feature embedding is explained in detail as follows:
Step one: the multispectral image fusion dataset is produced as follows:
1) Acquiring a multispectral image dataset containing a source image I_1 and a source image I_2;
2) Adjusting the multispectral source images I_1, I_2 from step 1) to the same height and width;
3) Cropping image patches from the equally sized source images I_1, I_2 of step 2) by sliding a window of fixed size and stride from left to right and from top to bottom;
4) Flipping and mirroring the image pairs obtained in step 3) to enlarge the training dataset;
Step two: as shown in FIG. 1, an interactive feature embedding multispectral image fusion network for self-supervised learning is designed to realize multispectral image fusion, including:
1) Self-supervised feature extraction module design. As shown in FIG. 1, the module comprises two structurally identical branches. In this embodiment, each branch consists of M (M = 3) convolution layers, each layer having a convolution kernel parameter of 3 × f (f is the number of convolution kernels). The number of convolution kernels is 64 in the first layer, 128 in the second layer, and 256 in the third layer. The hierarchical features extracted by the convolution layers are denoted F'_m, F''_m (m denotes the m-th layer and ranges over {1, 2, 3}). The two branches take as input the source images I_1, I_2 of width W and height H, and output the source image reconstruction results Î_1, Î_2. The loss function L_1 of the module is expressed as:

L_1 = Σ_{n=1}^{2} MSE(I_n, Î_n)   (1)

where MSE denotes the mean square error, I_n denotes the source images I_1, I_2, and Î_n denotes the corresponding reconstruction results Î_1 and Î_2.
2) Interactive feature embedding module design. In this embodiment, as shown in FIG. 1, the module consists of M + 1 (M = 3) convolution layers, each layer having a convolution kernel parameter of 3 × f (f is the number of convolution kernels). The number of convolution kernels is 64 in the first layer, 128 in the second layer, 256 in the third layer, and 1 in the fourth layer. The hierarchical features extracted by the convolution layers are denoted F_m. The hierarchical feature F_1 of the first layer is obtained by convolving the source images I_1, I_2, and the hierarchical features F_m of the second to M-th layers are obtained through a convolution operation from the hierarchical features F'_m, F''_m extracted by the self-supervised feature extraction module (equation (2)), where C_2 denotes 2 convolution operations, C_4 denotes 4 convolution operations, and cat denotes the concat operation. It can be observed from equation (2) that the hierarchical features F_m of the intermediate layers are derived from the hierarchical features F'_m, F''_m extracted by the self-supervised feature extraction module, which ensures that F_m shares low-, mid- and high-level features with F'_m, F''_m so as to serve the fusion task.
Hierarchical feature F 'extracted by self-supervision feature extraction module on the other hand' m ,F” m Also derived from the hierarchical features F m From F m Obtained after a convolution operation, expressed as:
F' m ,F” m =C(F m ),M≥m≥1 (3)
in view of feature F 'for reconstructing the source image' m ,F” m From F m This also ensures F m The fusion task is served by the fusion task. Therefore, the interactive feature embedding mechanism can fully utilize the self-supervision mechanism, thereby avoiding missing important features in the fusion result.
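As a non-limiting illustration, the following PyTorch sketch shows one possible forward pass of the interactive feature embedding module. Equation (2) is not reproduced in the text above, so the exact combination of the previous embedding feature with the encoder features (channel concatenation followed by two convolutions, matching cat and C_2) is an assumption of the sketch; the channel widths 64/128/256 follow this embodiment.

```python
# Hedged sketch of the interactive feature embedding module; how F_{m-1} is
# combined with the encoder features F'_{m-1}, F''_{m-1} is an assumption.
import torch
import torch.nn as nn

class InteractiveEmbedding(nn.Module):
    def __init__(self, widths=(64, 128, 256)):
        super().__init__()
        # F_1 is computed from the two concatenated source images after convolution
        self.first = nn.Sequential(nn.Conv2d(2, widths[0], 3, padding=1), nn.ReLU(inplace=True))
        self.mix = nn.ModuleList()
        for m in range(1, len(widths)):
            in_ch = widths[m - 1] * 3                  # F_{m-1} concatenated with F'_{m-1}, F''_{m-1}
            self.mix.append(nn.Sequential(             # two convolutions per level (C_2, assumed)
                nn.Conv2d(in_ch, widths[m], 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(widths[m], widths[m], 3, padding=1), nn.ReLU(inplace=True)))

    def forward(self, i1, i2, f1_list, f2_list):
        f = self.first(torch.cat([i1, i2], dim=1))     # F_1
        feats = [f]
        # F_2 .. F_M mix the previous embedding feature with the encoder features
        for m, block in enumerate(self.mix, start=1):
            f = block(torch.cat([f, f1_list[m - 1], f2_list[m - 1]], dim=1))
            feats.append(f)
        return feats                                    # [F_1, ..., F_M]
```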
3) Outputting the fusion result. As shown in FIG. 1, the fusion result I_f is obtained as the weighted sum of the source images, using the weight map W produced by the final output of the interactive feature embedding module:

I_f = I_1 * W + I_2 * (1 - W)   (4)

where the weight map W is obtained from F_M by a convolution operation:

W = C_4(F_M)   (5)

where C_4 denotes the four convolution operations.
Step three: and (5) network training. The network training process is a process that optimizes a loss function. The interactive feature embedded multispectral image fusion network loss function provided by the invention consists of two parts: loss of self-supervised training, i.e. L 1 (shown in formula 1); loss of fusion, i.e. L f . Network training is the process of minimizing the loss function L,
L=L 1 +L f (6)
in particular, L f Is a loss function based on SSIM.
The parameters in the network training process are set as follows:
- base_lr: 1e-4 (learning rate)
- momentum: 0.9
- weight_decay: 5e-3
- batch_size: 1
- solver_mode: GPU (this embodiment trains on a GPU)
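The parameter names above resemble a Caffe-style solver configuration. As a non-limiting illustration, a roughly equivalent PyTorch setup could look as follows; the choice of SGD is an assumption, since the optimizer type is not stated.

```python
# Hedged sketch of the training configuration; SGD with momentum is assumed.
import torch

def build_optimizer(model):
    return torch.optim.SGD(model.parameters(),
                           lr=1e-4,            # base_lr
                           momentum=0.9,       # momentum
                           weight_decay=5e-3)  # weight_decay

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # solver_mode: GPU
# batch_size = 1 would be passed to the DataLoader when iterating the training patches
```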
Step four: testing. Two multispectral images I_1, I_2 of width W and height H are input, and the model of the invention outputs the corresponding reconstruction results Î_1, Î_2 and the final fusion result I_f. As shown in FIG. 2, compared with other fusion methods, the fusion result obtained by the method of the invention better retains the main features of the source images, including luminance features and texture features.
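As a non-limiting illustration of the test stage, the following sketch wires together the modules sketched earlier in this description (EncoderBranch, InteractiveEmbedding and WeightHead are assumed names from those sketches) for single-channel inputs of shape (1, 1, H, W).

```python
# Hedged end-to-end usage sketch for the test stage, under the module assumptions above.
import torch

@torch.no_grad()
def run_test(encoder1, encoder2, embedding, weight_head, i1, i2):
    feats1, recon1 = encoder1(i1)                 # reconstruction of source image 1
    feats2, recon2 = encoder2(i2)                 # reconstruction of source image 2
    f_list = embedding(i1, i2, feats1, feats2)    # hierarchical features F_1 .. F_M
    w = weight_head(f_list[-1])                   # weight map W from F_M
    i_f = i1 * w + i2 * (1.0 - w)                 # final fusion result I_f
    return recon1, recon2, i_f
```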
Claims (1)
1. A multispectral image fusion method based on interactive feature embedding is characterized by comprising the following steps:
Step one: making a multispectral image fusion dataset
1) Acquiring a multispectral image dataset containing a source image I_1 and a source image I_2;
2) Adjusting the multispectral source images I_1, I_2 from step 1) to the same height and width;
3) Cropping image patches from the equally sized source images I_1, I_2 of step 2) by sliding a window of fixed size and stride from left to right and from top to bottom;
4) Flipping and mirroring the image pairs obtained in step 3) to enlarge the training dataset;
Step two: designing an interactive feature embedding multispectral image fusion network for self-supervised learning to realize multispectral image fusion
1) Designing a self-supervised feature extraction module, wherein the module comprises two branches with the same structure; each branch consists of several convolution layers, and the convolution kernel parameter of each layer is 3 x f, wherein f is the number of convolution kernels; the hierarchical features extracted by the convolution layers are denoted F'_m, F''_m, wherein m denotes the m-th layer and ranges over {1, 2, ..., M}; the two branches take as input the source images I_1, I_2 of width W and height H, and output the source image reconstruction results Î_1, Î_2; the loss function L_1 of the module is expressed as:

L_1 = Σ_{n=1}^{2} MSE(I_n, Î_n)   (1)

wherein MSE denotes the mean square error, I_n denotes the source images I_1, I_2, and Î_n denotes the corresponding reconstruction results Î_1 and Î_2;
2) Designing an interactive feature embedding module, wherein the module consists of several convolution layers, and the convolution kernel parameter of each layer is 3 x f, wherein f is the number of convolution kernels; the hierarchical features extracted by the convolution layers are denoted F_m; wherein the hierarchical feature of the first layer is obtained by convolving the source images I_1, I_2, and the hierarchical features F_m of the second to M-th layers are obtained through a convolution operation from the hierarchical features F'_{m-1}, F''_{m-1} extracted by the self-supervised feature extraction module (equation (2)), wherein C_2 denotes 2 convolution operations, C_4 denotes 4 convolution operations, and cat denotes the concat operation; it can be observed from equation (2) that the hierarchical features F_m of the intermediate layers are derived from the hierarchical features F'_m, F''_m extracted by the self-supervised feature extraction module, which ensures that F_m shares low-, mid- and high-level features with F'_m, F''_m so as to serve the fusion task;
on the other hand, the hierarchical features F'_m, F''_m extracted by the self-supervised feature extraction module are in turn derived from the hierarchical features F_m, being obtained from F_m by a convolution operation, expressed as:

F'_m, F''_m = C(F_m),  M ≥ m ≥ 1   (3)

given that the features F'_m, F''_m used for reconstructing the source images come from F_m, this also ensures that F_m contains the main features of the source images and thereby serves the fusion task;
3) Outputting a fusion result; the fusion result I_f is the weighted sum of the source images I_1 and I_2 with the weights W and (1 - W), where W is the weight map produced by the final output of the interactive feature embedding module:

I_f = I_1 * W + I_2 * (1 - W)   (4)

wherein the weight map W is obtained from F_M by a convolution operation:

W = C_4(F_M)   (5)

wherein C_4 denotes four convolution operations;
Step three: network training, wherein the network training process is a process of optimizing a loss function; the loss function of the self-supervised interactive feature embedding multispectral image fusion network consists of two parts: the self-supervised training loss L_1 and the fusion loss L_f; network training is the process of minimizing the loss function L,

L = L_1 + L_f   (6)

wherein, specifically, L_f is an SSIM-based loss function.
Priority Applications (1)
- CN202111106858.2A (CN113762288B): Multispectral image fusion method based on interactive feature embedding — priority date 2021-09-22, filing date 2021-09-22
Publications (2)
- CN113762288A — published 2021-12-07
- CN113762288B — published 2022-11-29
Family
- ID: 78796650
Family Applications (1)
- CN202111106858.2A (Active): CN113762288B — Multispectral image fusion method based on interactive feature embedding, priority/filing date 2021-09-22
Country Status (1)
- CN: CN113762288B
Family Cites Families (1)
- WO2021016352A1 (Raytheon Company) — Machine learned registration and multi-modal regression; priority 2019-07-22, published 2021-01-28
- 2021-09-22: CN application CN202111106858.2A filed; granted as CN113762288B (Active)
Patent Citations (4)
- CN109886345A (清华大学) — Self-supervised learning model training method and device based on relation inference; priority 2019-02-27, published 2019-06-14
- KR20210112869A (세종대학교산학협력단) — Single-shot adaptive fusion method and apparatus for robust multispectral object detection; priority 2020-03-06, published 2021-09-15
- CN112465733A (长沙理工大学) — Remote sensing image fusion method, device, medium and equipment based on semi-supervised learning; priority 2020-08-31, published 2021-03-09
- CN113095249A (大连理工大学) — Robust multi-mode remote sensing image target detection method; priority 2021-04-19, published 2021-07-09
Non-Patent Citations (2)
- Mingkai Zheng, et al.; "ReSSL: Relational Self-Supervised Learning with Weak Augmentation"; Advances in Neural Information Processing Systems 34 (NeurIPS 2021); 2021-07-20; pp. 1-13
- Tian Songwang, et al.; "Self-supervised fusion method for multi-band images based on multiple discriminators" (基于多判别器的多波段图像自监督融合方法); Computer Science (计算机科学); 2021-04-21; pp. 185-190
Also Published As
- CN113762288A — published 2021-12-07
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110119780B (en) | Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network | |
CN112001960B (en) | Monocular image depth estimation method based on multi-scale residual error pyramid attention network model | |
CN113469094A (en) | Multi-mode remote sensing data depth fusion-based earth surface coverage classification method | |
CN110211045A (en) | Super-resolution face image method based on SRGAN network | |
CN109214989A (en) | Single image super resolution ratio reconstruction method based on Orientation Features prediction priori | |
CN115205672A (en) | Remote sensing building semantic segmentation method and system based on multi-scale regional attention | |
CN112686830B (en) | Super-resolution method of single depth map based on image decomposition | |
CN114037699B (en) | Pathological image classification method, equipment, system and storage medium | |
CN113256494B (en) | Text image super-resolution method | |
CN115147426B (en) | Model training and image segmentation method and system based on semi-supervised learning | |
CN111833261A (en) | Image super-resolution restoration method for generating countermeasure network based on attention | |
CN106097253A (en) | A kind of based on block rotation and the single image super resolution ratio reconstruction method of definition | |
CN113192076A (en) | MRI brain tumor image segmentation method combining classification prediction and multi-scale feature extraction | |
CN117830788A (en) | Image target detection method for multi-source information fusion | |
CN110633706B (en) | Semantic segmentation method based on pyramid network | |
CN113554655B (en) | Optical remote sensing image segmentation method and device based on multi-feature enhancement | |
CN113762288B (en) | Multispectral image fusion method based on interactive feature embedding | |
CN116188882A (en) | Point cloud up-sampling method and system integrating self-attention and multipath path diagram convolution | |
Shao et al. | SRWGANTV: image super-resolution through wasserstein generative adversarial networks with total variational regularization | |
CN116844039A (en) | Multi-attention-combined trans-scale remote sensing image cultivated land extraction method | |
CN115035377A (en) | Significance detection network system based on double-stream coding and interactive decoding | |
Cao et al. | Understanding 3D point cloud deep neural networks by visualization techniques | |
CN113971686A (en) | Target tracking method based on background restoration and capsule network | |
CN111899284A (en) | Plane target tracking method based on parameterized ESM network | |
Xiao et al. | Multi-Scale Non-Local Sparse Attention for Single Image Super-Resolution |
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- CB03: Change of inventor or designer information — Inventors after: Zhao Fan, Zhao Wenda, Wu Xue, Liu Yu, Zhang Yiming; Inventors before: Zhao Fan, Zhao Wenda, Wu Xue
- GR01: Patent grant