CN113762288A - Multispectral image fusion method based on interactive feature embedding - Google Patents

Multispectral image fusion method based on interactive feature embedding

Info

Publication number
CN113762288A
CN113762288A (application CN202111106858.2A)
Authority
CN
China
Prior art keywords
fusion
image
feature
convolution
source image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111106858.2A
Other languages
Chinese (zh)
Other versions
CN113762288B (en)
Inventor
Zhao Fan
Zhao Wenda
Wu Xue
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Normal University
Original Assignee
Liaoning Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Normal University filed Critical Liaoning Normal University
Priority to CN202111106858.2A priority Critical patent/CN113762288B/en
Publication of CN113762288A publication Critical patent/CN113762288A/en
Application granted granted Critical
Publication of CN113762288B publication Critical patent/CN113762288B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The invention provides a multispectral image fusion method based on interactive feature embedding, which belongs to the field of computer vision and comprises the following steps: collecting multispectral image pairs and preprocessing them, including height and width adjustment, sliding-window patch extraction and image augmentation, to build a network training dataset; designing an interactive feature-embedding multispectral image fusion network based on self-supervised learning; designing a loss function to supervise training of the network model; at test time, inputting a multispectral image pair and outputting the final fused image through the network. The invention effectively improves the feature extraction capability of the network and helps retain important information in the fusion result.

Description

Multispectral image fusion method based on interactive feature embedding
Technical Field
The invention belongs to the field of computer vision, and particularly relates to multispectral image fusion based on interactive feature embedding.
Background
Multispectral image fusion integrates image features of the same scene captured by multispectral detectors, so as to describe scene information more comprehensively and accurately. Multispectral image fusion is a branch of the image fusion task and has wide application in many areas, such as scene monitoring [1], target recognition, geological exploration and military use.
Deep learning techniques play an important role in image fusion. Existing deep-learning-based image fusion methods fall mainly into two types: fusion methods based on adversarial networks and fusion methods based on non-adversarial networks. Adversarial fusion methods aim to fuse the main features of the source images through a loss function designed for the adversarial training process. However, this type of method has the following limitations: the network is difficult to optimize, and it is difficult to design a loss function that captures all the important information of the source images. In non-adversarial fusion methods, feature extraction is usually realized in an unsupervised manner, and its quality is hard to guarantee. Therefore, whether with adversarial learning driven by loss-function design or with unsupervised learning, ignoring any important information in the source images (such as gradient, edge, texture, intensity and contrast) causes important features to be lost from the fusion result.
Therefore, the feature extraction capability of the network plays a key role in multi-source image fusion. To improve this capability, the invention provides an interactive feature-embedding multispectral image fusion network based on self-supervised learning, which breaks through the technical bottleneck of comprehensively extracting source image features in existing fusion networks and is significant for promoting deeper application of multispectral images in other fields.
Disclosure of Invention
The invention aims to improve the network feature extraction capability and provides a multispectral image fusion method based on interactive feature embedding.
The technical scheme of the invention is as follows:
a multispectral image fusion method based on interactive feature embedding comprises the following steps:
Step one: making a multispectral image fusion dataset
1) Acquiring a multispectral image dataset consisting of a source image I1 and a source image I2;
2) Adjusting the multispectral source images I1, I2 from step 1) to a consistent height and width;
3) Sliding a window of fixed size and step length from left to right and top to bottom over the same-sized source images I1, I2 from step 2) to obtain image patches;
4) Flipping and mirroring the image pairs obtained in step 3) to enlarge the training dataset;
Step two: designing a self-supervised interactive feature-embedding multispectral image fusion network to realize multispectral image fusion
1) Designing a self-supervised feature extraction module comprising two branches with identical structure; each branch consists of several convolution layers whose kernel parameters are 3 × 3 × f, where f is the number of convolution kernels; the hierarchical features extracted by the convolution layers are denoted F'm, F″m, where m denotes the m-th layer and ranges over {1, 2, ..., M}; the two branches take as input the source images I1, I2 of width W and height H and output the source image reconstruction results Î1, Î2;
the loss function L1 of this module is expressed as:
L1 = Σ_{n=1}^{2} MSE(In, În) (1)
where MSE denotes the mean square error, In stands for the source images I1, I2, and În for the corresponding reconstruction results Î1 and Î2;
2) Designing an interactive feature embedding module composed of several convolution layers whose kernel parameters are 3 × 3 × f, where f is the number of convolution kernels; the hierarchical features extracted by the convolution layers are denoted Fm; the first-layer feature is obtained by convolution from the source images I1, I2, while the hierarchical features Fm of the second to M-th layers are obtained by convolution from the hierarchical features F'm, F″m extracted by the self-supervised feature extraction module, expressed as:
Fm = C2(Cat(Fm-1, C4(Cat(F'm, F″m)))), M ≥ m ≥ 2 (2)
where C2 denotes 2 convolution operations, C4 denotes 4 convolution operations, and Cat denotes the concat operation; it can be observed from the above formula that the intermediate-layer features Fm are derived from the hierarchical features F'm, F″m extracted by the self-supervised feature extraction module, which ensures that Fm shares low-, mid- and high-level features with F'm, F″m and thus serves the fusion task;
on the other hand, the hierarchical features F'm, F″m extracted by the self-supervised feature extraction module are in turn derived from the hierarchical features Fm, obtained from Fm after a convolution operation, expressed as:
F'm, F″m = C(Fm), M ≥ m ≥ 1 (3)
given that the features F'm, F″m used to reconstruct the source images come from Fm, this also ensures that Fm contains the main features of the source images and thus serves the fusion task;
3) Outputting the fusion result; the fusion result If is obtained by weighting the source images with the weight map W finally output by the interactive feature embedding module:
If = I1 * W + I2 * (1 - W) (4)
where W is a weight map obtained from FM by convolution operations:
W = C4(FM) (5)
where C4 represents four convolution operations;
Step three: network training, where the network training process is the process of optimizing a loss function; the loss function of the proposed self-supervised interactive feature-embedding multispectral image fusion network consists of two parts: the self-supervised training loss L1 and the fusion loss Lf; network training is the process of minimizing the loss function L,
L = L1 + Lf (6)
where Lf is an SSIM-based loss function;
Step four: testing stage; inputting two multispectral images I1, I2 of width W and height H, the network outputs the corresponding reconstruction results Î1, Î2 and the final fusion result If.
The beneficial effects of the invention are as follows: the invention provides a self-supervised-learning multispectral image fusion method that effectively improves the network's feature extraction capability through a self-supervision mechanism. The proposed interactive feature embedding structure serves as a bridge connecting the image fusion and reconstruction tasks, gradually embedding the key information acquired by self-supervised learning into the fusion task and ultimately improving fusion performance.
Drawings
FIG. 1 is a schematic diagram of the basic structure of the process of the present invention.
Fig. 2 is a schematic diagram of the fusion result of the present embodiment.
Detailed Description
The specific embodiment of the multispectral image fusion method based on interactive feature embedding is explained in detail as follows:
Step one: making the multispectral image fusion dataset, specifically comprising the following steps (a minimal data-preparation sketch follows this list):
1) Acquiring a multispectral image dataset consisting of a source image I1 and a source image I2;
2) Adjusting the multispectral source images I1, I2 from step 1) to a consistent height and width;
3) Sliding a window of fixed size and step length from left to right and top to bottom over the same-sized source images I1, I2 from step 2) to obtain image patches;
4) Flipping and mirroring the image pairs obtained in step 3) to enlarge the training dataset;
Step two: as shown in Fig. 1, designing the self-supervised interactive feature-embedding multispectral image fusion network to realize multispectral image fusion, specifically comprising:
1) Designing the self-supervised feature extraction module. As shown in Fig. 1, the module comprises two branches with identical structure. In this embodiment, each branch consists of M (M = 3) convolution layers whose kernel parameters are 3 × 3 × f (f being the number of convolution kernels). The first layer has 64 convolution kernels, the second 128 and the third 256. The hierarchical features extracted by the convolution layers are denoted F'm, F″m (m denotes the m-th layer, ranging over {1, 2, 3}). The two branches take as input the source images I1, I2 of width W and height H and output the source image reconstruction results Î1, Î2. The loss function L1 of the module is expressed as:
L1 = Σ_{n=1}^{2} MSE(In, În) (1)
where MSE denotes the mean square error, In stands for the source images I1, I2, and În for the corresponding reconstruction results Î1 and Î2. A sketch of one branch and of this loss is given below.
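The following PyTorch sketch shows one extraction branch and the loss of formula (1); the patent does not spell out how the reconstruction head is built, so the final single-channel convolution and the ReLU activations are assumptions.

import torch.nn as nn

class ExtractionBranch(nn.Module):
    def __init__(self):
        super().__init__()
        chans = [1, 64, 128, 256]  # grayscale input, then the stated layer widths
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Conv2d(chans[m], chans[m + 1], 3, padding=1), nn.ReLU())
            for m in range(3)
        )
        self.recon = nn.Conv2d(256, 1, 3, padding=1)  # assumed reconstruction head

    def forward(self, x):
        feats = []  # hierarchical features F'_1 ... F'_3 for the embedding module
        for layer in self.layers:
            x = layer(x)
            feats.append(x)
        return self.recon(x), feats

def loss_l1(i1, i2, r1, r2):
    # self-supervision loss of formula (1): summed MSE reconstruction errors
    mse = nn.MSELoss()
    return mse(r1, i1) + mse(r2, i2)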
2) Designing the interactive feature embedding module. As shown in Fig. 1, in this embodiment the module consists of M+1 (M = 3) convolution layers whose kernel parameters are 3 × 3 × f (f being the number of convolution kernels). The first layer has 64 convolution kernels, the second 128, the third 256 and the fourth 1. The hierarchical features extracted by the convolution layers are denoted Fm. The first-layer feature F1 is obtained by convolution from the source images I1, I2, while the hierarchical features Fm of the second to M-th layers are obtained by convolution from the hierarchical features F'm, F″m extracted by the self-supervised feature extraction module, expressed as:
Fm = C2(Cat(Fm-1, C4(Cat(F'm, F″m)))), M ≥ m ≥ 2 (2)
where C2 denotes 2 convolution operations, C4 denotes 4 convolution operations, and Cat denotes the concat operation. It can be observed from the above formula that the intermediate-layer features Fm are derived from the hierarchical features F'm, F″m extracted by the self-supervised feature extraction module, which ensures that Fm can share low-, mid- and high-level features with F'm, F″m to serve the fusion task.
On the other hand, the hierarchical features F'm, F″m extracted by the self-supervised feature extraction module are in turn derived from the hierarchical features Fm, obtained from Fm after a convolution operation, expressed as:
F'm, F″m = C(Fm), M ≥ m ≥ 1 (3)
Given that the features F'm, F″m used to reconstruct the source images come from Fm, this also ensures that Fm contains the main features of the source images, again serving the fusion task. The interactive feature embedding mechanism can thus make full use of the self-supervision mechanism and prevent important features from being lost in the fusion result. A sketch of this embedding step is given below.
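A hedged PyTorch sketch of the embedding step of formula (2) follows; the exact arrangement of the C2/C4 stacks around the two Cat operations is reconstructed from the text, so the wiring below should be read as an assumption rather than the literal patented topology.

import torch
import torch.nn as nn

def conv_stack(n, cin, cout):
    # n successive 3x3 conv layers; the first changes the channel count
    layers = [nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU()]
    for _ in range(n - 1):
        layers += [nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU()]
    return nn.Sequential(*layers)

class EmbeddingLayer(nn.Module):
    # computes F_m from F_{m-1} and the branch features F'_m, F''_m
    def __init__(self, c_prev, c_branch, c_out):
        super().__init__()
        self.c4 = conv_stack(4, 2 * c_branch, c_out)    # C4 over Cat(F'_m, F''_m)
        self.c2 = conv_stack(2, c_prev + c_out, c_out)  # C2 over Cat(F_{m-1}, ...)

    def forward(self, f_prev, f1, f2):
        branch = self.c4(torch.cat([f1, f2], dim=1))
        return self.c2(torch.cat([f_prev, branch], dim=1))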
3) Outputting the fusion result. As shown in Fig. 1, the fusion result If is obtained by weighting the source images with the weight map W finally output by the interactive feature embedding module:
If = I1 * W + I2 * (1 - W) (4)
where W is a weight map obtained from FM by convolution operations:
W = C4(FM) (5)
where C4 represents four convolution operations. A sketch of this fusion head is given below.
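The following PyTorch sketch covers formulas (4)-(5); the embodiment only fixes four layers ending in a single kernel, so the intermediate channel widths and the sigmoid keeping W inside [0, 1] are assumptions.

import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, c_in=256):
        super().__init__()
        # four convolutions ending in a single kernel, per the embodiment;
        # the intermediate widths (128, 64, 64) are assumptions
        self.c4 = nn.Sequential(
            nn.Conv2d(c_in, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, f_m, i1, i2):
        w = torch.sigmoid(self.c4(f_m))  # weight map W = C4(F_M), formula (5)
        return i1 * w + i2 * (1 - w)     # If = I1*W + I2*(1-W), formula (4)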
Step three: and (5) network training. The network training process is a process that optimizes a loss function. The interactive feature embedded multispectral image fusion network loss function provided by the invention consists of two parts: loss of self-supervised training, i.e. L1(shown in formula 1); loss of fusion, i.e. Lf. Network training is the process of minimizing the loss function L,
L=L1+Lf (6)
in particular, LfIs a loss function based on SSIM.
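A sketch of the combined objective of formula (6) follows; the patent names no SSIM implementation, so the third-party pytorch_msssim package stands in here, and the exact form of Lf (averaged over both sources) is an assumption.

import torch.nn as nn
from pytorch_msssim import ssim  # pip install pytorch-msssim

def total_loss(i1, i2, r1, r2, fused):
    mse = nn.MSELoss()
    l1 = mse(r1, i1) + mse(r2, i2)  # self-supervised loss L1, formula (1)
    # SSIM-based fusion loss Lf; penalizing both sources is an assumption
    lf = 1 - 0.5 * (ssim(fused, i1, data_range=1.0) + ssim(fused, i2, data_range=1.0))
    return l1 + lf  # L = L1 + Lf, formula (6)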
The parameters in the network training process are set as follows (see the optimizer sketch below):
base_lr: 1e-4 (learning rate)
momentum: 0.9 (momentum)
weight_decay: 5e-3 (weight decay)
batch_size: 1 (batch size)
solver_mode: GPU (training using the GPU)
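These Caffe-style names (base_lr, solver_mode) suggest the original training used Caffe; the following PyTorch equivalent is a sketch under that assumption.

import torch

def make_optimizer(model):
    # `model` stands for the assembled fusion network (hypothetical wrapper)
    return torch.optim.SGD(
        model.parameters(),
        lr=1e-4,            # base_lr
        momentum=0.9,       # momentum
        weight_decay=5e-3,  # weight_decay
    )
# training then iterates with batch_size = 1 on the GPU (solver_mode: GPU)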
Step four: and (5) a testing stage. Inputting two multispectral images I with width W and height H1、I2The model of the invention outputs its corresponding reconstructed result
Figure BDA0003272662360000051
And final fusion result If. As shown in fig. 2, compared to other fusion methodsThe fusion result obtained by the method can better retain the main characteristics in the source image, including the brightness characteristic and the texture characteristic.
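A minimal test-stage usage sketch, assuming a hypothetical wrapper `fusion_net` that ties the sketched modules together and returns both reconstructions and the fused image.

import torch

# `fusion_net` is a hypothetical module combining the components sketched above
fusion_net = fusion_net.cuda().eval()
with torch.no_grad():
    i1 = torch.rand(1, 1, 256, 256, device="cuda")  # stand-ins for a real W x H pair
    i2 = torch.rand(1, 1, 256, 256, device="cuda")
    r1, r2, fused = fusion_net(i1, i2)  # reconstructions Î1, Î2 and fusion If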

Claims (1)

1. A multispectral image fusion method based on interactive feature embedding is characterized by comprising the following steps:
Step one: making a multispectral image fusion dataset
1) Acquiring a multispectral image dataset consisting of a source image I1 and a source image I2;
2) Adjusting the multispectral source images I1, I2 from step 1) to a consistent height and width;
3) Sliding a window of fixed size and step length from left to right and top to bottom over the same-sized source images I1, I2 from step 2) to obtain image patches;
4) Flipping and mirroring the image pairs obtained in step 3) to enlarge the training dataset;
Step two: designing a self-supervised interactive feature-embedding multispectral image fusion network to realize multispectral image fusion
1) Designing a self-supervised feature extraction module comprising two branches with identical structure; each branch consists of several convolution layers whose kernel parameters are 3 × 3 × f, where f is the number of convolution kernels; the hierarchical features extracted by the convolution layers are denoted F'm, F″m, where m denotes the m-th layer and ranges over {1, 2, ..., M}; the two branches take as input the source images I1, I2 of width W and height H and output the source image reconstruction results Î1, Î2;
the loss function L1 of this module is expressed as:
L1 = Σ_{n=1}^{2} MSE(In, În) (1)
where MSE denotes the mean square error, In stands for the source images I1, I2, and În for the corresponding reconstruction results Î1 and Î2;
2) Designing an interactive feature embedding module composed of several convolution layers whose kernel parameters are 3 × 3 × f, where f is the number of convolution kernels; the hierarchical features extracted by the convolution layers are denoted Fm; the first-layer feature is obtained by convolution from the source images I1, I2, while the hierarchical features Fm of the second to M-th layers are obtained by convolution from the hierarchical features F'm, F″m extracted by the self-supervised feature extraction module, expressed as:
Fm = C2(Cat(Fm-1, C4(Cat(F'm, F″m)))), M ≥ m ≥ 2 (2)
where C2 denotes 2 convolution operations, C4 denotes 4 convolution operations, and Cat denotes the concat operation; it can be observed from the above formula that the intermediate-layer features Fm are derived from the hierarchical features F'm, F″m extracted by the self-supervised feature extraction module, which ensures that Fm shares low-, mid- and high-level features with F'm, F″m and thus serves the fusion task;
on the other hand, the hierarchical features F'm, F″m extracted by the self-supervised feature extraction module are in turn derived from the hierarchical features Fm, obtained from Fm after a convolution operation, expressed as:
F'm, F″m = C(Fm), M ≥ m ≥ 1 (3)
given that the features F'm, F″m used to reconstruct the source images come from Fm, this also ensures that Fm contains the main features of the source images and thus serves the fusion task;
3) Outputting the fusion result; the fusion result If is obtained by weighting the source images with the weight map W finally output by the interactive feature embedding module:
If = I1 * W + I2 * (1 - W) (4)
where W is a weight map obtained from FM by convolution operations:
W = C4(FM) (5)
where C4 represents four convolution operations;
Step three: network training, where the network training process is the process of optimizing a loss function; the loss function of the proposed self-supervised interactive feature-embedding multispectral image fusion network consists of two parts: the self-supervised training loss L1 and the fusion loss Lf; network training is the process of minimizing the loss function L,
L = L1 + Lf (6)
where Lf is an SSIM-based loss function;
Step four: testing stage; inputting two multispectral images I1, I2 of width W and height H, the network outputs the corresponding reconstruction results Î1, Î2 and the final fusion result If.
CN202111106858.2A 2021-09-22 2021-09-22 Multispectral image fusion method based on interactive feature embedding Active CN113762288B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111106858.2A CN113762288B (en) 2021-09-22 2021-09-22 Multispectral image fusion method based on interactive feature embedding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111106858.2A CN113762288B (en) 2021-09-22 2021-09-22 Multispectral image fusion method based on interactive feature embedding

Publications (2)

Publication Number Publication Date
CN113762288A true CN113762288A (en) 2021-12-07
CN113762288B CN113762288B (en) 2022-11-29

Family

ID=78796650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111106858.2A Active CN113762288B (en) 2021-09-22 2021-09-22 Multispectral image fusion method based on interactive feature embedding

Country Status (1)

Country Link
CN (1) CN113762288B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886345A (en) * 2019-02-27 2019-06-14 清华大学 Self-supervisory learning model training method and device based on relation inference
US20210027417A1 (en) * 2019-07-22 2021-01-28 Raytheon Company Machine learned registration and multi-modal regression
KR20210112869A (en) * 2020-03-06 2021-09-15 세종대학교산학협력단 Single-shot adaptive fusion method and apparatus for robust multispectral object detection
CN112465733A (en) * 2020-08-31 2021-03-09 长沙理工大学 Remote sensing image fusion method, device, medium and equipment based on semi-supervised learning
CN113095249A (en) * 2021-04-19 2021-07-09 大连理工大学 Robust multi-mode remote sensing image target detection method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Mingkai Zheng, et al.: "ReSSL: Relational Self-Supervised Learning with Weak Augmentation", Advances in Neural Information Processing Systems 34 (NeurIPS 2021) *
Tian Songwang, et al.: "Self-supervised fusion method for multi-band images based on multiple discriminators", Computer Science *

Also Published As

Publication number Publication date
CN113762288B (en) 2022-11-29

Similar Documents

Publication Publication Date Title
CN110210551B (en) Visual target tracking method based on adaptive subject sensitivity
CN110119780B (en) Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network
CN105069746B (en) Video real-time face replacement method and its system based on local affine invariant and color transfer technology
CN111539887B (en) Channel attention mechanism and layered learning neural network image defogging method based on mixed convolution
CN106097253B (en) A kind of single image super resolution ratio reconstruction method based on block rotation and clarity
CN114548265B (en) Crop leaf disease image generation model training method, crop leaf disease identification method, electronic equipment and storage medium
CN110782427B (en) Magnetic resonance brain tumor automatic segmentation method based on separable cavity convolution
CN113870124B (en) Weak supervision-based double-network mutual excitation learning shadow removing method
CN113256494B (en) Text image super-resolution method
CN117274760A (en) Infrared and visible light image fusion method based on multi-scale mixed converter
CN112686830B (en) Super-resolution method of single depth map based on image decomposition
CN115205672A (en) Remote sensing building semantic segmentation method and system based on multi-scale regional attention
CN112288645A (en) Skull face restoration model construction method, restoration method and restoration system
CN116029902A (en) Knowledge distillation-based unsupervised real world image super-resolution method
CN113192076A (en) MRI brain tumor image segmentation method combining classification prediction and multi-scale feature extraction
CN115205692A (en) Typical feature intelligent identification and extraction method based on generation of countermeasure network
CN110751271B (en) Image traceability feature characterization method based on deep neural network
CN114037699B (en) Pathological image classification method, equipment, system and storage medium
CN110633706B (en) Semantic segmentation method based on pyramid network
CN113706407B (en) Infrared and visible light image fusion method based on separation characterization
CN117726872A (en) Lung CT image classification method based on multi-view multi-task feature learning
CN109584194B (en) Hyperspectral image fusion method based on convolution variation probability model
CN113762288B (en) Multispectral image fusion method based on interactive feature embedding
Shao et al. SRWGANTV: Image super-resolution through Wasserstein generative adversarial networks with total variational regularization
CN115035377A (en) Significance detection network system based on double-stream coding and interactive decoding

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhao Fan

Inventor after: Zhao Wenda

Inventor after: Wu Xue

Inventor after: Liu Yu

Inventor after: Zhang Yiming

Inventor before: Zhao Fan

Inventor before: Zhao Wenda

Inventor before: Wu Xue

CB03 Change of inventor or designer information
GR01 Patent grant
GR01 Patent grant