CN111223069A - Image fusion method and system - Google Patents

Image fusion method and system

Info

Publication number
CN111223069A
Authority
CN
China
Prior art keywords
image
fluorescence
visible light
base layer
weight map
Prior art date
Legal status
Granted
Application number
CN202010036038.XA
Other languages
Chinese (zh)
Other versions
CN111223069B (en)
Inventor
王慧泉
毛润
姜泊
牛萍娟
Current Assignee
Tianjin Polytechnic University
Original Assignee
Tianjin Polytechnic University
Priority date: 2020-01-14; Filing date: 2020-01-14; Publication date: 2020-06-02
2020-01-14 Application filed by Tianjin Polytechnic University
2020-01-14 Priority to CN202010036038.XA
2020-06-02 Publication of CN111223069A
2023-06-02 Application granted; publication of CN111223069B
Status: Active

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 5/00 Image enhancement or restoration
            • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
            • G06T 5/20 Image enhancement or restoration by the use of local operators
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10064 Fluorescence image
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20024 Filtering details
                • G06T 2207/20032 Median filtering
              • G06T 2207/20172 Image enhancement details
                • G06T 2207/20192 Edge enhancement; Edge preservation
              • G06T 2207/20212 Image combination
                • G06T 2207/20221 Image fusion; Image merging
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 10/00 Arrangements for image or video recognition or understanding
            • G06V 10/40 Extraction of image or video features
              • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
                • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)

Abstract

The invention relates to an image fusion method and system. The method comprises the following steps: acquiring a source image, where the source image comprises a fluorescence image and a visible light image; performing two-scale decomposition on the source image by a Gaussian filtering method to obtain a base layer image and a detail layer image of the source image; constructing a first weight map that highlights fluorescence information using a nonlinear function; fusing the base layer image corresponding to the fluorescence image with the base layer image corresponding to the visible light image to obtain the base layer image of a fused image; constructing a second weight map that enhances fluorescence information based on saliency detection; fusing the detail layer image corresponding to the fluorescence image with the detail layer image corresponding to the visible light image to obtain the detail layer image of the fused image; and reconstructing the fused image from its base layer image and detail layer image. The invention reduces the complexity of multi-scale algorithms and improves the efficiency of image fusion.

Description

Image fusion method and system
Technical Field
The invention relates to the technical field of image processing, in particular to an image fusion method and system.
Background
Image fusion technology integrates two or more images of the same scene captured with different spectral and spatial detail. Toet et al. showed that, in perceptual evaluations of different image fusion schemes, infrared images give the best target detection and recognition performance while visible light images give the best global scene perception, so fusing the complementary information of visible light and infrared images into a new image provides richer information. Image fusion therefore has important applications in target recognition, multi-source information mining, medical imaging, map matching, and other research fields.
Because multi-scale geometric analysis matches the visual characteristics of humans and the resulting fused images have good visual quality, most mainstream fusion algorithms are based on multi-scale geometric analysis, such as pyramid fusion algorithms and fusion algorithms based on the discrete wavelet, curvelet, and contourlet transforms. However, traditional multi-scale image fusion methods operate in the frequency domain, their computational complexity is high, and they cannot meet the real-time requirements of a system. Two-scale spatial-domain image fusion improves fusion efficiency, but existing two-scale methods tend to introduce noise when processing images.
Disclosure of Invention
The invention aims to provide an image fusion method and an image fusion system, which are used for reducing the complexity of a multi-scale algorithm and improving the efficiency of image fusion.
In order to achieve the purpose, the invention provides the following scheme:
an image fusion method, comprising:
acquiring a source image; the source image comprises a fluorescence image and a visible light image to be fused;
performing two-scale decomposition on the source image by adopting a Gaussian filtering method to obtain a base layer image and a detail layer image of the source image; the base layer image of the source image comprises a base layer image corresponding to the fluorescent image and a base layer image corresponding to the visible light image, and the detail layer image of the source image comprises a detail layer image corresponding to the fluorescent image and a detail layer image corresponding to the visible light image;
constructing a first weight map that highlights fluorescence information using a nonlinear function;
fusing the base layer image corresponding to the fluorescence image and the base layer image corresponding to the visible light image according to the first weight map to obtain a base layer image of a fused image;
constructing a second weight map that enhances fluorescence information based on saliency detection; the second weight map comprises a final weight map of the fluorescence image and a final weight map of the visible light image;
fusing the detail layer image corresponding to the fluorescence image and the detail layer image corresponding to the visible light image according to the second weight map to obtain a detail layer image of a fused image;
and reconstructing the base layer image of the fused image and the detail layer image of the fused image to obtain the fused image.
Optionally, the performing two-scale decomposition on the source image by using a Gaussian filtering method to obtain a base layer image and a detail layer image of the source image specifically includes:

using the formulas

B_N = I_N * G(r, σ), D_N = I_N − B_N

to perform two-scale decomposition on the fluorescence image to obtain the base layer image and the detail layer image corresponding to the fluorescence image, where I_N is the fluorescence image, B_N is the base layer image corresponding to the fluorescence image, D_N is the detail layer image corresponding to the fluorescence image, G(r, σ) is a Gaussian filter, r is the size of the filter window, and σ is the standard deviation;

using the formulas

B_V = I_V * G(r, σ), D_V = I_V − B_V

to perform two-scale decomposition on the visible light image to obtain the base layer image and the detail layer image corresponding to the visible light image, where I_V is the visible light image, B_V is the base layer image corresponding to the visible light image, and D_V is the detail layer image corresponding to the visible light image.
Optionally, the constructing a first weight map that highlights fluorescence information by using a nonlinear function specifically includes:

using a formula in |B_N(x, y)| and |B_V(x, y)| (rendered only as an image in the original document) to identify the target feature information in the base layer image corresponding to the fluorescence image and obtain the fluorescence information feature image R, where R(x, y) is the pixel value at position (x, y) in the fluorescence information feature image, |B_N(x, y)| is the pixel value at position (x, y) in the base layer image corresponding to the fluorescence image, and |B_V(x, y)| is the pixel value at position (x, y) in the base layer image corresponding to the visible light image;

using the formula

P(x, y) = (R(x, y) − R_min) / (R_max − R_min)

to normalize the fluorescence information feature image and obtain the enhancement coefficient matrix P, where P(x, y) is the enhancement coefficient value at (x, y) in the enhancement coefficient matrix and R_min and R_max are the minimum and maximum values of R;

using a nonlinear function via the formula W_B = G(r, σ) * S_λ(P) to adjust the enhancement coefficient matrix and obtain the first weight map W_B, where G(r, σ) is a Gaussian filter and S_λ is a nonlinear function (its closed form is rendered only as an image in the original) whose argument is x and whose enhancement coefficient is λ.
Optionally, the fusing the base layer image corresponding to the fluorescent image and the base layer image corresponding to the visible light image according to the first weight map to obtain the base layer image of the fused image specifically includes:
using the formula B_F = B_N·W_B + B_V·(1 − W_B) to fuse the base layer image corresponding to the fluorescence image with the base layer image corresponding to the visible light image and obtain the base layer image of the fused image; where B_N is the base layer image corresponding to the fluorescence image, B_V is the base layer image corresponding to the visible light image, W_B is the first weight map, and B_F is the base layer image of the fused image.
Optionally, the constructing a second weight map that enhances fluorescence information based on saliency detection specifically includes:

using a median filter and a mean filter via a formula (rendered only as an image in the original) to construct the visual saliency features of the fluorescence image and the visible light image, where H_N is the visual saliency feature of the fluorescence image, H_V is the visual saliency feature of the visible light image, I_N is the fluorescence image, I_V is the visible light image, MF is a median filter, and AF is a mean filter;

using the formulas

W_N = H_N / (H_N + H_V), W_V = H_V / (H_N + H_V)

to normalize the visual saliency feature of the fluorescence image and the visual saliency feature of the visible light image and obtain the initial weight maps, where W_N is the initial weight map of the fluorescence image and W_V is the initial weight map of the visible light image;

using, according to the enhancement coefficient matrix and the initial weight maps, a formula in W_N, W_V, K, and P (rendered only as an image in the original) to construct the second weight map, consisting of the final weight map of the fluorescence image (denoted here W_N') and the final weight map of the visible light image (denoted here W_V'), where K is the fluorescence information enhancement coefficient and P is the enhancement coefficient matrix.
Optionally, the fusing the detail layer image corresponding to the fluorescence image and the detail layer image corresponding to the visible light image according to the second weight map to obtain the detail layer image of the fused image specifically includes:

using the formula

D_F = W_N'·D_N + W_V'·D_V

to fuse the detail layer image corresponding to the fluorescence image with the detail layer image corresponding to the visible light image and obtain the detail layer image D_F of the fused image, where D_N is the detail layer image corresponding to the fluorescence image, D_V is the detail layer image corresponding to the visible light image, W_N' is the final weight map of the fluorescence image, and W_V' is the final weight map of the visible light image.
The present invention also provides an image fusion system, comprising:
the source image acquisition module is used for acquiring a source image; the source image comprises a fluorescence image and a visible light image to be fused;
the two-scale decomposition module is used for carrying out two-scale decomposition on the source image by adopting a Gaussian filtering method to obtain a base layer image and a detail layer image of the source image; the base layer image of the source image comprises a base layer image corresponding to the fluorescent image and a base layer image corresponding to the visible light image, and the detail layer image of the source image comprises a detail layer image corresponding to the fluorescent image and a detail layer image corresponding to the visible light image;
a first weight map construction module for constructing a first weight map that highlights fluorescence information using a nonlinear function;
the base layer image fusion module is used for fusing the base layer image corresponding to the fluorescent image and the base layer image corresponding to the visible light image according to the first weight map to obtain a base layer image of a fused image;
a second weight map construction module for constructing a second weight map that enhances fluorescence information based on saliency detection; the second weight map comprises a final weight map of the fluorescence image and a final weight map of the visible light image;
the detail layer image fusion module is used for fusing the detail layer image corresponding to the fluorescence image and the detail layer image corresponding to the visible light image according to the second weight map to obtain a detail layer image of a fused image;
and the reconstruction module is used for reconstructing the base layer image of the fused image and the detail layer image of the fused image to obtain the fused image.
Optionally, the two-scale decomposition module specifically includes:
a fluorescence image two-scale decomposition unit for using the formulas

B_N = I_N * G(r, σ), D_N = I_N − B_N

to perform two-scale decomposition on the fluorescence image to obtain the base layer image and the detail layer image corresponding to the fluorescence image, where I_N is the fluorescence image, B_N is the base layer image corresponding to the fluorescence image, D_N is the detail layer image corresponding to the fluorescence image, G(r, σ) is a Gaussian filter, r is the size of the filter window, and σ is the standard deviation;

a visible light image two-scale decomposition unit for using the formulas

B_V = I_V * G(r, σ), D_V = I_V − B_V

to perform two-scale decomposition on the visible light image to obtain the base layer image and the detail layer image corresponding to the visible light image, where I_V is the visible light image, B_V is the base layer image corresponding to the visible light image, and D_V is the detail layer image corresponding to the visible light image.
Optionally, the first weight map building module specifically includes:
a target characteristic information identification unit for using a formula in |B_N(x, y)| and |B_V(x, y)| (rendered only as an image in the original) to identify the target feature information in the base layer image corresponding to the fluorescence image and obtain the fluorescence information feature image R, where R(x, y) is the pixel value at position (x, y) in the fluorescence information feature image, |B_N(x, y)| is the pixel value at position (x, y) in the base layer image corresponding to the fluorescence image, and |B_V(x, y)| is the pixel value at position (x, y) in the base layer image corresponding to the visible light image;

a first normalization unit for using the formula

P(x, y) = (R(x, y) − R_min) / (R_max − R_min)

to normalize the fluorescence information feature image and obtain the enhancement coefficient matrix P, where P(x, y) is the enhancement coefficient value at (x, y) in the enhancement coefficient matrix;

a nonlinear adjustment unit for using a nonlinear function via the formula W_B = G(r, σ) * S_λ(P) to adjust the enhancement coefficient matrix and obtain the first weight map W_B, where G(r, σ) is a Gaussian filter and S_λ is a nonlinear function (rendered only as an image in the original) whose argument is x and whose enhancement coefficient is λ.
Optionally, the second weight map building module specifically includes:
a visual saliency feature construction unit for using a median filter and a mean filter via a formula (rendered only as an image in the original) to construct the visual saliency features of the fluorescence image and the visible light image, where H_N is the visual saliency feature of the fluorescence image, H_V is the visual saliency feature of the visible light image, I_N is the fluorescence image, I_V is the visible light image, MF is a median filter, and AF is a mean filter;

a second normalization unit for using the formulas

W_N = H_N / (H_N + H_V), W_V = H_V / (H_N + H_V)

to normalize the visual saliency feature of the fluorescence image and the visual saliency feature of the visible light image and obtain the initial weight maps, where W_N is the initial weight map of the fluorescence image and W_V is the initial weight map of the visible light image;

a final weight map construction unit for using, according to the enhancement coefficient matrix and the initial weight maps, a formula in W_N, W_V, K, and P (rendered only as an image in the original) to construct the second weight map, consisting of the final weight map W_N' of the fluorescence image and the final weight map W_V' of the visible light image, where K is the fluorescence information enhancement coefficient and P is the enhancement coefficient matrix.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the method can effectively retain the significance information of the source image, meanwhile highlight the detail information of the fluorescence image, and realize the accurate positioning and detail enhancement of the target object to be detected, thereby providing more comfortable visual effect. The peak signal-to-noise ratio, the mutual information, the edge holding capacity and the visual information holding degree index are obviously improved, the fusion complexity is low, and the fusion efficiency is high.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of an image fusion method according to the present invention;
FIG. 2 is a schematic diagram of an image fusion system according to the present invention;
FIG. 3 is a flow chart illustrating an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. The described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
FIG. 1 is a flow chart of an image fusion method according to the present invention. As shown in fig. 1, the image fusion method of the present invention includes the following steps:
step 100: and acquiring a source image. The source image comprises a fluorescence image and a visible light image to be fused.
Step 200: and carrying out two-scale decomposition on the source image by adopting a Gaussian filtering method to obtain a base layer image and a detail layer image of the source image. The base layer image of the source image comprises a base layer image corresponding to the fluorescent image and a base layer image corresponding to the visible light image, and the detail layer image of the source image comprises a detail layer image corresponding to the fluorescent image and a detail layer image corresponding to the visible light image. The specific process is as follows:
Using the formulas

B_N = I_N * G(r, σ), D_N = I_N − B_N

perform two-scale decomposition on the fluorescence image to obtain the base layer image and the detail layer image corresponding to the fluorescence image, where I_N is the fluorescence image, B_N is the base layer image corresponding to the fluorescence image, D_N is the detail layer image corresponding to the fluorescence image, G(r, σ) is a Gaussian filter, r is the size of the filter window, and σ is the standard deviation.

Using the formulas

B_V = I_V * G(r, σ), D_V = I_V − B_V

perform two-scale decomposition on the visible light image to obtain the base layer image and the detail layer image corresponding to the visible light image, where I_V is the visible light image, B_V is the base layer image corresponding to the visible light image, and D_V is the detail layer image corresponding to the visible light image.
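As a concrete illustration of this step, the decomposition can be written in a few lines of Python. The sketch below is ours, not the patent's reference implementation; the σ value (and the filter window it implies) is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_decompose(img, sigma=2.0):
    """Two-scale decomposition: B = I * G(r, sigma), D = I - B."""
    img = img.astype(np.float64)
    base = gaussian_filter(img, sigma=sigma)  # large-scale base layer
    detail = img - base                       # small-scale detail layer
    return base, detail

# Usage on the two source images (loaded elsewhere as same-size arrays in [0, 1]):
# B_N, D_N = two_scale_decompose(I_N)  # fluorescence image
# B_V, D_V = two_scale_decompose(I_V)  # visible light image
```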
Step 300: a first weight map of the highlighted fluorescence information is constructed using a nonlinear function. The specific process is as follows:
using formulas
Figure BDA0002366037940000081
Identifying target characteristic information in a base layer image corresponding to the fluorescent image to obtain a fluorescent information characteristic image R; wherein, R (x, y) is the pixel value of the pixel point at (x, y) position in the fluorescence information characteristic image, | BN(x, y) | is the pixel value of the pixel point at (x, y) position in the base layer image corresponding to the fluorescence image, | BVAnd (x, y) | is the pixel value of a pixel point at (x, y) position in the base layer image corresponding to the visible light image.
Using formulas
Figure BDA0002366037940000082
Normalizing the fluorescence information characteristic image to obtain an enhancement coefficient matrix P; where P (x, y) is the enhancement coefficient value at (x, y) in the enhancement coefficient matrix.
Using a non-linear function using the formula WB=G(r,σ)*Sλ(P) adjusting the enhancement coefficient matrix to obtain a first weight map WB(ii) a Wherein G (r, σ) is a Gaussian filter, SλIn the form of a non-linear function,
Figure BDA0002366037940000083
x is the independent variable of the nonlinear function, x is equal to [0,1 ]]λ is the enhancement coefficient, λ ∈ [0, ∞).
Step 400: and fusing the base layer image corresponding to the fluorescence image and the base layer image corresponding to the visible light image according to the first weight map to obtain the base layer image of the fused image. Specifically, using formula BF=BNWB+BV(1-WB) A base layer image corresponding to the fluorescent image andfusing the base layer image corresponding to the visible light image to obtain a base layer image of a fused image; wherein, BNBase layer image corresponding to fluorescent image, BVFor base layer images corresponding to visible light images, WBIs a first weight map, BFIs the base layer image of the fused image.
Step 500: a second weight map of enhanced fluorescence information is constructed based on the significance detection. The second weight map includes a final weight map of the fluorescence image and a final weight map of the visible light image. The specific process is as follows:
using median and mean filters, using formulae
Figure BDA0002366037940000084
Constructing visual saliency characteristics of the fluorescence image and the visible image; wherein HNAs a visually significant feature of the fluorescence image, HVBeing a visually significant feature of a visible light image, INAs the fluorescence image, IVFor the visible image, MF is a median filter and AF is a mean filter. Wherein, the filtering radius of the mean filter is set to 31, and the filtering radius of the median filter can be set to 3.
Using formulas
Figure BDA0002366037940000091
Normalizing the visual saliency characteristics of the fluorescence image and the visual saliency characteristics of the visible light image to obtain an initial weight map; wherein, WNIs an initial weight map of the fluorescence image, WVIs an initial weight map of the visible light image.
Using a formula based on the enhancement coefficient matrix and the initial weight map
Figure BDA0002366037940000092
Constructing a second weight map; wherein the content of the first and second substances,
Figure BDA0002366037940000093
is the final weight map of the fluorescence image,
Figure BDA0002366037940000094
and K is a fluorescence information enhancement coefficient, and P is an enhancement coefficient matrix.
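A sketch of Step 500 follows. The |MF − AF| saliency construction mirrors the TSIFVS-style method the patent is later compared against, and the final-weight formula, shown only as an image in the original, is approximated by boosting the fluorescence weight by K·P and renormalizing; both choices are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def visual_saliency(img, median_size=3, mean_size=31):
    # H = |MF(I) - AF(I)|; the sizes 3 and 31 follow the radii quoted in the
    # text (treated here as window sizes, an assumption).
    return np.abs(median_filter(img, size=median_size)
                  - uniform_filter(img, size=mean_size))

def second_weight_maps(I_N, I_V, P, K=1.0):
    eps = 1e-12
    H_N, H_V = visual_saliency(I_N), visual_saliency(I_V)
    # Initial weight maps: per-pixel normalization of the saliency features.
    W_N = H_N / (H_N + H_V + eps)
    W_V = 1.0 - W_N
    # Final weight maps: assumed form in which the fluorescence weight is
    # boosted by the enhancement matrix P with coefficient K, then the pair
    # is renormalized so the weights still sum to one at every pixel.
    W_N_final = W_N * (1.0 + K * P)
    total = W_N_final + W_V + eps
    return W_N_final / total, W_V / total
```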
Step 600: and fusing the detail layer image corresponding to the fluorescence image and the detail layer image corresponding to the visible light image according to the second weight map to obtain the detail layer image of the fused image. In particular, using formulae
Figure BDA0002366037940000095
Fusing the detail layer image corresponding to the fluorescence image and the detail layer image corresponding to the visible light image to obtain a detail layer image D of a fused imageF(ii) a Wherein D isNFor the detail layer image corresponding to the fluorescence image, DVFor the detail layer image corresponding to the visible light image,
Figure BDA0002366037940000096
is the final weight map of the fluorescence image,
Figure BDA0002366037940000097
is the final weight map of the visible light image.
Step 700: and reconstructing the base layer image of the fused image and the detail layer image of the fused image to obtain the fused image. Specifically, F ═ D is usedF+BFAnd reconstructing the base layer image of the fused image and the detail layer image of the fused image to obtain a fused image F.
The image fusion method of the present invention effectively retains the saliency information of the source images while highlighting the detail information of the fluorescence image, achieving accurate localization and detail enhancement of the target object to be detected and thereby providing a more comfortable visual effect. The peak signal-to-noise ratio, mutual information, edge retention, and visual information fidelity indices are significantly improved, the fusion complexity is low, and the fusion efficiency is significantly improved.
FIG. 2 is a schematic structural diagram of an image fusion system according to the present invention. As shown in fig. 2, the image fusion system of the present invention includes the following structure:
a source image obtaining module 201, configured to obtain a source image; the source image comprises a fluorescence image and a visible light image to be fused.
A two-scale decomposition module 202, configured to perform two-scale decomposition on the source image by using a gaussian filtering method to obtain a base layer image and a detail layer image of the source image; the base layer image of the source image comprises a base layer image corresponding to the fluorescent image and a base layer image corresponding to the visible light image, and the detail layer image of the source image comprises a detail layer image corresponding to the fluorescent image and a detail layer image corresponding to the visible light image.
A first weight map construction module 203 for constructing a first weight map that highlights fluorescence information using a nonlinear function.
And a base layer image fusion module 204, configured to fuse the base layer image corresponding to the fluorescent image and the base layer image corresponding to the visible light image according to the first weight map, so as to obtain a base layer image of a fused image.
A second weight map construction module 205 for constructing a second weight map that enhances fluorescence information based on saliency detection; the second weight map includes the final weight map of the fluorescence image and the final weight map of the visible light image.
And a detail layer image fusion module 206, configured to fuse the detail layer image corresponding to the fluorescence image and the detail layer image corresponding to the visible light image according to the second weight map, so as to obtain a detail layer image of a fused image.
And the reconstructing module 207 is configured to reconstruct the base layer image of the fused image and the detail layer image of the fused image to obtain a fused image.
As a specific embodiment, the two-scale decomposition module 202 in the image fusion system of the present invention specifically includes:
A fluorescence image two-scale decomposition unit for using the formulas

B_N = I_N * G(r, σ), D_N = I_N − B_N

to perform two-scale decomposition on the fluorescence image to obtain the base layer image and the detail layer image corresponding to the fluorescence image, where I_N is the fluorescence image, B_N is the base layer image corresponding to the fluorescence image, D_N is the detail layer image corresponding to the fluorescence image, G(r, σ) is a Gaussian filter, r is the size of the filter window, and σ is the standard deviation.

A visible light image two-scale decomposition unit for using the formulas

B_V = I_V * G(r, σ), D_V = I_V − B_V

to perform two-scale decomposition on the visible light image to obtain the base layer image and the detail layer image corresponding to the visible light image, where I_V is the visible light image, B_V is the base layer image corresponding to the visible light image, and D_V is the detail layer image corresponding to the visible light image.
As a specific embodiment, the first weight map constructing module 203 in the image fusion system of the present invention specifically includes:
A target characteristic information identification unit for using a formula in |B_N(x, y)| and |B_V(x, y)| (rendered only as an image in the original) to identify the target feature information in the base layer image corresponding to the fluorescence image and obtain the fluorescence information feature image R, where R(x, y) is the pixel value at position (x, y) in the fluorescence information feature image, |B_N(x, y)| is the pixel value at position (x, y) in the base layer image corresponding to the fluorescence image, and |B_V(x, y)| is the pixel value at position (x, y) in the base layer image corresponding to the visible light image.

A first normalization unit for using the formula

P(x, y) = (R(x, y) − R_min) / (R_max − R_min)

to normalize the fluorescence information feature image and obtain the enhancement coefficient matrix P, where P(x, y) is the enhancement coefficient value at (x, y) in the enhancement coefficient matrix.

A nonlinear adjustment unit for using a nonlinear function via the formula W_B = G(r, σ) * S_λ(P) to adjust the enhancement coefficient matrix and obtain the first weight map W_B, where G(r, σ) is a Gaussian filter and S_λ is a nonlinear function (rendered only as an image in the original) whose argument is x and whose enhancement coefficient is λ.
As a specific embodiment, the second weight map constructing module 205 in the image fusion system of the present invention specifically includes:
A visual saliency feature construction unit for using a median filter and a mean filter via a formula (rendered only as an image in the original) to construct the visual saliency features of the fluorescence image and the visible light image, where H_N is the visual saliency feature of the fluorescence image, H_V is the visual saliency feature of the visible light image, I_N is the fluorescence image, I_V is the visible light image, MF is a median filter, and AF is a mean filter.

A second normalization unit for using the formulas

W_N = H_N / (H_N + H_V), W_V = H_V / (H_N + H_V)

to normalize the visual saliency feature of the fluorescence image and the visual saliency feature of the visible light image and obtain the initial weight maps, where W_N is the initial weight map of the fluorescence image and W_V is the initial weight map of the visible light image.

A final weight map construction unit for using, according to the enhancement coefficient matrix and the initial weight maps, a formula in W_N, W_V, K, and P (rendered only as an image in the original) to construct the second weight map, consisting of the final weight map W_N' of the fluorescence image and the final weight map W_V' of the visible light image, where K is the fluorescence information enhancement coefficient and P is the enhancement coefficient matrix.
An embodiment is provided below to further illustrate the scheme of the present invention shown in fig. 1 and 2. FIG. 3 is a flow chart illustrating an embodiment of the present invention. As shown in fig. 3, the present embodiment includes the following steps:
step 1: and performing two-scale decomposition on the source image by using Gaussian filtering to obtain a base layer containing large-scale information and a detail layer image containing small-scale information in the fluorescence image and the visible light image. The method comprises the steps that a source image, namely an input image, in the step is an infrared image (namely a fluorescent image) and a visible light image which are acquired for the same scene, background information of a target scene can be effectively presented in the visible light image, the infrared image has the advantage of highlighting the target information, the images are identical in size, the input image is subjected to Gaussian filtering to carry out two-scale decomposition on the source image, and a base layer containing large-scale information and a detail layer image containing small-scale information are obtained.
Step 2: for the fusion rule of the base layer image, a weight graph highlighting fluorescence information is constructed by using a nonlinear function, and the relative quantity of fluorescence spectrum information is enhanced in a fine adjustment mode, so that the base layer image of the fusion image is obtained. The method comprises the following specific steps:
b1: and identifying target characteristic information of the fluorescence base layer image to obtain a fluorescence information characteristic image from the base layer image after the source image is subjected to two-scale decomposition.
B2: and after the fluorescence information characteristic image is obtained, normalizing the characteristic image to obtain a coefficient enhancement matrix.
B3: and after the coefficient enhancement matrix is obtained, carrying out nonlinear function adjustment on the enhancement coefficient matrix to obtain an initial weight map of the base layer fusion.
B4: and weighting the base layer images of the fluorescence image and the visible light image through a base layer weight graph to obtain a fused base layer image.
Step 3: and for the fusion rule of the detail layer image, constructing a fusion weight map for enhancing fluorescence information by using a significance detection and coefficient enhancement matrix, and obtaining the detail layer image of the fusion image through weighting fusion.
C1: and (3) constructing a saliency image by using median filtering and mean filtering of a detail layer image of the source image after the two-scale decomposition to obtain the visual saliency characteristics of the source image.
C2: and after the significance images of the fluorescence image and the visible light image are obtained, normalizing to construct an initial weight map.
C3: and after the initial weight map is obtained, constructing an enhanced weight map by using the coefficient enhancement matrix and the initial weight map.
C4: and the fused detail layer image is obtained by weighting the detail layer images of the fluorescence image and the visible light image through the detail layer weight map.
Step 4: and obtaining a fused image through the process of reconstructing the fused base layer image and the fused detail layer image.
The method of the invention yields richer detail, higher definition, and a satisfactory running speed compared with other methods. The verification uses two groups of images as source images, and the results are measured with traditional image quality evaluation criteria: peak signal-to-noise ratio (PSNR), mutual information (MI), edge preservation (Q^AB/F), visual information fidelity for fusion (VIFF), and information entropy (IE). The proposed method (PROPOSED) is compared with the dual-tree complex wavelet transform (DTCWT), the discrete wavelet transform (DWT), fast filtering image fusion (FFIF), the Laplacian pyramid algorithm (LP), the non-subsampled contourlet transform (NSCT), the ratio low-pass pyramid (RP), and two-scale fusion based on saliency detection (TSIFVS). The verification results are shown in Tables 1, 2, and 3: the method effectively retains the saliency information of the source images while highlighting the detail information of the fluorescence image, achieves accurate localization and detail enhancement of the target object to be detected, and provides a more comfortable visual effect. The peak signal-to-noise ratio, mutual information, edge retention, and visual information fidelity indices are significantly improved, the fusion complexity is low, and the fusion efficiency is significantly improved.
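Of the listed criteria, PSNR and information entropy are simple enough to state directly. The sketch below assumes single-channel images scaled to [0, 1] and omits MI, Q^AB/F, and VIFF, which need fuller reference implementations.

```python
import numpy as np

def psnr(reference, fused, peak=1.0):
    """Peak signal-to-noise ratio between a reference and the fused image."""
    mse = np.mean((reference.astype(np.float64) - fused) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def information_entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```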
TABLE 1 Objective index evaluation of the first group of fused images

[Table 1 is rendered as an image in the original publication; its values are not recoverable here.]

TABLE 2 Objective index evaluation of the second group of fused images

[Table 2 is rendered as an image in the original publication.]

TABLE 3 Running-time comparison of the fusion methods

[Table 3 is rendered as an image in the original publication.]
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention are described herein using specific examples, which are provided only to help understand the method and core concept of the invention. Meanwhile, a person skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the contents of this specification should not be construed as limiting the invention.

Claims (10)

1. An image fusion method, comprising:
acquiring a source image; the source image comprises a fluorescence image and a visible light image to be fused;
performing two-scale decomposition on the source image by adopting a Gaussian filtering method to obtain a base layer image and a detail layer image of the source image; the base layer image of the source image comprises a base layer image corresponding to the fluorescent image and a base layer image corresponding to the visible light image, and the detail layer image of the source image comprises a detail layer image corresponding to the fluorescent image and a detail layer image corresponding to the visible light image;
constructing a first weight map that highlights fluorescence information using a nonlinear function;
fusing the base layer image corresponding to the fluorescence image and the base layer image corresponding to the visible light image according to the first weight map to obtain a base layer image of a fused image;
constructing a second weight map that enhances fluorescence information based on saliency detection; the second weight map comprises a final weight map of the fluorescence image and a final weight map of the visible light image;
fusing the detail layer image corresponding to the fluorescence image and the detail layer image corresponding to the visible light image according to the second weight map to obtain a detail layer image of a fused image;
and reconstructing the base layer image of the fused image and the detail layer image of the fused image to obtain the fused image.
2. The image fusion method according to claim 1, wherein the performing two-scale decomposition on the source image by using a gaussian filtering method to obtain a base layer image and a detail layer image of the source image specifically comprises:
using the formulas

B_N = I_N * G(r, σ), D_N = I_N − B_N

to perform two-scale decomposition on the fluorescence image to obtain the base layer image and the detail layer image corresponding to the fluorescence image, where I_N is the fluorescence image, B_N is the base layer image corresponding to the fluorescence image, D_N is the detail layer image corresponding to the fluorescence image, G(r, σ) is a Gaussian filter, r is the size of the filter window, and σ is the standard deviation;

using the formulas

B_V = I_V * G(r, σ), D_V = I_V − B_V

to perform two-scale decomposition on the visible light image to obtain the base layer image and the detail layer image corresponding to the visible light image, where I_V is the visible light image, B_V is the base layer image corresponding to the visible light image, and D_V is the detail layer image corresponding to the visible light image.
3. The image fusion method according to claim 1, wherein the constructing a first weight map that highlights fluorescence information using a nonlinear function specifically comprises:

using a formula in |B_N(x, y)| and |B_V(x, y)| (rendered only as an image in the original) to identify the target feature information in the base layer image corresponding to the fluorescence image and obtain the fluorescence information feature image R, where R(x, y) is the pixel value at position (x, y) in the fluorescence information feature image, |B_N(x, y)| is the pixel value at position (x, y) in the base layer image corresponding to the fluorescence image, and |B_V(x, y)| is the pixel value at position (x, y) in the base layer image corresponding to the visible light image;

using the formula

P(x, y) = (R(x, y) − R_min) / (R_max − R_min)

to normalize the fluorescence information feature image and obtain the enhancement coefficient matrix P, where P(x, y) is the enhancement coefficient value at (x, y) in the enhancement coefficient matrix;

using a nonlinear function via the formula W_B = G(r, σ) * S_λ(P) to adjust the enhancement coefficient matrix and obtain the first weight map W_B, where G(r, σ) is a Gaussian filter and S_λ is a nonlinear function (rendered only as an image in the original) whose argument is x and whose enhancement coefficient is λ.
4. The image fusion method according to claim 1, wherein the fusing the base layer image corresponding to the fluorescent image and the base layer image corresponding to the visible light image according to the first weight map to obtain the base layer image of the fused image specifically comprises:
using the formula B_F = B_N·W_B + B_V·(1 − W_B) to fuse the base layer image corresponding to the fluorescence image with the base layer image corresponding to the visible light image and obtain the base layer image of the fused image; where B_N is the base layer image corresponding to the fluorescence image, B_V is the base layer image corresponding to the visible light image, W_B is the first weight map, and B_F is the base layer image of the fused image.
5. The image fusion method according to claim 3, wherein the constructing a second weight map that enhances fluorescence information based on saliency detection specifically comprises:

using a median filter and a mean filter via a formula (rendered only as an image in the original) to construct the visual saliency features of the fluorescence image and the visible light image, where H_N is the visual saliency feature of the fluorescence image, H_V is the visual saliency feature of the visible light image, I_N is the fluorescence image, I_V is the visible light image, MF is a median filter, and AF is a mean filter;

using the formulas

W_N = H_N / (H_N + H_V), W_V = H_V / (H_N + H_V)

to normalize the visual saliency feature of the fluorescence image and the visual saliency feature of the visible light image and obtain the initial weight maps, where W_N is the initial weight map of the fluorescence image and W_V is the initial weight map of the visible light image;

using, according to the enhancement coefficient matrix and the initial weight maps, a formula in W_N, W_V, K, and P (rendered only as an image in the original) to construct the second weight map, consisting of the final weight map W_N' of the fluorescence image and the final weight map W_V' of the visible light image, where K is the fluorescence information enhancement coefficient and P is the enhancement coefficient matrix.
6. The image fusion method according to claim 1, wherein the fusing the detail layer image corresponding to the fluorescence image and the detail layer image corresponding to the visible light image according to the second weight map to obtain the detail layer image of the fused image specifically comprises:

using the formula

D_F = W_N'·D_N + W_V'·D_V

to fuse the detail layer image corresponding to the fluorescence image with the detail layer image corresponding to the visible light image and obtain the detail layer image D_F of the fused image, where D_N is the detail layer image corresponding to the fluorescence image, D_V is the detail layer image corresponding to the visible light image, W_N' is the final weight map of the fluorescence image, and W_V' is the final weight map of the visible light image.
7. An image fusion system, comprising:
the source image acquisition module is used for acquiring a source image; the source image comprises a fluorescence image and a visible light image to be fused;
the two-scale decomposition module is used for carrying out two-scale decomposition on the source image by adopting a Gaussian filtering method to obtain a base layer image and a detail layer image of the source image; the base layer image of the source image comprises a base layer image corresponding to the fluorescent image and a base layer image corresponding to the visible light image, and the detail layer image of the source image comprises a detail layer image corresponding to the fluorescent image and a detail layer image corresponding to the visible light image;
a first weight map construction module for constructing a first weight map that highlights fluorescence information using a nonlinear function;
the base layer image fusion module is used for fusing the base layer image corresponding to the fluorescent image and the base layer image corresponding to the visible light image according to the first weight map to obtain a base layer image of a fused image;
a second weight map construction module for constructing a second weight map that enhances fluorescence information based on saliency detection; the second weight map comprises a final weight map of the fluorescence image and a final weight map of the visible light image;
the detail layer image fusion module is used for fusing the detail layer image corresponding to the fluorescence image and the detail layer image corresponding to the visible light image according to the second weight map to obtain a detail layer image of a fused image;
and the reconstruction module is used for reconstructing the base layer image of the fused image and the detail layer image of the fused image to obtain the fused image.
8. The image fusion system of claim 7, wherein the two-scale decomposition module specifically comprises:
a fluorescence image two-scale decomposition unit for using the formulas

B_N = I_N * G(r, σ), D_N = I_N − B_N

to perform two-scale decomposition on the fluorescence image to obtain the base layer image and the detail layer image corresponding to the fluorescence image, where I_N is the fluorescence image, B_N is the base layer image corresponding to the fluorescence image, D_N is the detail layer image corresponding to the fluorescence image, G(r, σ) is a Gaussian filter, r is the size of the filter window, and σ is the standard deviation;

a visible light image two-scale decomposition unit for using the formulas

B_V = I_V * G(r, σ), D_V = I_V − B_V

to perform two-scale decomposition on the visible light image to obtain the base layer image and the detail layer image corresponding to the visible light image, where I_V is the visible light image, B_V is the base layer image corresponding to the visible light image, and D_V is the detail layer image corresponding to the visible light image.
9. The image fusion system according to claim 7, wherein the first weight map construction module specifically comprises:
a target characteristic information identification unit for using a formula in |B_N(x, y)| and |B_V(x, y)| (rendered only as an image in the original) to identify the target feature information in the base layer image corresponding to the fluorescence image and obtain the fluorescence information feature image R, where R(x, y) is the pixel value at position (x, y) in the fluorescence information feature image, |B_N(x, y)| is the pixel value at position (x, y) in the base layer image corresponding to the fluorescence image, and |B_V(x, y)| is the pixel value at position (x, y) in the base layer image corresponding to the visible light image;

a first normalization unit for using the formula

P(x, y) = (R(x, y) − R_min) / (R_max − R_min)

to normalize the fluorescence information feature image and obtain the enhancement coefficient matrix P, where P(x, y) is the enhancement coefficient value at (x, y) in the enhancement coefficient matrix;

a nonlinear adjustment unit for using a nonlinear function via the formula W_B = G(r, σ) * S_λ(P) to adjust the enhancement coefficient matrix and obtain the first weight map W_B, where G(r, σ) is a Gaussian filter and S_λ is a nonlinear function (rendered only as an image in the original) whose argument is x and whose enhancement coefficient is λ.
10. The image fusion system according to claim 9, wherein the second weight map construction module specifically comprises:
a visual saliency feature construction unit for using a median filter and a mean filter via a formula (rendered only as an image in the original) to construct the visual saliency features of the fluorescence image and the visible light image, where H_N is the visual saliency feature of the fluorescence image, H_V is the visual saliency feature of the visible light image, I_N is the fluorescence image, I_V is the visible light image, MF is a median filter, and AF is a mean filter;

a second normalization unit for using the formulas

W_N = H_N / (H_N + H_V), W_V = H_V / (H_N + H_V)

to normalize the visual saliency feature of the fluorescence image and the visual saliency feature of the visible light image and obtain the initial weight maps, where W_N is the initial weight map of the fluorescence image and W_V is the initial weight map of the visible light image;

a final weight map construction unit for using, according to the enhancement coefficient matrix and the initial weight maps, a formula in W_N, W_V, K, and P (rendered only as an image in the original) to construct the second weight map, consisting of the final weight map W_N' of the fluorescence image and the final weight map W_V' of the visible light image, where K is the fluorescence information enhancement coefficient and P is the enhancement coefficient matrix.
CN202010036038.XA 2020-01-14 2020-01-14 Image fusion method and system Active CN111223069B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010036038.XA CN111223069B (en) 2020-01-14 2020-01-14 Image fusion method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010036038.XA CN111223069B (en) 2020-01-14 2020-01-14 Image fusion method and system

Publications (2)

Publication Number Publication Date
CN111223069A true CN111223069A (en) 2020-06-02
CN111223069B CN111223069B (en) 2023-06-02

Family

ID=70829558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010036038.XA Active CN111223069B (en) 2020-01-14 2020-01-14 Image fusion method and system

Country Status (1)

Country Link
CN (1) CN111223069B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106361281A (en) * 2016-08-31 2017-02-01 Real-time fluorescence imaging and fusion method and device
CN107248150A (en) * 2017-07-31 2017-10-13 Multi-scale image fusion method based on guided-filter salient-region extraction
CN108052988A (en) * 2018-01-04 2018-05-18 Guided saliency image fusion method based on wavelet transform
CN109509164A (en) * 2018-09-28 2019-03-22 Multi-sensor image fusion method and system based on GDGF
CN109509163A (en) * 2018-09-28 2019-03-22 Multi-focus image fusion method and system based on FGF
CN110189284A (en) * 2019-05-24 2019-08-30 Infrared and visible light image fusion method
CN110490914A (en) * 2019-07-29 2019-11-22 Image fusion method based on adaptive brightness and saliency detection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHIQIANG ZHOU et al.: "Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters" *
XU Danping et al.: "Infrared and visible image fusion based on bilateral filtering and NSST" *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652832B (en) * 2020-07-09 2023-05-12 南昌航空大学 Infrared and visible light image fusion method based on sliding window technology
CN111815549A (en) * 2020-07-09 2020-10-23 湖南大学 Night vision image colorization method based on guided filtering image fusion
CN111652832A (en) * 2020-07-09 2020-09-11 南昌航空大学 Infrared and visible light image fusion method based on sliding window technology
CN111968105A (en) * 2020-08-28 2020-11-20 南京诺源医疗器械有限公司 Method for detecting salient region in medical fluorescence imaging
CN112037216A (en) * 2020-09-09 2020-12-04 南京诺源医疗器械有限公司 Image fusion method for medical fluorescence imaging system
CN112037216B (en) * 2020-09-09 2022-02-15 南京诺源医疗器械有限公司 Image fusion method for medical fluorescence imaging system
CN112200735A (en) * 2020-09-18 2021-01-08 安徽理工大学 Temperature identification method based on flame image and control method of low-concentration gas combustion system
CN112419212A (en) * 2020-10-15 2021-02-26 卡乐微视科技(云南)有限公司 Infrared and visible light image fusion method based on side window guide filtering
CN112419212B (en) * 2020-10-15 2024-05-17 卡乐微视科技(云南)有限公司 Infrared and visible light image fusion method based on side window guide filtering
CN112801927A (en) * 2021-01-28 2021-05-14 中国地质大学(武汉) Infrared and visible light image fusion method based on three-scale decomposition
CN112801927B (en) * 2021-01-28 2022-07-19 中国地质大学(武汉) Infrared and visible light image fusion method based on three-scale decomposition
CN112884690B (en) * 2021-02-26 2023-01-06 中国科学院西安光学精密机械研究所 Infrared and visible light image fusion method based on three-scale decomposition
CN112884690A (en) * 2021-02-26 2021-06-01 中国科学院西安光学精密机械研究所 Infrared and visible light image fusion method based on three-scale decomposition
CN114283486A (en) * 2021-12-20 2022-04-05 北京百度网讯科技有限公司 Image processing method, model training method, model recognition method, device, equipment and storage medium
CN115330624A (en) * 2022-08-17 2022-11-11 华伦医疗用品(深圳)有限公司 Method and device for acquiring fluorescence image and endoscope system

Also Published As

Publication number Publication date
CN111223069B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
CN111223069A (en) Image fusion method and system
Sahu et al. Different image fusion techniques–a critical review
Zhao et al. Fusion of visible and infrared images using saliency analysis and detail preserving based image decomposition
CN111709902A (en) Infrared and visible light image fusion method based on self-attention mechanism
CN101630405B Multi-focus image fusion method using kernel Fisher classification and redundant wavelet transform
CN108230260B (en) Fusion method of infrared image and low-light-level image
CN109523513B (en) Stereoscopic image quality evaluation method based on sparse reconstruction color fusion image
He et al. Infrared and visible image fusion based on target extraction in the nonsubsampled contourlet transform domain
Tan et al. Remote sensing image fusion via boundary measured dual-channel PCNN in multi-scale morphological gradient domain
CN110097617B (en) Image fusion method based on convolutional neural network and significance weight
Tao et al. Hyperspectral image recovery based on fusion of coded aperture snapshot spectral imaging and RGB images by guided filtering
Song et al. Triple-discriminator generative adversarial network for infrared and visible image fusion
Arivazhagan et al. A modified statistical approach for image fusion using wavelet transform
CN112669249A Infrared and visible light image fusion method combining improved non-subsampled contourlet transform (NSCT) and deep learning
CN115330653A (en) Multi-source image fusion method based on side window filtering
Liu et al. An attention-guided and wavelet-constrained generative adversarial network for infrared and visible image fusion
Patel et al. A review on infrared and visible image fusion techniques
CN111815550A (en) Infrared and visible light image fusion method based on gray level co-occurrence matrix
Jia et al. Research on the decomposition and fusion method for the infrared and visible images based on the guided image filtering and Gaussian filter
Ren et al. Fusion of infrared and visible images based on discrete cosine wavelet transform and high pass filter
Wang et al. Infrared weak-small targets fusion based on latent low-rank representation and DWT
CN116051444A Effective adaptive fusion method for infrared and visible light images
Pang et al. Infrared and visible image fusion based on double fluid pyramids and multi-scale gradient residual block
Avcı et al. MFIF-DWT-CNN: Multi-focus image fusion based on discrete wavelet transform with deep convolutional neural network
Xiao et al. MOFA: A novel dataset for Multi-modal Image Fusion Applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant