CN110415202B - Image fusion method and device, electronic equipment and storage medium - Google Patents

Image fusion method and device, electronic equipment and storage medium

Info

Publication number
CN110415202B
CN110415202B (application CN201910703771.XA)
Authority
CN
China
Prior art keywords
image
layer
detail
visible light
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910703771.XA
Other languages
Chinese (zh)
Other versions
CN110415202A (en)
Inventor
王家琪 (Wang Jiaqi)
程敏 (Cheng Min)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN201910703771.XA
Publication of CN110415202A
Application granted
Publication of CN110415202B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/70 — Denoising; Smoothing
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10048 — Infrared image
    • G06T2207/20024 — Filtering details
    • G06T2207/20221 — Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image fusion method and apparatus, an electronic device, and a storage medium. The method comprises: filtering a first image to be fused with a first filter to obtain a second image; filtering the first image with a second filter to obtain a third image; calculating the difference between corresponding pixels of the first image and the second image to obtain a first detail image; calculating the difference between corresponding pixels of the second image and the third image to obtain a first edge image; and fusing the first detail image and the first edge image. Because the first detail image and the first edge image are both derived from the first image and its filtered versions before fusion, the scheme preserves image detail information and edge information at the same time, so the fused image has better quality.

Description

Image fusion method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image fusion method and apparatus, an electronic device, and a storage medium.
Background
In recent years, the demand for multi-band cameras has grown with the push for all-weather monitoring in the security field; among these, the combination of infrared and visible light is popular. Combined infrared/visible cameras are mainly used in all-weather, all-scene applications such as forest fire prevention, industrial temperature measurement, border and coastal defense, night security, and security in foggy conditions. The core idea of image fusion is to use an algorithm to 'fuse' multiple images of the same scene, acquired by image sensors that operate in different wavelength ranges or have different imaging mechanisms, into a new image. The fused image contains more information to which the Human Visual System (HVS) is sensitive, making it better suited to human observation or to computer-based monitoring, classification, and identification. Image fusion is an important means of improving the performance of image processing devices; in fields such as target detection in particular, it can increase the effective information content of images and thereby improve detection. Thermal infrared imagers and visible-light CCD cameras are two of the most commonly used imaging sensors in target detection. The infrared image obtained by a thermal imager reflects the spatial distribution of the invisible infrared radiation of the target and background; its radiometric distribution depends mainly on the temperature and emissivity of the observed objects, so thermal targets that are hard to see in visible light stand out clearly in infrared images. Compared with the infrared image, the visible light image provides more detailed information about the target or scene and is better suited to human observation.
Visible light and infrared images are widely used together because of their good complementarity: a fused image draws on the advantages of both wave bands, yielding a more comprehensive and accurate description of the scene and supporting accurate identification, analysis, understanding, and judgment of targets. Here, visible light covers various illumination conditions, such as normal-brightness visible light and dim light; infrared light includes both short-wave and long-wave infrared.
In the prior art, image fusion is generally implemented with algorithms such as the Gaussian pyramid or the gradient pyramid. The Gaussian pyramid algorithm loses fine detail texture during fusion; the gradient pyramid algorithm retains edge information but cannot fully preserve image detail. Prior-art fusion schemes therefore cannot preserve image detail information and edge information at the same time, resulting in poor fused-image quality.
Disclosure of Invention
The embodiments of the invention provide an image fusion method and apparatus, an electronic device, and a storage medium, to solve the prior-art problem that fusion schemes cannot preserve both image detail information and edge information, which results in poor fused-image quality.
The embodiment of the invention provides an image fusion method, which comprises the following steps:
filtering the first image to be fused by adopting a first filter to obtain a second image; filtering the first image by adopting a second filter to obtain a third image; wherein the first filter and the second filter are different;
calculating the difference value of corresponding pixel points in the first image and the second image to obtain a first detail image; calculating the difference value of corresponding pixel points in the second image and the third image to obtain a first edge image;
performing fusion processing on the first detail image and the first edge image;
wherein the first image comprises an original visible light image and an original infrared light image.
Further, after obtaining the first edge image and before performing fusion processing on the first detail image and the first edge image, the method further includes:
carrying out layering processing on the second image and the third image based on a filter to obtain pyramid images of the second image and the third image;
for each layer of the pyramid image, calculating the difference between corresponding pixels of the previous layer's third image and this layer's second image to obtain this layer's second detail image; and calculating the difference between corresponding pixels of this layer's second image and this layer's third image to obtain this layer's second edge image;
the fusing the first detail image and the first edge image comprises:
and carrying out fusion processing on the first detail image and the first edge image and the second detail image and the second edge image of each layer of pyramid image.
Further, the fusing the first detail image and the first edge image, and the second detail image and the second edge image of each layer of pyramid image includes:
and carrying out fusion processing on the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image and the third image of the last layer.
Further, before the filtering processing is performed on the first image to be fused by using the first filter to obtain the second image, the method further includes:
and performing enhancement processing on the first image to be fused, and performing subsequent steps on the enhanced first image.
Further, the enhancing the first image to be fused includes:
if the first image is an original visible light image, normalizing the pixel values of the pixels in the original visible light image to the range [0, 255];
performing guided filtering processing on the normalized visible light image to obtain a first base layer image, and determining a detail layer image and a second base layer image according to the normalized visible light image and the first base layer image;
and determining the visible light image after the enhancement processing according to the detail layer image and the second base layer image.
Further, the enhancing the first image to be fused includes:
and if the first image is an original infrared light image, determining the infrared light image after enhancement processing according to the pixel value of each pixel point in the infrared light image and the maximum pixel value and the minimum pixel value in the original infrared light image.
Further, before performing fusion processing on the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image, and the third image of the last layer, the method further includes:
determining a weight correlation coefficient of each group of corresponding pixel points according to the magnitude relation of pixel values of each group of corresponding pixel points in the same layer of image to be fused; the same layer of image to be fused comprises the same layer of infrared light detail image and visible light detail image, the same layer of infrared light edge image and visible light edge image, and the last layer of infrared light image and visible light image;
carrying out nonlinear transformation processing on the weight correlation coefficient of each group, and carrying out low-pass filtering processing on the weight correlation coefficient after the nonlinear transformation processing to obtain the weight of each group of corresponding pixel points in the image to be fused in the same layer;
the fusing the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image, and the last layer of third image comprises:
and according to the weight of each group of corresponding pixel points in the same layer of images to be fused, carrying out fusion processing on the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid images and the third image of the last layer.
Further, the performing a nonlinear transformation process on the weight correlation coefficient of each group includes:
determining an energy value of the first image, wherein the energy value of the first image comprises an energy value of an original visible light image to be fused and an energy value of an original infrared light image to be fused;
and determining a nonlinear transformation evaluation parameter according to the ratio of the energy value of the original infrared light image to the energy value of the original visible light image, and performing nonlinear transformation processing on the weight correlation coefficient of each group according to the nonlinear transformation evaluation parameter.
Further, the determining an energy value of the first image comprises:
dividing the first image into a plurality of regions, and performing Fourier transform on each region to obtain a frequency domain value of each pixel point in each region;
determining the sensitivity coefficient of each pixel point according to the frequency domain value of each pixel point and the corresponding sensitivity function;
aiming at each region, determining the energy value of the region according to the sensitivity coefficient of each pixel point in the region; determining the gradient strength of each pixel point in the first image according to an edge detection algorithm, and determining the gradient coefficient of the region according to the gradient strength of each pixel point in the region;
and determining the energy value of the first image according to the energy value and the gradient coefficient of each region and the number of pixel points contained in each region.
Further, the fusing the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image, and the last layer of third image according to the weight of each group of corresponding pixel points in the same layer of image to be fused includes:
for the first infrared detail image and first visible detail image in the first detail image, and the first infrared edge image and first visible edge image in the first edge image, taking the higher of each pair of corresponding pixel values as the pixel value of the corresponding pixel of the fused fourth image;
for the second infrared detail image and second visible detail image in each pyramid layer's second detail image, the second infrared edge image and second visible edge image in each layer's second edge image, and the third infrared image and third visible image in the last layer's third image, weighting the pixel values of corresponding pixels in the same-layer infrared and visible images according to the weight of each group of corresponding pixels in the same-layer images to be fused, to obtain a fused fifth image;
and carrying out fusion processing on the fourth image and the fifth image.
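The two fusion rules above can be sketched compactly: a per-pixel maximum over the first-level detail and edge layers yields the "fourth image", a per-pixel weighted average over the deeper layers and the base yields the "fifth image", and the two are summed. `fuse_images` and its argument layout are names and conventions chosen here.

```python
import numpy as np

def fuse_images(first_ir, first_vis, layer_pairs, layer_weights):
    """first_ir / first_vis: lists of first-level detail/edge layers, fused by
    per-pixel max into the fourth image; layer_pairs: (ir, vis) arrays for
    deeper pyramid layers and the last-layer base, fused by per-pixel weights
    into the fifth image; the result is their pixel-wise sum."""
    fourth = sum(np.maximum(ir, vis) for ir, vis in zip(first_ir, first_vis))
    fifth = sum(w * ir + (1.0 - w) * vis
                for (ir, vis), w in zip(layer_pairs, layer_weights))
    return fourth + fifth
```

The max rule favors whichever modality carries the stronger detail or edge response at each pixel, while the weighted rule blends the coarser layers smoothly.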
In another aspect, an embodiment of the present invention provides an image fusion apparatus, where the apparatus includes:
the filtering module is used for filtering the first image to be fused by adopting a first filter to obtain a second image; filtering the first image by adopting a second filter to obtain a third image; wherein the first filter and the second filter are different;
the calculating module is used for calculating the difference value of corresponding pixel points in the first image and the second image to obtain a first detail image; calculating the difference value of corresponding pixel points in the second image and the third image to obtain a first edge image;
the fusion module is used for carrying out fusion processing on the first detail image and the first edge image; wherein the first image comprises an original visible light image and an original infrared light image.
Further, the apparatus further comprises:
the layering module is used for layering the second image and the third image based on a filter to obtain pyramid images of the second image and the third image; for each layer of the pyramid image, calculating the difference between corresponding pixels of the previous layer's third image and this layer's second image to obtain this layer's second detail image; and calculating the difference between corresponding pixels of this layer's second image and this layer's third image to obtain this layer's second edge image;
the fusion module is specifically configured to perform fusion processing on the first detail image and the first edge image, and the second detail image and the second edge image of each layer of pyramid image.
Further, the fusion module is specifically configured to perform fusion processing on the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image, and the last layer of third image.
Further, the apparatus further comprises:
and the enhancement module is used for enhancing the first image to be fused and triggering the filtering module aiming at the enhanced first image.
Further, the enhancing module is specifically configured to normalize the pixel values of the pixels in the original visible light image to a range of [0, 255] if the first image is the original visible light image; performing guided filtering processing on the normalized visible light image to obtain a first base layer image, and determining a detail layer image and a second base layer image according to the normalized visible light image and the first base layer image; and determining the visible light image after the enhancement processing according to the detail layer image and the second base layer image.
Further, the enhancement module is specifically configured to determine the infrared light image after enhancement processing according to the pixel value of each pixel point in the infrared light image and the maximum pixel value and the minimum pixel value in the original infrared light image if the first image is the original infrared light image.
Further, the apparatus further comprises:
the determining module is used for determining the weight correlation coefficient of each group of corresponding pixel points according to the magnitude relation of the pixel values of each group of corresponding pixel points in the same layer of image to be fused; the same layer of image to be fused comprises the same layer of infrared light detail image and visible light detail image, the same layer of infrared light edge image and visible light edge image, and the last layer of infrared light image and visible light image; carrying out nonlinear transformation processing on the weight correlation coefficient of each group, and carrying out low-pass filtering processing on the weight correlation coefficient after the nonlinear transformation processing to obtain the weight of each group of corresponding pixel points in the image to be fused in the same layer;
the fusion module is specifically configured to perform fusion processing on the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image, and the last layer of third image according to the weight of each group of corresponding pixel points in the same layer of image to be fused.
Further, the determining module is specifically configured to determine an energy value of the first image, where the energy value of the first image includes an energy value of an original visible light image to be fused and an energy value of an original infrared light image to be fused; and determining a nonlinear transformation evaluation parameter according to the ratio of the energy value of the original infrared light image to the energy value of the original visible light image, and performing nonlinear transformation processing on the weight correlation coefficient of each group according to the nonlinear transformation evaluation parameter.
Further, the determining module is specifically configured to divide the first image into a plurality of regions, and perform fourier transform on each region to obtain a frequency domain value of each pixel point in each region; determining the sensitivity coefficient of each pixel point according to the frequency domain value of each pixel point and the corresponding sensitivity function; aiming at each region, determining the energy value of the region according to the sensitivity coefficient of each pixel point in the region; determining the gradient strength of each pixel point in the first image according to an edge detection algorithm, and determining the gradient coefficient of the region according to the gradient strength of each pixel point in the region; and determining the energy value of the first image according to the energy value and the gradient coefficient of each region and the number of pixel points contained in each region.
Further, the fusion module is specifically configured to: for the first infrared detail image and first visible detail image in the first detail image, and the first infrared edge image and first visible edge image in the first edge image, take the higher of each pair of corresponding pixel values as the pixel value of the corresponding pixel of the fused fourth image; for the second infrared detail image and second visible detail image in each pyramid layer's second detail image, the second infrared edge image and second visible edge image in each layer's second edge image, and the third infrared image and third visible image in the last layer's third image, weight the pixel values of corresponding pixels in the same-layer infrared and visible images according to the weight of each group of corresponding pixels in the same-layer images to be fused, to obtain a fused fifth image; and fuse the fourth image and the fifth image.
On the other hand, the embodiment of the invention provides electronic equipment, which comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
a processor for implementing any of the above method steps when executing a program stored in the memory.
In another aspect, an embodiment of the present invention provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the method steps of any one of the above.
The embodiment of the invention provides an image fusion method, an image fusion device, electronic equipment and a storage medium, wherein the method comprises the following steps: filtering the first image to be fused by adopting a first filter to obtain a second image; filtering the first image by adopting a second filter to obtain a third image; wherein the first filter and the second filter are different; calculating the difference value of corresponding pixel points in the first image and the second image to obtain a first detail image; calculating the difference value of corresponding pixel points in the second image and the third image to obtain a first edge image; performing fusion processing on the first detail image and the first edge image; wherein the first image comprises an original visible light image and an original infrared light image.
In the embodiment of the invention, the filter with different parameters is adopted to filter the first image to be fused to respectively obtain the second image and the third image, the first detail image is determined according to the difference value of the corresponding pixel points in the first image and the second image, the first edge image is obtained according to the difference value of the corresponding pixel points in the second image and the third image, and then the first detail image and the first edge image are fused. By adopting the image fusion scheme provided by the embodiment of the invention, the image detail information and the edge information can be considered at the same time, so that the fused image has better quality.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic diagram of an image fusion process provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a detailed image fusion process according to an embodiment of the present invention;
FIG. 3 is a schematic view of a visible light image enhancement process flow according to an embodiment of the present invention;
FIG. 4 is a block diagram of an image fusion process provided by an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an image fusion apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the attached drawings, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of an image fusion process provided in an embodiment of the present invention, where the process includes the following steps:
s101: filtering the first image to be fused by adopting a first filter to obtain a second image; filtering the first image by adopting a second filter to obtain a third image; wherein the first filter and the second filter are different.
The image fusion method provided by the embodiment of the invention is applied to an electronic device. The electronic device may be a device such as a PC or tablet computer, or it may be an image acquisition device. The image acquisition device contains an infrared image sensor and a visible light image sensor, acquiring the infrared image through the former and the visible light image through the latter. If the electronic device is itself the image acquisition device, it fuses the infrared and visible light images after acquiring them; if the electronic device is a PC, tablet, or similar device, the image acquisition device first sends the acquired infrared and visible light images to the electronic device, which then performs the fusion.
It should be noted that, in the embodiment of the present invention, an image acquired by the image acquisition device is referred to as a first image, and the first image includes an original visible light image and an original infrared light image.
Specifically, the electronic device filters the first image to be fused with a first filter; the filtered image is called the second image. The electronic device also filters the first image to be fused with a second filter; that filtered image is called the third image. That is, the first filter is applied to the original visible light image and the original infrared light image respectively, and the resulting second image comprises a second visible light image and a second infrared light image; the second filter is likewise applied to both, and the resulting third image comprises a third visible light image and a third infrared light image. Each filter has a window size r and an edge-preservation parameter ε; preferably the first and second filters have the same window size but different edge-preservation parameters. The filters in the embodiment of the invention are guided filters.
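The text names the guided filter with window size r and edge-preservation parameter ε but gives no equations. Below is a compact numpy sketch of the standard guided filter (He et al.), self-guided as in S101; `box_mean` and `guided_filter` are names chosen here.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window, borders handled by edge replication."""
    h, w = img.shape
    win = 2 * r + 1
    pad = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/column so window sums index cleanly
    return (c[win:win + h, win:win + w] - c[:h, win:win + w]
            - c[win:win + h, :w] + c[:h, :w]) / (win * win)

def guided_filter(I, p, r, eps):
    """Standard guided filter: smooths p using I as guide; a larger eps gives
    stronger smoothing, a smaller eps preserves edges more faithfully."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    var_I = box_mean(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)
```

Running the same image through two such filters with equal r but different eps yields the second and third images of S101.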
S102: calculating the difference value of corresponding pixel points in the first image and the second image to obtain a first detail image; and calculating the difference value of the corresponding pixel points in the second image and the third image to obtain a first edge image.
The first, second, and third images determined by the electronic device have the same resolution, so pixels at corresponding coordinates exist in all three images, and the differences between corresponding pixels of the first and second images, and of the second and third images, can be computed. Because the image obtained from the differences between corresponding pixels of the first and second images fully retains the image's detail information, it is called the first detail image; because the image obtained from the differences between corresponding pixels of the second and third images fully retains the image's edge information, it is called the first edge image.
Specifically, when the first image is an original visible light image, the second image is a second visible light image, and the third image is a third visible light image. An image obtained by the difference value between the corresponding pixel points in the first image and the second image is called a first visible light detail image, and an image obtained by the difference value between the corresponding pixel points in the second image and the third image is called a first visible light edge image.
When the first image is an original infrared light image, the second image is a second infrared light image, and the third image is a third infrared light image. An image obtained by the difference value between the corresponding pixel points in the first image and the second image is called a first infrared light detail image, and an image obtained by the difference value between the corresponding pixel points in the second image and the third image is called a first infrared light edge image.
S103: performing fusion processing on the first detail image and the first edge image; wherein the first image comprises an original visible light image and an original infrared light image.
After the first detail image and the first edge image are determined, the first detail image and the first edge image are subjected to fusion processing. The fusion processing may be to add pixel values of pixel points at corresponding positions of the first detail image and the first edge image. Specifically, the fusion processing may be to add pixel values of pixel points at corresponding positions of the first visible light detail image, the first visible light edge image, the first infrared light detail image, and the first infrared light edge image.
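Steps S102 and S103 reduce to pixel-wise subtraction and addition. A minimal sketch, using small arrays as stand-ins for the filtered images:

```python
import numpy as np

rng = np.random.default_rng(0)
first = rng.random((64, 64))      # stand-in for an original (visible or infrared) image
second = first * 0.9              # stand-in for the output of the first filter
third = first * 0.8               # stand-in for the output of the second filter

detail = first - second           # S102: first detail image
edge = second - third             # S102: first edge image
fused = detail + edge             # S103: pixel-wise addition as one possible fusion
```

Note that for a single input image the two differences telescope: detail + edge equals first − third, i.e. everything the second filter removed.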
In the embodiment of the invention, filters with different parameters are used to filter the first image to be fused, yielding the second image and the third image respectively; the first detail image is determined from the difference values of corresponding pixel points in the first and second images, the first edge image is obtained from the difference values of corresponding pixel points in the second and third images, and the first detail image and the first edge image are then fused. With the image fusion scheme provided by the embodiment of the invention, image detail information and edge information are taken into account simultaneously, so the fused image has better quality.
In order to make the remaining detail information and edge information more complete, on the basis of the above embodiment, in an embodiment of the present invention, after obtaining the first edge image, before performing fusion processing on the first detail image and the first edge image, the method further includes:
carrying out layering processing on the second image and the third image based on a filter to obtain pyramid images of the second image and the third image;
calculating, for each layer of the pyramid image, the difference value of corresponding pixel points in the third image of the previous layer and the second image of the current layer to obtain the second detail image of that layer; and calculating the difference value of corresponding pixel points in the second image and the third image of the current layer to obtain the second edge image of that layer;
the fusing the first detail image and the first edge image comprises:
and carrying out fusion processing on the first detail image and the first edge image and the second detail image and the second edge image of each layer of pyramid image.
In the embodiment of the invention, after the first detail image and the first edge image are obtained, the second image and the third image are subjected to layering processing based on a filter, and pyramid images of the second image and the third image are obtained. The number of layers of the pyramid image may be 8 layers, 10 layers, or the like.
Specifically, the second image and the third image are each filtered with filters, and the filtered images form the next-layer images of the second image and the third image. The filters used on the second image and the third image may be the same or different; preferably, when the filters are set, the two filter windows corresponding to the two images of each layer have the same size while the edge-preservation parameters differ, and the window size and edge-preservation parameters vary from layer to layer. When building the pyramid, the two images obtained after the second image is filtered become the second image and the third image of the next layer; those two images are then filtered in turn to obtain the layer after that, and so on until the number of pyramid layers meets the requirement.
From top to bottom, the pyramid image consists of the first layer, the second layer, …, the nth layer, where n is the number of pyramid layers. Then, for each layer of the pyramid image, the difference value of corresponding pixel points in the third image of the previous layer and the second image of the current layer is calculated to obtain the second detail image of that layer; and the difference value of corresponding pixel points in the second image and the third image of the current layer is calculated to obtain the second edge image of that layer.
Specifically, when the third image of the previous layer is a visible light image, the second image and the third image of the current layer are also visible light images; the image obtained from the difference values of corresponding pixel points in the previous layer's third image and the current layer's second image is called the second visible light detail image, and the image obtained from the difference values of corresponding pixel points in the current layer's second and third images is called the second visible light edge image.

When the third image of the previous layer is an infrared light image, the second image and the third image of the current layer are also infrared light images; the image obtained from the difference values of corresponding pixel points in the previous layer's third image and the current layer's second image is called the second infrared light detail image, and the image obtained from the difference values of corresponding pixel points in the current layer's second and third images is called the second infrared light edge image.
And after the second detail image and the second edge image of each layer of pyramid image are obtained, carrying out fusion processing on the first detail image and the first edge image as well as the second detail image and the second edge image of each layer of pyramid image. Specifically, a first visible light detail image, a first visible light edge image, a first infrared light detail image and a first infrared light edge image, a second visible light detail image, a second visible light edge image, a second infrared light detail image and a second infrared light edge image of each layer of pyramid image are fused. The image after the fusion processing may be obtained by adding pixel values of corresponding pixel points of the first visible light detail image, the first visible light edge image, the first infrared light detail image and the first infrared light edge image, the second visible light detail image of each layer of pyramid image, the second visible light edge image, the second infrared light detail image and the second infrared light edge image.
In the embodiment of the invention, the second image and the third image are layered based on the filter to obtain the pyramid images of the second image and the third image, and then the second detail image and the second edge image of each layer of pyramid image are determined. In the image fusion processing, the first detail image and the first edge image, and the second detail image and the second edge image of each layer of pyramid image are subjected to fusion processing. Therefore, the detail information and the edge information remained in the fused image are more complete.
In order to further improve information after image fusion, on the basis of the foregoing embodiments, in an embodiment of the present invention, the fusing the first detail image and the first edge image, and the second detail image and the second edge image of each layer of pyramid image includes:
and carrying out fusion processing on the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image and the third image of the last layer.
Specifically, a first visible light detail image, a first visible light edge image, a first infrared light detail image and a first infrared light edge image, a second visible light detail image, a second visible light edge image, a second infrared light detail image, a second infrared light edge image, a third visible light image of a last layer and a third infrared light image of a last layer of pyramid image are fused. For example, the image after the fusion processing may be obtained by adding pixel values of corresponding pixel points of a first visible light detail image, a first visible light edge image, a first infrared light detail image and a first infrared light edge image, a second visible light detail image of each layer of pyramid image, a second visible light edge image, a second infrared light detail image, a second infrared light edge image, a third visible light image of a last layer, and a third infrared light image of a last layer.
In the embodiment of the invention, the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image and the third image of the last layer are subjected to fusion processing. By adopting the image fusion scheme provided by the embodiment of the invention, the information contained in the fused image can be complete, and the quality of the fused image is further improved.
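As a sketch of the layered decomposition and the completeness claim above, the code below uses a Gaussian filter as a stand-in for the patent's guided filter and assumes each layer is decomposed from the previous layer's third image (the σ pairs per layer are illustrative). Under that assumption the layers telescope, so summing the last-layer third image (the base) with every detail and edge layer reconstructs the input exactly:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(image, layer_params):
    """layer_params: one (sigma_detail, sigma_edge) pair per pyramid layer.
    gaussian_filter stands in for the guided filter of the patent."""
    details, edges = [], []
    prev_third = image                          # layer 0 "third image" is the input
    for s0, s1 in layer_params:
        second = gaussian_filter(prev_third, s0)
        third = gaussian_filter(prev_third, s1)
        details.append(prev_third - second)     # layer detail image
        edges.append(second - third)            # layer edge image
        prev_third = third
    return details, edges, prev_third           # prev_third is the base layer

img = np.random.rand(64, 64)
details, edges, base = decompose(img, [(1, 2), (2, 4), (4, 8)])
# The decomposition telescopes: base + all details + all edges == input.
recon = base + sum(details) + sum(edges)
```

The exact reconstruction is why fusing all detail layers, all edge layers and the last-layer third image keeps the information in the fused image complete.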
The following describes the processing procedure of image fusion with reference to fig. 2.
In the embodiment of the invention, the pyramid image is determined by a guide filtering method, and the edge preserving characteristic and the smoothing characteristic of the guide filtering are utilized to carry out multi-layer decomposition so as to obtain the texture detail and the edge characteristic of the multi-layer image. As shown in fig. 2, decomposition based on guided filtering is performed on the visible light image and the infrared light image to be fused, and the visible light image and the infrared light image are divided into j layers of visible light images and j layers of infrared light images, so as to obtain multilayer texture details and edge features of the visible light image and the infrared light image, respectively. The process is represented as:
Dj,0 = Bj−1 − GF_{rj, εj,0}(Bj−1)

Dj,1 = GF_{rj, εj,0}(Bj−1) − GF_{rj, εj,1}(Bj−1)

where j = 1, 2, …, n, n is the number of decomposition layers and may range over [1, 10], Dj,0 is the detail image of the current layer, Dj,1 is the edge image of the current layer, Bj−1 is the third image of the previous layer (with B0 the input image), and Bj = GF_{rj, εj,1}(Bj−1). In this method, the visible light image and the infrared light image to be fused are the input original images, and the third image of the last layer, Bn, is additionally specified as the base layer Bv of the entire pyramid image. As shown in fig. 2, after the first detail image and the first edge image, the second detail image and the second edge image of each layer of the pyramid image, and the last-layer third image are obtained, they are fused to obtain the fused image.
In order to improve the image quality after fusion, the images to be fused can themselves be processed, in addition to improving the fusion process itself. Therefore, on the basis of the foregoing embodiments, in an embodiment of the present invention, before the first filter is used to filter the first image to be fused to obtain the second image, the method further includes:
and performing enhancement processing on the first image to be fused, and performing subsequent steps on the enhanced first image.
And performing enhancement processing on the first image to be fused, namely respectively performing enhancement processing on the original visible light image and the original infrared light image.
When the first image to be fused is subjected to enhancement processing, the brightness value of each pixel point in the first image can be increased by a preset brightness threshold value, so that the brightness information of the image is improved.
Preferably, the enhancing the first image to be fused includes:
if the first image is an original visible light image, normalizing the pixel values of the pixel points in the original visible light image to the range [0, 255];
performing guided filtering processing on the normalized visible light image to obtain a first base layer image, and determining a detail layer image and a second base layer image according to the normalized visible light image and the first base layer image;
and determining the visible light image after the enhancement processing according to the detail layer image and the second base layer image.
Fig. 3 is a schematic view of the visible light image enhancement processing flow provided by an embodiment of the present invention. As shown in fig. 3, when the first image is the original visible light image, the dynamic range of the pixel values in the original visible light image may be small, for example [50, 180]. To address the small dynamic range and low contrast of visible light images captured in severe scenes, the enhancement first extends the dynamic range of the input original visible light image I1 by normalizing the pixel values of its pixel points to the range [0, 255], obtaining Img1. Guided filtering is then applied to the normalized visible light image Img1 to obtain the first base layer image Ibase; the process is represented as: Ibase = GF_{r,ε}(Img1), where r and ε are the window size and edge-preservation parameters of the guided filter, respectively.
The detail layer image Idetail is extracted through contrast compression, which makes the high-brightness and low-brightness regions of the visible image tend toward the same linear region. The process is represented as: Idetail = log(Img1 + ξ) − log(Ibase + ξ), where ξ may be 1, 2, etc., ensuring the logarithm is non-negative. log(Ibase + ξ) is denoted the compressed second base layer image Ilog_base.
The compressed second base layer image Ilog_base is contrast-adjusted with a coefficient β, which brightens the dark regions of the image and increases contrast. For visible light under low-illumination night conditions, the detail layer image Idetail is superimposed on the adjusted base layer, and the brightness of the whole picture is then raised by a factor γ to obtain the enhanced initial value R0, where γ may take any value in the range [0, 255] and β is determined from a parameter T that may take any value in the range [1, 20]. Contrast is finally restored through an exponential operation to obtain the enhanced visible light image R.
It should be noted that the dynamic range of the visible light image R after the enhancement processing needs to be limited to [0, 255], that is, the pixel value of the pixel point whose pixel value is in the range of [0, 255] in the visible light image R after the enhancement processing is unchanged, and the pixel value of the pixel point whose pixel value is greater than 255 in the visible light image R after the enhancement processing is updated to 255.
If the original visible light image is a single-channel image, the enhancement of the original visible light image is complete after the above steps. If the original visible light image is a multi-channel image, the enhancement step is performed on the visible light image of each channel, and in the subsequent image fusion process the enhanced visible light image of each channel is processed correspondingly.
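The enhancement flow can be sketched as below. The exact formulas combining β, γ and T appear only as lost figures in the source, so the arrangement here (β scaling the log base, γ as a multiplicative brightness factor, and an exponential to restore contrast) is one plausible reading, with a Gaussian filter standing in for the guided filter and illustrative parameter values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_visible(i1, beta=0.8, gamma=1.2, xi=1.0):
    """Sketch of the visible-light enhancement flow; beta/gamma/xi and the
    gaussian stand-in for the guided filter are assumptions."""
    lo, hi = i1.min(), i1.max()
    img1 = (i1 - lo) / (hi - lo) * 255.0           # normalize to [0, 255]
    i_base = gaussian_filter(img1, sigma=8)        # first base layer image
    i_log_base = np.log(i_base + xi)               # compressed second base layer
    i_detail = np.log(img1 + xi) - i_log_base      # detail layer image
    r = np.exp(beta * i_log_base + i_detail) * gamma - xi   # restore contrast
    return np.clip(r, 0.0, 255.0)                  # limit dynamic range to [0, 255]

out = enhance_visible(np.random.rand(64, 64) * 130 + 50)   # e.g. input range [50, 180]
```

The final clip implements the limitation described above: values already in [0, 255] are unchanged and values above 255 are set to 255.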
The embodiment of the invention provides a visible light image enhancement processing scheme, which solves the problems of small dynamic range and low contrast of an image obtained by a visible light image in a severe scene, improves the dynamic range of the visible light image, recovers the contrast conforming to visual observation, and recovers the hidden abundant details in the scene.
The enhancing processing of the first image to be fused comprises the following steps:
and if the first image is an original infrared light image, determining the infrared light image after enhancement processing according to the pixel value of each pixel point in the infrared light image and the maximum pixel value and the minimum pixel value in the original infrared light image.
If the first image is the original infrared light image I2, the enhanced infrared light image is determined from the pixel value of each pixel point in the infrared light image together with the maximum pixel value and the minimum pixel value in the original infrared light image. The process is represented as:

Q = (I2 − min(I2)) / (max(I2) − min(I2))

where Q is the infrared light image after enhancement processing, max(I2) is the maximum pixel value in the original infrared light image I2, and min(I2) is the minimum pixel value. I2 in the above formula denotes the pixel value of each pixel point in the original infrared light image; applying the formula yields the enhanced pixel value of each pixel point, giving the enhanced infrared light image Q.
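The infrared enhancement is a plain min-max stretch and can be sketched directly:

```python
import numpy as np

def enhance_infrared(i2):
    """Min-max stretch of the original infrared image I2 to [0, 1]."""
    return (i2 - i2.min()) / (i2.max() - i2.min())

q = enhance_infrared(np.array([[10.0, 60.0], [110.0, 210.0]]))
# q == [[0.0, 0.25], [0.5, 1.0]]
```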
The input visible light image Iv shown in fig. 2 may be the original visible light image or, preferably, in the embodiment of the present invention, the enhanced visible light image R. The input infrared light image Ir shown in fig. 2 may be the original infrared light image or, preferably, in the embodiment of the present invention, the enhanced infrared light image Q.
In the embodiment of the invention, the original infrared light image and the original visible light image are firstly enhanced, so that the quality of the infrared light image and the visible light image to be fused is improved, and the infrared light image and the visible light image after enhancement processing are subjected to the subsequent image fusion processing process, so that the quality of the fused image is better.
In order to further improve the quality of the fused image, on the basis of the foregoing embodiments, in an embodiment of the present invention, before performing fusion processing on the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image, and the third image of the last layer, the method further includes:
determining a weight correlation coefficient of each group of corresponding pixel points according to the magnitude relation of pixel values of each group of corresponding pixel points in the same layer of image to be fused; the same layer of image to be fused comprises the same layer of infrared light detail image and visible light detail image, the same layer of infrared light edge image and visible light edge image, and the last layer of infrared light image and visible light image;
performing nonlinear transformation processing on the weight correlation coefficient of each group, and performing low-pass filtering on the nonlinearly transformed weight correlation coefficients to obtain the weight of each group of corresponding pixel points in the same layer of images to be fused;
the fusing the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image, and the last layer of third image comprises:
and according to the weight of each group of corresponding pixel points in the same layer of images to be fused, carrying out fusion processing on the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid images and the third image of the last layer.
In the embodiment of the invention, the weight correlation coefficient of each group of corresponding pixel points can be determined according to the size relationship of the pixel values of each group of corresponding pixel points in the same layer of image to be fused, wherein the same layer of image to be fused comprises the same layer of infrared light detail image and visible light detail image, the same layer of infrared light edge image and visible light edge image, and the last layer of infrared light image and visible light image.
For example, when the difference between the absolute values of the pixel values of the corresponding pixel points of the infrared light image and the visible light image in the same layer of image to be fused is greater than 0, the difference can be used as the weight correlation coefficient of the corresponding pixel point of the group, otherwise, the weight correlation coefficient of the corresponding pixel point of the group is determined to be 0. Or when the difference value of the absolute values of the pixel values of the corresponding pixel points of the infrared light image and the visible light image in the same layer of image to be fused is greater than 0, determining that the weight correlation coefficient of the corresponding pixel point of the group is 1, otherwise, determining that the weight correlation coefficient of the corresponding pixel point of the group is 0.
After the weight correlation coefficient of each group of corresponding pixel points is determined, nonlinear transformation processing is performed on the weight correlation coefficient of each group, wherein an evaluation parameter needs to be input in the process of the nonlinear transformation processing. And then, according to the weight of each group of corresponding pixel points in the image to be fused in the same layer, carrying out fusion processing on the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image and the third image in the last layer.
And after determining the weight of each group of corresponding pixel points according to the scheme, carrying out weighting processing on the pixel values of the pixel points in the infrared light image and the visible light image of the same layer. Specifically, the weight is used as the weight of a corresponding pixel point in the infrared light image of the same layer, the difference value between 1 and the weight is used as the weight of a corresponding pixel point in the visible light image of the same layer, and then weighted calculation is performed according to the weight of each pixel point in the infrared light image and the weight of each pixel point in the visible light image of the same layer, so that the image after fusion processing is obtained.
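The adaptive weighting above can be sketched as follows. The text gives two alternative rules for the weight correlation coefficient; the sketch uses the first (keep the positive difference, zero otherwise), and the tanh nonlinearity is an illustrative choice since the patent's transform is parameterized but not specified here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
ir = rng.random((32, 32))        # same-layer infrared image (stand-in)
vis = rng.random((32, 32))       # same-layer visible image (stand-in)

# Weight correlation coefficient: positive where the infrared is more salient.
coeff = np.abs(ir) - np.abs(vis)
w = np.where(coeff > 0, coeff, 0.0)     # first rule from the text
w = np.tanh(4.0 * w)                    # illustrative nonlinear transform
w = gaussian_filter(w, sigma=2)         # low-pass filtering of the weights
w = np.clip(w, 0.0, 1.0)

# w weights the infrared pixel, (1 - w) the visible pixel.
fused = w * ir + (1.0 - w) * vis
```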
In the embodiment of the invention, the weight of each pixel point is determined in a self-adaptive manner according to the information of the pixel values in the image to be fused in the same layer, and then the image fusion is carried out through weighting processing, so that the fused image better retains the effective contents in the infrared light image and the visible light image to be fused.
In order to further ensure the quality of the fused image, it is necessary to accurately perform the nonlinear transformation processing on the weight correlation coefficient of each group, and therefore, on the basis of the above embodiments, in an embodiment of the present invention, the performing the nonlinear transformation processing on the weight correlation coefficient of each group includes:
determining an energy value of the first image, wherein the energy value of the first image comprises an energy value of an original visible light image to be fused and an energy value of an original infrared light image to be fused;
and determining a nonlinear transformation evaluation parameter according to the ratio of the energy value of the infrared light image to the energy value of the visible light image, and performing nonlinear transformation processing on the weight correlation coefficient of each group according to the nonlinear transformation evaluation parameter.
In order to accurately perform nonlinear transformation processing on the weight correlation coefficient of each group, accurate evaluation parameters need to be determined. Therefore, in the embodiment of the present invention, the energy value of the first image, that is, the energy values of the original infrared light image and the original visible light image to be fused, are determined first. When determining the energy value of the original infrared light image, the sum of squares of the pixel values of each pixel point in the original infrared light image may be used as the energy value of the original infrared light image. When determining the energy value of the original visible light image, the sum of squares of the pixel values of each pixel point in the original visible light image may be used as the energy value of the original visible light image.
And taking the ratio of the energy value of the original infrared light image to the energy value of the original visible light image as a nonlinear transformation evaluation parameter, and then carrying out nonlinear transformation processing on the weight correlation coefficient of each group according to the nonlinear transformation evaluation parameter.
According to the embodiment of the invention, the nonlinear transformation evaluation parameter is determined according to the energy value of the original infrared light image and the energy value of the original visible light image, and the nonlinear transformation evaluation parameter determined by the scheme provided by the embodiment of the invention is more accurate, so that the nonlinear transformation processing on the weight correlation coefficient of each group is more accurate, and the image quality after fusion processing is better.
In order to make the determining the energy value of the first image more accurate, in an embodiment of the present invention, the determining the energy value of the first image comprises:
dividing the first image into a plurality of regions, and performing Fourier transform on each region to obtain a frequency domain value of each pixel point in each region;
determining the sensitivity coefficient of each pixel point according to the frequency domain value of each pixel point and the corresponding sensitivity function;
aiming at each region, determining the energy value of the region according to the sensitivity coefficient of each pixel point in the region; determining the gradient strength of each pixel point in the first image according to an edge detection algorithm, and determining the gradient coefficient of the region according to the gradient strength of each pixel point in the region;
and determining the energy value of the first image according to the energy value and the gradient coefficient of each region and the number of pixel points contained in each region.
In the embodiment of the present invention, the first image is divided into a plurality of regions W1, W2, …, Wm; the regions may all have the same size or differ, for example 40×40 each. Fourier transform is performed on each region to obtain the frequency-domain value Fi(u1, v1), i ∈ [1, m], of each pixel point in each region, where m is the number of regions.

The sensitivity coefficient of each pixel point is determined from its frequency-domain value and the corresponding sensitivity function, implemented as: Ci(u2, v2) = Fi(u1, v1)·Θ(r), i ∈ [1, m], where Θ(r) represents the sensitivity function and r is the radial frequency determined from the frequency-domain coordinates.
For each region, the energy value of the region is determined from the sensitivity coefficients of the pixel points in the region. Specifically, for each region, the energy value Ei equals the sum of the squared magnitudes of the Ci(u2, v2) in the region, the magnitude of each Ci(u2, v2) being the modulus of the complex value, sqrt(Re(Ci)² + Im(Ci)²).
The gradient strength of each pixel point in the first image can be determined with an edge detection algorithm. Specifically, the horizontal gradient G1 and the vertical gradient G2 of each pixel point are determined with the Sobel operator, and for each pixel point the gradient strength is Gx = sqrt(G1² + G2²). For each region, the gradient coefficient of the region is then determined from the gradient strengths of the pixel points in the region.
The energy value PS of the first image is determined from the energy value and gradient coefficient of each region and the number of pixel points contained in each region, where |W| is the number of pixel points in any region. With this scheme, the determined energy value of the first image is more accurate, and the subsequent image fusion processing effect is therefore better.
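A sketch of the block-wise perceptual energy PS follows. Several pieces are not recoverable from the source and are assumptions here: the sensitivity function uses the classic Mannos–Sakrison contrast sensitivity form, the gradient uses simple central differences in place of the full Sobel kernels, the region gradient coefficient is taken as the mean gradient strength, and the per-region terms are combined as Ei·θi/|W| averaged over regions:

```python
import numpy as np

def csf(r):
    """Mannos-Sakrison contrast sensitivity function (an assumption)."""
    return 2.6 * (0.0192 + 0.114 * r) * np.exp(-(0.114 * r) ** 1.1)

def gradient_strength(img):
    """Central-difference stand-in for the Sobel gradient strength Gx."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return np.sqrt(gx ** 2 + gy ** 2)

def perceptual_energy(img, block=8):
    h, w = img.shape
    grad = gradient_strength(img)
    total, n_blocks = 0.0, 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = img[y:y + block, x:x + block]
            f = np.fft.fft2(patch)                         # region Fourier transform
            u, v = np.meshgrid(np.fft.fftfreq(block), np.fft.fftfreq(block))
            r = np.sqrt(u ** 2 + v ** 2) * block           # radial frequency
            c = f * csf(r)                                 # sensitivity coefficients Ci
            e = np.sum(np.abs(c) ** 2)                     # region energy value Ei
            theta = grad[y:y + block, x:x + block].mean()  # region gradient coefficient
            total += e * theta / (block * block)           # |W| = block * block
            n_blocks += 1
    return total / n_blocks

rng = np.random.default_rng(2)
vis, ir = rng.random((32, 32)), rng.random((32, 32))
lam = perceptual_energy(ir) / perceptual_energy(vis)  # evaluation parameter λ
```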
The process of determining the nonlinear transformation evaluation parameter λ from the ratio of the energy value of the original infrared light image to the energy value of the original visible light image is represented as: λ = PS_r / PS_v, where PS_r and PS_v are the energy values of the original infrared light image and the original visible light image, respectively.
The embodiment of the invention provides block-type perception significance evaluation based on a contrast sensitivity function and a gradient, and the weight condition of the weighted sum of an infrared light image and a visible light image in the fusion process is autonomously judged by using a perception significance coefficient, so that the effective contents of the two images to be fused are highlighted in the fused image.
In order to make the image fusion processing process more effective, on the basis of the foregoing embodiments, in an embodiment of the present invention, the fusing, according to the weight of each group of corresponding pixel points in the image to be fused in the same layer, the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image, and the third image in the last layer, includes:
for the first infrared light detail image and the first visible light detail image in the first detail image, and the first infrared light edge image and the first visible light edge image in the first edge image, taking the larger pixel value of each pair of corresponding pixel points as the pixel value of the corresponding pixel point of the fused fourth image;
respectively weighting pixel values of pixel points in the infrared light image and the visible light image of the same layer according to the weight of each group of corresponding pixel points in the image to be fused of the same layer for a second infrared light detail image and a second visible light detail image in a second detail image of each layer of pyramid image, a second infrared light edge image and a second visible light edge image in the second edge image, and a third infrared light image and a third visible light image in a third image of the last layer to obtain a fifth fused image;
and carrying out fusion processing on the fourth image and the fifth image.
In the embodiment of the invention, in order to ensure that the image fusion processing achieves a better effect, different fusion modes are adopted for the images to be fused in different layers. The specific fusion processing is as follows:
determining the fusion weight of the layer by using an absolute value maximum selection method for a first infrared light detail image and a first visible light detail image in the first detail image and a first infrared light edge image and a first visible light edge image in the first edge image respectively, wherein the process is represented as follows:
(Formula given as an image in the original document.)
To prevent noise influence, Gaussian filtering may be used to suppress noise that may exist, and then the first infrared light detail image and the first visible light detail image in the first detail image, and the first infrared light edge image and the first visible light edge image in the first edge image, may be fused, where the process is expressed as:
(Formulas given as images in the original document.)
(i = 0, 1), where the subscript v denotes the visible light image, the subscript r denotes the infrared light image, i = 0 denotes a detail image, and i = 1 denotes an edge image.
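Since the patent's formulas appear only as images in the source, the absolute-value maximum selection rule with Gaussian noise suppression can only be sketched. The function below is an illustrative reading of the text, with `sigma` an assumed smoothing parameter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_abs_max(layer_v, layer_r, sigma=1.0):
    """Fuse a first-layer detail or edge image pair by absolute-value
    maximum selection: where the infrared component has the larger
    magnitude it wins; the binary selection mask is then Gaussian
    filtered to suppress noise, as the text suggests."""
    mask = (np.abs(layer_r) >= np.abs(layer_v)).astype(float)
    mask = gaussian_filter(mask, sigma=sigma)  # soften the hard 0/1 mask
    return mask * layer_r + (1.0 - mask) * layer_v
```

The same function serves both i = 0 (detail) and i = 1 (edge) pairs, since the rule is identical for each.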
And weighting pixel values of pixel points in the infrared light image and the visible light image of the same layer according to the weight of each group of corresponding pixel points in the image to be fused of the same layer, and obtaining a fused fifth image.
Specifically, weight selection is first performed on the important features of the infrared light image by using an absolute-value difference selection method; the process of determining which parts of the infrared information need to be injected into the visible light image is represented as follows:
(Formula given as an image in the original document.)
After the absolute-value weight difference R_j is obtained, it is normalized to [-1, 1]. The portion where R_j > 0 indicates that this part of the infrared image is a significant feature that needs to be injected into the visible light image for fusion. In addition, R_j undergoes a nonlinear transformation followed by a low-pass filtering operation to adjust the weight distribution; the specific operation is as follows:
(Formula given as an image in the original document.)
where S_λ(x) is a nonlinear transformation function and L_F is a low-pass filter, which may be a mean, Gaussian, guided, or other low-pass filter. With λ as the evaluation parameter of the nonlinear transformation function S_λ(x), the weight of each group of corresponding pixel points of the visible light image and the infrared light image is determined. The larger λ is, the more visually salient regions the infrared light image contains, and the pixel weights obtained through S_λ(x) are biased toward the infrared light image; conversely, the smaller λ is, the more the pixel weights are biased toward the visible light image. The function S_λ(x) and the evaluation parameter λ adaptively select the salient information in the infrared light image and the visible light image for weight distribution, so that the fused image retains rich visible light details and all infrared heat source information.
The pixel values of the pixel points in the same-layer infrared light image and visible light image are then weighted to obtain the fused fifth image. The specific operations are as follows:
(Formulas given as images in the original document.)
(j = 2, …, n; i = 0, 1).
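A minimal sketch of this weighting scheme follows, assuming tanh(λx) as the nonlinear transform S_λ(x) and a mean filter as the low-pass filter L_F; the patent's exact S_λ is given only as a formula image, so both choices are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def infrared_weight(layer_v, layer_r, lam=1.0, size=5):
    """Sketch of the weight computation: an absolute-value difference
    map R_j is normalized to [-1, 1], passed through an assumed
    nonlinear transform tanh(lam * x), then low-pass filtered with a
    mean filter to smooth the weight distribution."""
    r = np.abs(layer_r) - np.abs(layer_v)  # absolute-value weight difference R_j
    r = r / (np.max(np.abs(r)) + 1e-12)    # normalize to [-1, 1]
    w = np.tanh(lam * r)                   # assumed nonlinear transform S_lambda
    w = uniform_filter(w, size=size)       # low-pass filtering L_F
    return 0.5 * (w + 1.0)                 # map to [0, 1] for weighting

def fuse_weighted(layer_v, layer_r, w):
    """Weighted sum of same-layer infrared and visible components."""
    return w * layer_r + (1.0 - w) * layer_v
```

Larger `lam` pushes the weight map toward 1 wherever the infrared component dominates, matching the described behavior of the evaluation parameter λ.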
The fourth image and the fifth image are then fused; the specific operations are as follows:
(Formulas given as images in the original document.)
(j = 2, 3, …, n).
in the embodiment of the invention, different fusion modes are adopted for fusing the images to be fused of different layers, so that the image fusion processing process has better effect.
Fig. 4 is a block diagram of an image fusion process according to an embodiment of the present invention. As shown in Fig. 4, the visible light image and the infrared light image to be fused are each subjected to enhancement processing; the enhanced visible light image and infrared light image are then each subjected to layering processing, and the weight coefficients are calculated. Finally, the layered visible light image and infrared light image are weighted and fused according to the calculated weight coefficients to obtain the fused image.
Fig. 5 is a schematic structural diagram of an image fusion apparatus provided in an embodiment of the present invention, where the apparatus includes:
the filtering module 41 is configured to perform filtering processing on the first image to be fused by using a first filter to obtain a second image; filtering the first image by adopting a second filter to obtain a third image; wherein the first filter and the second filter are different;
a calculating module 42, configured to calculate a difference between corresponding pixel points in the first image and the second image, so as to obtain a first detail image; calculating the difference value of corresponding pixel points in the second image and the third image to obtain a first edge image;
a fusion module 43, configured to perform fusion processing on the first detail image and the first edge image; wherein the first image comprises an original visible light image and an original infrared light image.
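The decomposition carried out by the filtering module 41 and the calculating module 42 can be sketched as follows. The patent specifies two different guided filters; Gaussian filters of different widths are substituted here so the sketch runs with SciPy alone, and the sigma values are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(first):
    """Two different filters produce the second and third images;
    their pixel-wise differences give the first detail image and the
    first edge image described in the text."""
    second = gaussian_filter(first, sigma=1.0)  # first filter (finer)
    third = gaussian_filter(first, sigma=4.0)   # second filter (coarser)
    detail = first - second                     # first detail image
    edge = second - third                       # first edge image
    return second, third, detail, edge
```

Note that `detail + edge + third` reconstructs the input exactly, which is what allows the fused layers to be recombined into a full image.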
The device further comprises:
a layering module 44, configured to perform layering processing on the second image and the third image based on a filter to obtain pyramid images of the second image and the third image; calculating the difference value of corresponding pixel points in the upper layer third image and the layer second image aiming at each layer of pyramid image to obtain a layer second detail image; calculating the difference value of corresponding pixel points in the layer of second image and the layer of third image to obtain a layer of second edge image;
the fusion module 43 is specifically configured to perform fusion processing on the first detail image and the first edge image, and the second detail image and the second edge image of each layer of pyramid image.
The fusion module 43 is specifically configured to perform fusion processing on the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image, and the last layer of third image.
The device further comprises:
and the enhancing module 45 is configured to perform enhancement processing on the first image to be fused and trigger the filtering module 41 to process the enhanced first image.
The enhancing module 45 is specifically configured to normalize the pixel values of the pixels in the original visible light image to a range of [0, 255] if the first image is the original visible light image; performing guided filtering processing on the normalized visible light image to obtain a first base layer image, and determining a detail layer image and a second base layer image according to the normalized visible light image and the first base layer image; and determining the visible light image after the enhancement processing according to the detail layer image and the second base layer image.
The enhancing module 45 is specifically configured to, if the first image is an original infrared light image, determine the infrared light image after enhancement processing according to the pixel value of each pixel point in the infrared light image and the maximum pixel value and the minimum pixel value in the original infrared light image.
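Illustrative sketches of both enhancement paths follow. A Gaussian filter stands in for the patent's guided filter, and the 0.8 contrast-compression factor is an assumption, since the patent does not fix these parameters here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_visible(img):
    """Visible-light path: normalize to [0, 255], split into base and
    detail layers, compress the base-layer contrast, and recombine."""
    norm = 255.0 * (img - img.min()) / (img.max() - img.min() + 1e-12)
    base1 = gaussian_filter(norm, sigma=3.0)  # first base layer
    detail = norm - base1                     # detail layer
    base2 = 0.8 * base1                       # contrast-compressed second base layer
    return np.clip(base2 + detail, 0.0, 255.0)

def enhance_infrared(img):
    """Infrared path: stretch each pixel value using the image's own
    minimum and maximum pixel values, as described above."""
    lo, hi = float(img.min()), float(img.max())
    return 255.0 * (img - lo) / (hi - lo + 1e-12)
```

The infrared stretch maps the coldest pixel to 0 and the hottest to (approximately) 255, which expands the dynamic range of low-contrast thermal imagery before fusion.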
The device further comprises:
the determining module 46 is configured to determine a weight correlation coefficient of each group of corresponding pixel points in the image to be fused in the same layer according to a size relationship between pixel values of each group of corresponding pixel points; the same layer of image to be fused comprises the same layer of infrared light detail image and visible light detail image, the same layer of infrared light edge image and visible light edge image, and the last layer of infrared light image and visible light image; carrying out nonlinear transformation processing on the weight correlation coefficient of each group, and carrying out low-pass filtering processing on the weight correlation coefficient after the nonlinear transformation processing to obtain the weight of each group of corresponding pixel points in the image to be fused in the same layer;
the fusion module 43 is specifically configured to perform fusion processing on the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image, and the last layer of third image according to the weight of each group of corresponding pixel points in the same layer of image to be fused.
The determining module 46 is specifically configured to determine an energy value of the first image, where the energy value of the first image includes an energy value of an original visible light image to be fused and an energy value of an original infrared light image to be fused; and determining a nonlinear transformation evaluation parameter according to the ratio of the energy value of the original infrared light image to the energy value of the original visible light image, and performing nonlinear transformation processing on the weight correlation coefficient of each group according to the nonlinear transformation evaluation parameter.
The determining module 46 is specifically configured to divide the first image into a plurality of regions, and perform fourier transform on each region to obtain a frequency domain value of each pixel point in each region; determining the sensitivity coefficient of each pixel point according to the frequency domain value of each pixel point and the corresponding sensitivity function; aiming at each region, determining the energy value of the region according to the sensitivity coefficient of each pixel point in the region; determining the gradient strength of each pixel point in the first image according to an edge detection algorithm, and determining the gradient coefficient of the region according to the gradient strength of each pixel point in the region; and determining the energy value of the first image according to the energy value and the gradient coefficient of each region and the number of pixel points contained in each region.
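A hedged sketch of the block-wise energy computation described for the determining module 46 is given below. The Mannos-Sakrison contrast sensitivity function, the normalized FFT frequencies, and the use of the mean gradient magnitude as the region's gradient coefficient are all assumptions, since the patent's exact sensitivity function and gradient coefficient appear only as formula images:

```python
import numpy as np

def block_energy(img, block=8):
    """Divide the image into blocks, Fourier-transform each block,
    weight each frequency with an (assumed) contrast sensitivity
    function, and scale the block energy by a gradient coefficient
    derived from finite differences; average over all blocks."""
    h, w = img.shape
    gy, gx = np.gradient(img.astype(float))
    grad = np.sqrt(gx ** 2 + gy ** 2)  # edge-detection gradient strength
    total, n_blocks = 0.0, 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            region = img[y:y + block, x:x + block].astype(float)
            spec = np.abs(np.fft.fft2(region))
            fy = np.fft.fftfreq(block)[:, None]
            fx = np.fft.fftfreq(block)[None, :]
            f = np.sqrt(fx ** 2 + fy ** 2)
            # Mannos-Sakrison CSF model (assumed; normalized frequencies)
            csf = 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)
            energy = np.sum((csf * spec) ** 2) / block ** 2
            total += energy * grad[y:y + block, x:x + block].mean()
            n_blocks += 1
    return total / max(n_blocks, 1)
```

The resulting per-image energy value can then be used to form the infrared-to-visible energy ratio from which the evaluation parameter λ is determined.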
The fusion module 43 is specifically configured to respectively use, for a first infrared light detail image and a first visible light detail image in the first detail image, and a first infrared light edge image and a first visible light edge image in the first edge image, a higher pixel value in a corresponding pixel point as a pixel value of a corresponding pixel point of the fused fourth image; respectively weighting pixel values of pixel points in the infrared light image and the visible light image of the same layer according to the weight of each group of corresponding pixel points in the image to be fused of the same layer for a second infrared light detail image and a second visible light detail image in a second detail image of each layer of pyramid image, a second infrared light edge image and a second visible light edge image in the second edge image, and a third infrared light image and a third visible light image in a third image of the last layer to obtain a fifth fused image; and carrying out fusion processing on the fourth image and the fifth image.
On the basis of the foregoing embodiments, an embodiment of the present invention further provides an electronic device, as shown in fig. 6, including: the system comprises a processor 501, a communication interface 502, a memory 503 and a communication bus 504, wherein the processor 501, the communication interface 502 and the memory 503 are communicated with each other through the communication bus 504;
the memory 503 has stored therein a computer program which, when executed by the processor 501, causes the processor 501 to perform the steps of:
filtering the first image to be fused by adopting a first filter to obtain a second image; filtering the first image by adopting a second filter to obtain a third image; wherein the first filter and the second filter are different;
calculating the difference value of corresponding pixel points in the first image and the second image to obtain a first detail image; calculating the difference value of corresponding pixel points in the second image and the third image to obtain a first edge image;
performing fusion processing on the first detail image and the first edge image;
wherein the first image comprises an original visible light image and an original infrared light image.
Based on the same inventive concept, the embodiment of the present invention further provides an electronic device; as the principle by which the electronic device solves the problem is similar to that of the image fusion method, the implementation of the electronic device may refer to the implementation of the method, and repeated details are not described again.
The electronic device provided by the embodiment of the invention can be a desktop computer, a portable computer, a smart phone, a tablet computer, a Personal Digital Assistant (PDA), a network side device and the like.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface 502 is used for communication between the above-described electronic apparatus and other apparatuses.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Alternatively, the memory may be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a central processing unit, a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an application specific integrated circuit, a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like.
When the processor executes the program stored in the memory in the embodiment of the invention, the first image to be fused is filtered by adopting the first filter to obtain the second image; the first image is filtered by adopting the second filter to obtain the third image; wherein the first filter and the second filter are different; the difference value of corresponding pixel points in the first image and the second image is calculated to obtain the first detail image; the difference value of corresponding pixel points in the second image and the third image is calculated to obtain the first edge image; and fusion processing is performed on the first detail image and the first edge image; wherein the first image comprises an original visible light image and an original infrared light image. In the embodiment of the invention, filters with different parameters are adopted to filter the first image to be fused to respectively obtain the second image and the third image, the first detail image is determined according to the difference value of the corresponding pixel points in the first image and the second image, the first edge image is obtained according to the difference value of the corresponding pixel points in the second image and the third image, and then the first detail image and the first edge image are fused. By adopting the image fusion scheme provided by the embodiment of the invention, the image detail information and the edge information can be considered at the same time, so that the fused image has better quality.
The processor is used for carrying out layering processing on the second image and the third image based on a filter to obtain pyramid images of the second image and the third image; calculating the difference value of corresponding pixel points in the upper layer third image and the layer second image aiming at each layer of pyramid image to obtain a layer second detail image; calculating the difference value of corresponding pixel points in the layer of second image and the layer of third image to obtain a layer of second edge image;
and the processor is used for carrying out fusion processing on the first detail image and the first edge image and the second detail image and the second edge image of each layer of pyramid image.
And the processor is used for carrying out fusion processing on the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image and the third image of the last layer.
The processor is used for enhancing the first image to be fused and performing subsequent processing on the enhanced first image.
The processor is used for normalizing the pixel values of the pixel points in the original visible light image to be in a range of [0, 255] if the first image is the original visible light image; performing guided filtering processing on the normalized visible light image to obtain a first base layer image, and determining a detail layer image and a second base layer image according to the normalized visible light image and the first base layer image; and determining the visible light image after the enhancement processing according to the detail layer image and the second base layer image.
And the processor is used for determining the infrared image after enhancement processing according to the pixel value of each pixel point in the infrared image and the maximum pixel value and the minimum pixel value in the original infrared image if the first image is the original infrared image.
The processor is used for determining the weight correlation coefficient of each group of corresponding pixel points according to the size relation of the pixel values of each group of corresponding pixel points in the same layer of image to be fused; the same layer of image to be fused comprises the same layer of infrared light detail image and visible light detail image, the same layer of infrared light edge image and visible light edge image, and the last layer of infrared light image and visible light image; carrying out nonlinear transformation processing on the weight correlation coefficient of each group, and carrying out low-pass filtering processing on the weight correlation coefficient after the nonlinear transformation processing to obtain the weight of each group of corresponding pixel points in the image to be fused in the same layer;
and the processor is used for fusing the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image and the third image of the last layer according to the weight of each group of corresponding pixel points in the same layer of image to be fused.
The processor is used for determining an energy value of the first image, wherein the energy value of the first image comprises an energy value of an original visible light image to be fused and an energy value of an original infrared light image to be fused; and determining a nonlinear transformation evaluation parameter according to the ratio of the energy value of the original infrared light image to the energy value of the original visible light image, and performing nonlinear transformation processing on the weight correlation coefficient of each group according to the nonlinear transformation evaluation parameter.
The processor is used for dividing the first image into a plurality of regions, and performing Fourier transform on each region to obtain a frequency domain value of each pixel point in each region; determining the sensitivity coefficient of each pixel point according to the frequency domain value of each pixel point and the corresponding sensitivity function; aiming at each region, determining the energy value of the region according to the sensitivity coefficient of each pixel point in the region; determining the gradient strength of each pixel point in the first image according to an edge detection algorithm, and determining the gradient coefficient of the region according to the gradient strength of each pixel point in the region; and determining the energy value of the first image according to the energy value and the gradient coefficient of each region and the number of pixel points contained in each region.
The processor is used for respectively taking a higher pixel value in the corresponding pixel point as a pixel value of the corresponding pixel point of the fused fourth image according to the first infrared light detail image and the first visible light detail image in the first detail image and the first infrared light edge image and the first visible light edge image in the first edge image; respectively weighting pixel values of pixel points in the infrared light image and the visible light image of the same layer according to the weight of each group of corresponding pixel points in the image to be fused of the same layer for a second infrared light detail image and a second visible light detail image in a second detail image of each layer of pyramid image, a second infrared light edge image and a second visible light edge image in the second edge image, and a third infrared light image and a third visible light image in a third image of the last layer to obtain a fifth fused image; and carrying out fusion processing on the fourth image and the fifth image.
On the basis of the foregoing embodiments, an embodiment of the present invention further provides a computer-readable storage medium, in which a computer program executable by an electronic device is stored; when the program runs on the electronic device, the electronic device is caused to execute the following steps:
filtering the first image to be fused by adopting a first filter to obtain a second image; filtering the first image by adopting a second filter to obtain a third image; wherein the first filter and the second filter are different;
calculating the difference value of corresponding pixel points in the first image and the second image to obtain a first detail image; calculating the difference value of corresponding pixel points in the second image and the third image to obtain a first edge image;
performing fusion processing on the first detail image and the first edge image; wherein the first image comprises an original visible light image and an original infrared light image.
Based on the same inventive concept, embodiments of the present invention further provide a computer-readable storage medium; since the principle by which a processor solves the problem when executing the computer program stored in the computer-readable storage medium is similar to that of the image fusion method, the implementation of the computer program may refer to the implementation of the method, and repeated details are not described again.
The computer readable storage medium may be any available medium or data storage device that can be accessed by a processor in an electronic device, including but not limited to magnetic memory such as floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc., optical memory such as CDs, DVDs, BDs, HVDs, etc., and semiconductor memory such as ROMs, EPROMs, EEPROMs, non-volatile memory (NAND FLASH), Solid State Disks (SSDs), etc.
The computer program stored in the computer-readable storage medium provided in the embodiment of the invention, when executed by the processor, realizes filtering processing on a first image to be fused by adopting a first filter to obtain a second image; filtering the first image by adopting a second filter to obtain a third image; wherein the first filter and the second filter are different; calculating the difference value of corresponding pixel points in the first image and the second image to obtain a first detail image; calculating the difference value of corresponding pixel points in the second image and the third image to obtain a first edge image; and performing fusion processing on the first detail image and the first edge image; wherein the first image comprises an original visible light image and an original infrared light image. In the embodiment of the invention, filters with different parameters are adopted to filter the first image to be fused to respectively obtain the second image and the third image, the first detail image is determined according to the difference value of the corresponding pixel points in the first image and the second image, the first edge image is obtained according to the difference value of the corresponding pixel points in the second image and the third image, and then the first detail image and the first edge image are fused. By adopting the image fusion scheme provided by the embodiment of the invention, the image detail information and the edge information can be considered at the same time, so that the fused image has better quality.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. An image fusion method, characterized in that the method comprises:
filtering the first image to be fused by adopting a first filter to obtain a second image; filtering the first image by adopting a second filter to obtain a third image; wherein the first filter and the second filter are different, and both the first filter and the second filter are guide filters;
calculating the difference value of corresponding pixel points in the first image and the second image to obtain a first detail image; calculating the difference value of corresponding pixel points in the second image and the third image to obtain a first edge image;
performing fusion processing on the first detail image and the first edge image;
wherein the first image comprises an original visible light image and an original infrared light image;
after the obtaining of the first edge image and before the performing of the fusion processing on the first detail image and the first edge image, the method further includes:
carrying out layering processing on the second image and the third image based on a filter to obtain pyramid images of the second image and the third image;
calculating the difference value of corresponding pixel points in the upper layer third image and the layer second image aiming at each layer of pyramid image to obtain a layer second detail image; calculating the difference value of corresponding pixel points in the layer of second image and the layer of third image to obtain a layer of second edge image;
the fusing the first detail image and the first edge image comprises:
performing fusion processing on the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image and the third image of the last layer;
before the fusing processing is performed on the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image, and the last layer of third image, the method further includes:
determining a weight correlation coefficient of each group of corresponding pixel points according to the magnitude relation of pixel values of each group of corresponding pixel points in the same layer of image to be fused; the same layer of image to be fused comprises the same layer of infrared light detail image and visible light detail image, the same layer of infrared light edge image and visible light edge image, and the last layer of infrared light image and visible light image;
carrying out nonlinear transformation processing on the weight correlation coefficient of each group, and carrying out low-pass filtering processing on the weight correlation coefficient after the nonlinear transformation processing to obtain the weight of each group of corresponding pixel points in the image to be fused in the same layer;
the fusing of the first detail image and the first edge image, the second detail image and the second edge image of each layer of the pyramid image, and the last-layer third image comprises:
for the first infrared light detail image and the first visible light detail image in the first detail image, and the first infrared light edge image and the first visible light edge image in the first edge image, respectively taking the higher pixel value of each pair of corresponding pixel points as the pixel value of the corresponding pixel point in the fused fourth image;
for the second infrared light detail image and the second visible light detail image in the second detail image of each layer of the pyramid image, the second infrared light edge image and the second visible light edge image in the second edge image, and the third infrared light image and the third visible light image in the last-layer third image, respectively weighting the pixel values of the pixel points in the same-layer infrared light image and visible light image according to the weight of each group of corresponding pixel points in the same-layer image to be fused, to obtain a fused fifth image;
and performing fusion processing on the fourth image and the fifth image.
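The weight construction and the two fusion rules of claim 1 can be sketched in numpy. The normalised-ratio correlation coefficient, the gamma curve, the box low-pass filter and the additive combination of the fourth and fifth images are all illustrative assumptions; the claim fixes only the overall structure (coefficient from the magnitude relation, nonlinear transform, low-pass filtering, max rule for the first detail/edge pair, weighted rule for the remaining layers):

```python
import numpy as np

def box_filter(a, r=1):
    """Mean filter over a (2r+1)x(2r+1) window (stand-in low-pass filter)."""
    p = np.pad(a, r, mode='edge')
    out = np.zeros(a.shape, dtype=np.float64)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

def fusion_weight(ir, vis, gamma=2.0):
    """Per-pixel weight of the infrared member of one same-layer pair."""
    ir = np.asarray(ir, dtype=np.float64)
    vis = np.asarray(vis, dtype=np.float64)
    # Weight correlation coefficient from the magnitude relation of the
    # corresponding pixel values (here: a normalised ratio).
    coeff = np.abs(ir) / (np.abs(ir) + np.abs(vis) + 1e-12)
    # Nonlinear transformation (gamma curve), then low-pass filtering.
    return box_filter(coeff ** gamma)

def fuse(ir_d1, vis_d1, ir_e1, vis_e1, layer_pairs):
    """Fourth image by the max rule, fifth image by per-pixel weighting,
    then a final (here: additive) fusion of the two."""
    # Fourth image: the higher pixel value of each corresponding pair in
    # the first detail and first edge images.
    fourth = np.maximum(ir_d1, vis_d1) + np.maximum(ir_e1, vis_e1)
    # Fifth image: weighted sum over the remaining same-layer pairs.
    fifth = sum(w * ir + (1.0 - w) * vis
                for ir, vis, w in ((i, v, fusion_weight(i, v))
                                   for i, v in layer_pairs))
    return fourth + fifth
```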
2. The method of claim 1, wherein before filtering the first image to be fused with the first filter to obtain the second image, the method further comprises:
performing enhancement processing on the first image to be fused, and performing the subsequent steps on the enhanced first image.
3. The method of claim 2, wherein the enhancement processing on the first image to be fused comprises:
if the first image is an original visible light image, normalizing the pixel values of the pixel points in the original visible light image to the range [0, 255];
performing guided filtering processing on the normalized visible light image to obtain a first base layer image, and determining a detail layer image and a second base layer image according to the normalized visible light image and the first base layer image;
determining the enhanced visible light image according to the detail layer image and the second base layer image;
wherein the second base layer image is obtained by performing contrast compression processing on the first base layer image;
and the detail layer image is determined by compressing the contrast of the normalized visible light image and taking the difference image between the contrast-compressed visible light image and the second base layer image.
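A minimal sketch of the claim-3 enhancement. A box filter stands in for the guided filter, and the linear contrast compression about mid-grey (factor `k`) is an illustrative choice; the claim does not fix the compression function:

```python
import numpy as np

def box_filter(a, r=1):
    """Mean filter over a (2r+1)x(2r+1) window (guided-filter stand-in)."""
    p = np.pad(a, r, mode='edge')
    out = np.zeros(a.shape, dtype=np.float64)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

def enhance_visible(img, k=0.7):
    x = np.asarray(img, dtype=np.float64)
    # Normalize pixel values to [0, 255].
    x = (x - x.min()) / (x.max() - x.min() + 1e-12) * 255.0
    # First base layer: smoothing of the normalized image.
    base1 = box_filter(x)
    # Second base layer: contrast-compressed first base layer.
    base2 = 128.0 + k * (base1 - 128.0)
    # Detail layer: contrast-compressed normalized image minus second base.
    detail = (128.0 + k * (x - 128.0)) - base2
    # Enhanced image from the detail layer and the second base layer.
    return np.clip(base2 + detail, 0.0, 255.0)
```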
4. The method of claim 2, wherein the enhancement processing on the first image to be fused comprises:
if the first image is an original infrared light image, determining the enhanced infrared light image according to the pixel value of each pixel point in the infrared light image and the maximum and minimum pixel values in the original infrared light image.
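One plausible reading of claim 4 is a min/max contrast stretch from the original image's extreme pixel values; the exact mapping is not fixed by the claim, so this linear stretch is an assumption:

```python
import numpy as np

def enhance_infrared(ir):
    x = np.asarray(ir, dtype=np.float64)
    lo, hi = x.min(), x.max()
    # Map each pixel value using the image's own minimum and maximum,
    # stretching the result to the full [0, 255] range.
    return (x - lo) / (hi - lo + 1e-12) * 255.0
```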
5. The method of claim 1, wherein the nonlinear transformation processing on the weight correlation coefficients of each group comprises:
determining an energy value of the first image, wherein the energy value of the first image comprises an energy value of the original visible light image to be fused and an energy value of the original infrared light image to be fused;
and determining a nonlinear transformation evaluation parameter according to the ratio of the energy value of the original infrared light image to the energy value of the original visible light image, and performing the nonlinear transformation processing on the weight correlation coefficient of each group according to the nonlinear transformation evaluation parameter.
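Claim 5 fixes only the evaluation parameter (the infrared/visible energy ratio); applying it as the exponent of a power curve is one illustrative way to make the transform depend on that parameter:

```python
import numpy as np

def nonlinear_transform(coeffs, e_infrared, e_visible):
    # Nonlinear transformation evaluation parameter: energy ratio.
    k = e_infrared / (e_visible + 1e-12)
    # Power-curve transform of the clipped coefficients (illustrative;
    # the claim does not specify the transform's functional form).
    return np.clip(np.asarray(coeffs, dtype=np.float64), 0.0, 1.0) ** k
```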
6. The method of claim 5, wherein the determining the energy value of the first image comprises:
dividing the first image into a plurality of regions, and performing Fourier transform on each region to obtain a frequency domain value of each pixel point in each region;
determining the sensitivity coefficient of each pixel point according to the frequency domain value of each pixel point and the corresponding sensitivity function;
for each region, determining the energy value of the region according to the sensitivity coefficient of each pixel point in the region; determining the gradient strength of each pixel point in the first image according to an edge detection algorithm, and determining the gradient coefficient of the region according to the gradient strength of each pixel point in the region;
determining the energy value of the first image according to the energy value and the gradient coefficient of each region and the number of pixel points contained in each region;
determining the sensitivity coefficient of each pixel point according to the product of the frequency domain value of each pixel point and the corresponding contrast sensitivity function;
for each region, determining the energy value E_I of the region according to the sum of squares of the sensitivity coefficient amplitudes of the pixel points in the region;
for each region, determining the gradient coefficient η_1 of the region according to the sum of squares of the gradient strengths of the pixel points in the region; determining the energy value PS of the first image according to the energy value and the gradient coefficient of each region and the number of pixel points contained in each region, expressed as
[formula image FDA0003468469130000041]
wherein |W| is the number of pixel points in any region, and m is the number of pixel points in the first image.
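The claim-6 pipeline (block-wise Fourier transform, contrast-sensitivity weighting, squared-amplitude region energy E_I, squared-gradient coefficient η_1, combination over regions) can be sketched as below. The toy radial contrast-sensitivity function and the way E_I, η_1, |W| and m are combined are assumptions, since the granted claim gives the exact PS formula only as an image not reproduced here:

```python
import numpy as np

def image_energy(img, region=8):
    x = np.asarray(img, dtype=np.float64)
    h, w = x.shape
    m = x.size
    total = 0.0
    for i in range(0, h - region + 1, region):
        for j in range(0, w - region + 1, region):
            blk = x[i:i + region, j:j + region]
            # Frequency-domain value of each pixel point in the region.
            f = np.fft.fft2(blk)
            # Toy radial contrast-sensitivity function over the region.
            fy = np.fft.fftfreq(region)[:, None]
            fx = np.fft.fftfreq(region)[None, :]
            rad = np.hypot(fx, fy)
            csf = rad * np.exp(-4.0 * rad)
            # Sensitivity coefficients: frequency-domain values times CSF.
            s = f * csf
            # Region energy E_I: sum of squared sensitivity amplitudes.
            e_region = np.sum(np.abs(s) ** 2)
            # Gradient coefficient eta_1: sum of squared gradient strengths.
            gy, gx = np.gradient(blk)
            eta = np.sum(gy ** 2 + gx ** 2)
            # Illustrative combination over regions (|W| = blk.size).
            total += e_region * eta * blk.size / m
    return total
```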
7. An image fusion apparatus, characterized in that the apparatus comprises:
the filtering module is used for filtering the first image to be fused by adopting a first filter to obtain a second image; filtering the first image by adopting a second filter to obtain a third image; wherein the first filter and the second filter are different, and both the first filter and the second filter are guide filters;
the calculating module is used for calculating the difference value of corresponding pixel points in the first image and the second image to obtain a first detail image; calculating the difference value of corresponding pixel points in the second image and the third image to obtain a first edge image;
the fusion module is used for carrying out fusion processing on the first detail image and the first edge image; wherein the first image comprises an original visible light image and an original infrared light image;
the device further comprises:
the layering module is used for performing layering processing on the second image and the third image based on a filter to obtain pyramid images of the second image and the third image; for each layer of the pyramid image, calculating the difference values of corresponding pixel points in the upper-layer third image and the current-layer second image to obtain a current-layer second detail image; and calculating the difference values of corresponding pixel points in the current-layer second image and the current-layer third image to obtain a current-layer second edge image;
the fusion module is specifically configured to perform fusion processing on the first detail image and the first edge image, the second detail image and the second edge image of each layer of pyramid image, and the last layer of third image;
the device further comprises:
the determining module is used for determining the weight correlation coefficient of each group of corresponding pixel points according to the magnitude relation of the pixel values of each group of corresponding pixel points in the same-layer image to be fused, wherein the same-layer image to be fused comprises the same-layer infrared light detail image and visible light detail image, the same-layer infrared light edge image and visible light edge image, and the last-layer infrared light image and visible light image; and for performing nonlinear transformation processing on the weight correlation coefficient of each group, and performing low-pass filtering processing on the nonlinearly transformed weight correlation coefficients to obtain the weight of each group of corresponding pixel points in the same-layer image to be fused;
the fusion module is specifically configured to: for the first infrared light detail image and the first visible light detail image in the first detail image, and the first infrared light edge image and the first visible light edge image in the first edge image, respectively take the higher pixel value of each pair of corresponding pixel points as the pixel value of the corresponding pixel point in the fused fourth image; for the second infrared light detail image and the second visible light detail image in the second detail image of each layer of the pyramid image, the second infrared light edge image and the second visible light edge image in the second edge image, and the third infrared light image and the third visible light image in the last-layer third image, respectively weight the pixel values of the pixel points in the same-layer infrared light image and visible light image according to the weight of each group of corresponding pixel points in the same-layer image to be fused, to obtain a fused fifth image; and perform fusion processing on the fourth image and the fifth image.
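The filtering and calculating modules of claim 7 amount to a two-filter decomposition: two guided filters of different strengths produce the second and third images, and their differences produce the detail and edge images. In the sketch below, box filters of two radii stand in for the two guided filters (an assumption made so the example stays self-contained):

```python
import numpy as np

def box_filter(a, r):
    """Mean filter over a (2r+1)x(2r+1) window (guided-filter stand-in)."""
    p = np.pad(a, r, mode='edge')
    out = np.zeros(a.shape, dtype=np.float64)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

def decompose(first, r1=1, r2=3):
    x = np.asarray(first, dtype=np.float64)
    second = box_filter(x, r1)   # finer filter  -> second image
    third = box_filter(x, r2)    # coarser filter -> third image
    detail = x - second          # first detail image
    edge = second - third        # first edge image
    return detail, edge, second, third
```

By construction, detail + edge + third reconstructs the input exactly, which is what lets the later fusion stage recombine the layers without loss.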
8. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1 to 6 when executing a program stored in the memory.
9. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, carries out the method steps of any one of claims 1-6.
CN201910703771.XA 2019-07-31 2019-07-31 Image fusion method and device, electronic equipment and storage medium Active CN110415202B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910703771.XA CN110415202B (en) 2019-07-31 2019-07-31 Image fusion method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910703771.XA CN110415202B (en) 2019-07-31 2019-07-31 Image fusion method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110415202A CN110415202A (en) 2019-11-05
CN110415202B true CN110415202B (en) 2022-04-12

Family

ID=68364969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910703771.XA Active CN110415202B (en) 2019-07-31 2019-07-31 Image fusion method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110415202B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111429381B (en) * 2020-04-10 2023-02-17 展讯通信(上海)有限公司 Image edge enhancement method and device, storage medium and computer equipment
CN111539902B (en) * 2020-04-16 2023-03-28 烟台艾睿光电科技有限公司 Image processing method, system, equipment and computer readable storage medium
CN111489319A (en) * 2020-04-17 2020-08-04 电子科技大学 Infrared image enhancement method based on multi-scale bilateral filtering and visual saliency
CN113902651B (en) * 2021-12-09 2022-02-25 环球数科集团有限公司 Video image quality enhancement system based on deep learning
CN114549382B (en) * 2022-02-21 2023-08-11 北京爱芯科技有限公司 Method and system for fusing infrared image and visible light image

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1545062A (en) * 2003-11-27 2004-11-10 上海交通大学 Pyramid image merging method being integrated with edge and texture information
CN101093580A (en) * 2007-08-29 2007-12-26 华中科技大学 Image interfusion method based on wave transform of not sub sampled contour
CN101109718A (en) * 2006-11-14 2008-01-23 北京国药恒瑞美联信息技术有限公司 Virtual grid imaging method and system used for eliminating influence of scattered radiation
CN101226635A (en) * 2007-12-18 2008-07-23 西安电子科技大学 Multisource image anastomosing method based on comb wave and Laplace tower-shaped decomposition
CN101916436A (en) * 2010-08-30 2010-12-15 武汉大学 Multi-scale spatial projecting and remote sensing image fusing method
CN101968883A (en) * 2010-10-28 2011-02-09 西北工业大学 Method for fusing multi-focus images based on wavelet transform and neighborhood characteristics
CN102096816A (en) * 2011-01-28 2011-06-15 武汉大学 Multi-scale multi-level image segmentation method based on minimum spanning tree
CN102306381A (en) * 2011-06-02 2012-01-04 西安电子科技大学 Method for fusing images based on beamlet and wavelet transform
CN105321172A (en) * 2015-08-31 2016-02-10 哈尔滨工业大学 SAR, infrared and visible light image fusion method
CN105654448A (en) * 2016-03-29 2016-06-08 微梦创科网络科技(中国)有限公司 Image fusion method and system based on bilateral filter and weight reconstruction
US9892361B2 (en) * 2015-01-21 2018-02-13 Siemens Healthcare Gmbh Method and system for cross-domain synthesis of medical images using contextual deep network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Image Fusion Techniques: A Review; Dhirendra Mishra et al.; International Journal of Computer Applications; 2015-11-30; full text *
LoG and DoG edge detection; 我是一片小树叶; CSDN; 2019-07-01; pp. 1-2 *
Infrared and visible light image fusion combining target extraction and compressed sensing; Wang Xin et al.; Optics and Precision Engineering; 2016-07-31; full text *


Similar Documents

Publication Publication Date Title
CN110415202B (en) Image fusion method and device, electronic equipment and storage medium
CN106780392B (en) Image fusion method and device
Parihar et al. A study on Retinex based method for image enhancement
CN107408296B Method and system for real-time noise removal and image enhancement of high dynamic range images
Li et al. Single image haze removal using content‐adaptive dark channel and post enhancement
Ling et al. Adaptive extended piecewise histogram equalisation for dark image enhancement
US9349170B1 (en) Single image contrast enhancement method using the adaptive wiener filter
Li et al. A multi-scale fusion scheme based on haze-relevant features for single image dehazing
CN111080538A (en) Infrared fusion edge enhancement method
Park et al. Generation of high dynamic range illumination from a single image for the enhancement of undesirably illuminated images
Li et al. Movement detection for the synthesis of high dynamic range images
Lim et al. Robust contrast enhancement of noisy low-light images: Denoising-enhancement-completion
CN103607589A (en) Level selection visual attention mechanism-based image JND threshold calculating method in pixel domain
Chen et al. Improve transmission by designing filters for image dehazing
CN106454140B Information processing method and electronic device
CN110211082A Image fusion method and apparatus, electronic device and storage medium
Zou et al. Image haze removal algorithm using a logarithmic guide filtering and multi-channel prior
Agrawal et al. A joint cumulative distribution function and gradient fusion based method for dehazing of long shot hazy images
Parthasarathy et al. Fusion based multi scale RETINEX with color restoration for image enhancement
CN117218026A (en) Infrared image enhancement method and device
Gao et al. Single image haze removal algorithm using pixel-based airlight constraints
CN107945137B (en) Face detection method, electronic device and storage medium
CN110298807A Infrared image enhancement method in the NSCT domain based on improved Retinex and quantum flora algorithm
Liu et al. Infrared and visible image fusion for shipborne electro-optical pod in maritime environment
Moumene et al. Generalized exposure fusion weights estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant