CN116029954A - Image fusion method and device - Google Patents

Image fusion method and device

Info

Publication number: CN116029954A
Application number: CN202310064964.1A
Authority: CN (China)
Family ID: 86070580
Inventor: 包志强
Applicant and current assignee: Xi'an Ray Vision Electronics Technology Co ltd
Filing date: 2023-02-06
Publication date: 2023-04-28
Legal status: Withdrawn after publication

Abstract

The invention provides an image fusion method and device, relating to the technical field of image processing, comprising the following steps: acquiring original images to be fused, wherein the original images form a multi-sensor image sequence or a multi-exposure image sequence; performing grayscale conversion on the original images to obtain grayscale images; convolving each grayscale image with a high-pass filter to obtain the image details, and calculating the image detail weights; performing overexposure processing on the grayscale images; calculating the main body fusion weight of each overexposure-processed image by using a convolution filter; combining the image detail weights and the main body fusion weights to obtain the image fusion weights; and performing weighted fusion of the original images with the image fusion weights to obtain the final fused image. Addressing the complexity of weight calculation in pixel-level image fusion algorithms, the proposed algorithm requires no iterative filtering, which effectively reduces complexity and improves the processing performance of the hardware, thereby effectively reducing latency and achieving real-time image fusion.

Description

Image fusion method and device
Technical Field
The invention relates to the technical field of image processing, in particular to an image fusion method and device.
Background
With the development of modern imaging technology, the cooperative use of multiple imaging modes is increasingly common. Cooperative processing of multi-source (or multi-exposure) image data can integrate the information and characteristics of multiple images into a single, information-rich fused image, so that the imaging scene can be described more accurately and comprehensively. Image fusion has wide application in fields such as military applications, remote sensing, public safety, traffic supervision, and medical imaging.
Modern imaging technology is one of the most important means of information acquisition for humans, and the acquisition of image information through imaging covers almost all aspects of human social activity. Modern imaging modes are rich and varied: imaging bands cover visible light, millimeter waves, infrared, X-rays, and others, and the imaging modes and principles differ. Images obtained through different imaging means describe an imaging object from different angles, the ultimate purpose being to reveal different aspects of an object's properties so that it can be understood and interpreted more comprehensively. For example, in remote sensing, a low-resolution multispectral (MS) image and a high-resolution panchromatic (PAN) image are combined to obtain a fused image that contains the spectral content of the multispectral image with enhanced spatial resolution. In medical imaging, data from multiple modalities are fused to make analysis more reliable and accurate. In monitoring applications, images from multiple frequency bands, such as infrared and visible images, may be fused. In smart camera applications, images with different exposure times can be fused to enhance image quality.
Image fusion techniques include spatial-domain, transform-domain, and deep-learning-based fusion methods. Spatial-domain methods are simpler and more direct than transform-domain and deep-learning-based methods, and are convenient to implement.
Transform-domain methods generally consist of three stages: image transformation, coefficient fusion, and inverse transformation. Compared with spatial-domain approaches, their most distinctive feature is the inverse-transformation stage, in which the fused image is reconstructed. Depending on the transformation used, transform-domain methods can be further divided into methods based on multi-scale decomposition, on the gradient domain, on sparse representation, and on other transforms. Transform-domain methods must perform both the transformation and the inverse transformation of the image, which is difficult to deploy and implement on embedded devices with limited computing power.
In recent years, deep-learning-based methods have become a very active direction in the field of image fusion. Neural networks with deep architectures have been widely demonstrated to have strong feature representation capabilities and are very useful in a variety of image and vision tasks, including image fusion. Currently, deep learning models such as convolutional neural networks (CNNs) and generative adversarial networks (GANs) have been successfully applied to image fusion. Depending on the model employed, deep-learning-based methods can be further classified into supervised and unsupervised methods. However, deep-learning-based methods require more complex computation and need a neural-network acceleration unit for real-time processing, so the software and hardware requirements are very high, the implementation cost is high, and they are not well suited to embedded devices.
Spatial-domain methods use certain spatial characteristics to fuse the input source images directly in the spatial domain according to specific rules. In processing, a weight map is generated for each input image, and the fused image is taken as the weighted average of all input images. Spatial-domain methods are simple and direct, convenient for engineering implementation, and favored by engineers; however, to obtain good results, spatial-domain methods typically compute the fusion weights by iterative filtering, and iterative filtering is not conducive to real-time image processing.
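To make the weighted-average rule above concrete, here is a minimal Python sketch (using NumPy; the function name and array layout are illustrative, not from the patent) of spatial-domain fusion given precomputed per-image weight maps:

```python
import numpy as np

def weighted_average_fusion(images, weights):
    """Fuse a sequence of images as a per-pixel weighted average.

    images:  list of N arrays of shape (H, W), the input image sequence
    weights: list of N arrays of shape (H, W), per-pixel weight maps
    """
    images = np.stack(images).astype(np.float64)    # (N, H, W)
    weights = np.stack(weights).astype(np.float64)  # (N, H, W)
    # Normalize the weights so they sum to 1 at every pixel.
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    return (weights * images).sum(axis=0)
```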
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides an image fusion method.
In order to achieve the above object, the present invention provides the following technical solutions:
an image fusion method, comprising:
acquiring an original image to be fused, wherein the original image is a multi-sensor image sequence or a multi-exposure image sequence;
performing grayscale conversion on the original image to obtain a grayscale image;
convolving the grayscale image with a high-pass filter to obtain the image details, and calculating the image detail weights;
performing overexposure processing on the grayscale image;
calculating the main body fusion weight of the overexposure-processed image by using a convolution filter;
combining the image detail weights and the main body fusion weights to obtain the image fusion weights;
and performing weighted fusion of the original image with the image fusion weights to obtain the final fused image.
Preferably, the original image is acquired through an image sensor or a camera, and is read out through an FPGA or an ARM processor to obtain the input image.
Preferably, grayscale processing is performed on the original image O(x, y), and the obtained grayscale image I(x, y) is:
I(x,y)=0.299×R+0.587×G+0.114×B
wherein R, G, B are the three color channel values of the image; x represents the row coordinate of the image matrix and y represents the column coordinate of the image matrix.
Preferably, the grayscale image is convolved with a high-pass filter to obtain the image details W_e(x, y), and the image detail weights Ŵ_en(x, y) are calculated, specifically:
W_e(x,y)=I(x,y)*h(x,y)
where h(x, y) is a high-pass filter and n represents the index of the image in the sequence. [The kernel of h(x, y) and the formula defining Ŵ_en(x, y) are given only as equation images in the original publication.]
Preferably, the overexposure processing is performed on the grayscale image, specifically:
[the overexposure mapping that produces I_dn(x, y) is given only as an equation image in the original publication]
wherein I_dn(x, y) is the nth image after overexposure processing, I_n(x, y) is the grayscale image of the nth image, α is a control factor, and T is a constant with 0 ≤ T ≤ 128.
Preferably, the main body fusion weight W_mn(x, y) of the overexposure-processed image is calculated by using a convolution filter, specifically:
W_mn(x,y)=I_dn(x,y)*H(x,y)
wherein I_dn(x, y) is the nth image after overexposure processing, and H(x, y) is a convolution filter:
H(x, y) = (1/(2πσ²)) · e^(−(x² + y²)/(2σ²))
where σ is the variance, e is the base of the natural exponential, x is the row coordinate of the image matrix, and y is the column coordinate of the image matrix.
Preferably, the method further comprises normalizing the main body fusion weight W_mn(x, y); the normalized main body fusion weight is:
Ŵ_mn(x, y) = W_mn(x, y) / Σ_{n=1..N} W_mn(x, y)
Preferably, the image detail weights and the main body fusion weights are combined to obtain the image fusion weight W_n(x, y) [the combination formula is given only as an equation image in the original publication]; the image fusion weight is then normalized to obtain:
Ŵ_n(x, y) = W_n(x, y) / Σ_{n=1..N} W_n(x, y)
Preferably, the weighted fusion of the original image with the image fusion weights to obtain the final fused image is:
F(x, y) = Σ_{n=1..N} Ŵ_n(x, y) × O_n(x, y)
another object of the present invention is to provide an image fusion apparatus including:
the image acquisition module is used for acquiring an original image to be fused, wherein the original image is a multi-sensor image sequence or a multi-exposure image sequence;
the image preprocessing module is used for performing grayscale conversion on the original image to obtain a grayscale image;
the image detail processing module is used for convolving the grayscale image with a high-pass filter to obtain the image details and calculating the image detail weights;
the exposure processing module is used for performing overexposure processing on the grayscale image;
the main body fusion weight calculation module is used for calculating the main body fusion weight of the overexposure-processed image by using a convolution filter;
the image fusion weight calculation module is used for combining the image detail weights and the main body fusion weights to obtain the image fusion weights;
and the image fusion module is used for performing weighted fusion of the original image with the image fusion weights to obtain the final fused image.
The image fusion method and device provided by the invention have the following beneficial effects:
the method provided by the invention aims at a multi-sensor image sequence or a multi-exposure image sequence, obtains the image detail weight through a high-pass filter, obtains the main body fusion weight of the image through convolution filtering, then combines the image detail weight and the main body fusion weight to obtain the image fusion weight without calculating iterative filtering, thereby avoiding the calculation times of the iterative filter, reducing the complexity of image fusion, improving the operation processing performance of hardware, effectively reducing the time delay, achieving the real-time processing of image fusion and realizing the real-time fusion algorithm. Meanwhile, compared with a fusion algorithm based on a pixel level, the image fusion algorithm provided by the invention does not need to calculate fine weights in an iterative manner, so that the complexity is obviously reduced, and the real-time processing can be better performed.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the design thereof, the drawings required for the embodiments will be briefly described below. The drawings in the following description are only some of the embodiments of the present invention and other drawings may be made by those skilled in the art without the exercise of inventive faculty.
Fig. 1 is a flowchart of an image fusion method according to embodiment 1 of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the drawings and the embodiments, so that those skilled in the art can better understand the technical scheme of the present invention and can implement the same. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
Example 1
The invention provides an image fusion method, which is specifically shown in fig. 1 and comprises the following steps:
step one: the original image O (x, y) to be fused is acquired, the original image being a color image and being a multi-sensor/multi-exposure image sequence.
Specifically, in this embodiment, the original image is collected by an image sensor or a camera, and the input image is obtained by reading the camera through an FPGA or an ARM processor; after front-end image processing, the multi-sensor or multi-exposure images are spatially registered and then used as input.
"Multi-sensor" refers to images acquired by visible-light, infrared, or other multispectral sensors; the inputs may also be images acquired by a single visible-light sensor at different exposure times. The image sequence refers to the sensor images or exposure images obtained above.
Step two: perform grayscale conversion on the original image O(x, y) to obtain the grayscale image I(x, y), i.e., convert the color image to grayscale; if the input is an infrared image, it needs to be converted into an 8-bit grayscale map.
Here I(x, y) is the grayscale image of the original image and can be calculated by:
I(x,y)=0.299×R+0.587×G+0.114×B    (1)
where R, G, B are the three color channel values of the image, x represents the row coordinate of the image matrix, and y represents the column coordinate of the image matrix.
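As a concrete illustration of equation (1), the following minimal Python sketch (using NumPy; the function name and array layout are our own, not part of the patent) converts an RGB color image to an 8-bit grayscale image:

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an (H, W, 3) RGB image to 8-bit grayscale per equation (1)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    return np.clip(gray, 0, 255).astype(np.uint8)
```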
Step three: calculating the detail weight of the image; x represents the row coordinates of the image matrix and y represents the column coordinates of the image matrix.
The image details W_e(x, y) can be obtained by convolving the grayscale image with a high-pass filter:
W_e(x,y)=I(x,y)*h(x,y)    (2)
where h(x, y) is a high-pass filter whose kernel (3) is given only as an equation image in the original publication.
Once the image details are obtained, the detail part is further processed by calculating the image detail weight Ŵ_en(x, y); its defining formula (4) is likewise given only as an equation image, where n represents the index of the image in the sequence.
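A minimal sketch of this step in Python with NumPy and SciPy follows. Since the patent's kernel (3) and weight formula (4) survive only as images, the 3×3 Laplacian kernel and the magnitude-based normalization below are assumptions, not the patent's exact definitions:

```python
import numpy as np
from scipy.ndimage import convolve

# Assumed high-pass kernel; the patent's actual kernel (3) is not recoverable.
H_PASS = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=np.float64)

def detail_weights(grays):
    """grays: list of (H, W) float arrays, one per image in the sequence.

    Returns per-image detail weight maps: one plausible reading of (4),
    namely normalized absolute high-pass responses.
    """
    details = [convolve(g.astype(np.float64), H_PASS, mode="nearest") for g in grays]
    mags = np.stack([np.abs(d) for d in details])          # (N, H, W)
    return mags / (mags.sum(axis=0, keepdims=True) + 1e-12)
```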
Step four: and performing overexposure processing on the gray level image.
The luminance value of a pixel determines whether the pixel is overexposed or not, and in order to calculate the main body fusion weight, the excessive influence of the weight caused by overexposure needs to be removed from the gray image.
Figure BDA0004061829180000064
Wherein I is dn (x, y) is the nth image after overexposure processing, I n (x, y) is the gray level image of the nth image, alpha is the control factor, and is selected to be 0.8 in the invention; t is greater thanEqual to 0, a constant less than 128.
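Because the defining expression (5) survives only as an image, the sketch below implements one plausible reading consistent with the surrounding text: pixels whose gray value exceeds an upper threshold controlled by T are attenuated by the factor α before the weight computation. The threshold form 255 − T is our assumption:

```python
import numpy as np

def suppress_overexposure(gray, alpha=0.8, T=64):
    """One plausible reading of the patent's overexposure processing (5).

    gray:  (H, W) float array in [0, 255]
    alpha: control factor (0.8 in the patent)
    T:     constant with 0 <= T <= 128; 255 - T is an assumed threshold
    """
    out = gray.astype(np.float64).copy()
    over = out > (255.0 - T)        # assumed overexposure criterion
    out[over] = alpha * out[over]   # attenuate overexposed pixels
    return out
```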
Step five: and calculating the main body fusion weight of the image after the overexposure processing by using a convolution filter.
The main body fusion weight of the image needs to be obtained through a convolution filter. After the filter, the main weight W of the multi-sensor/multi-exposure image sequence can be obtained mn (x, y) is as in formula (6):
W mn (x,y)=I dn (x,y)*H(x,y) (6)
wherein I is dn (x, y) is the nth image after overexposure, and H (x, y) is a convolution filter.
Figure BDA0004061829180000071
Where σ is the variance, e is the natural index, x is the row coordinates of the image matrix, and y is the column coordinates of the image matrix.
And weight the main body W mn And (x, y) normalization to obtain:
Figure BDA0004061829180000072
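A sketch of step five follows, assuming equation (7) is the standard 2-D Gaussian (consistent with the description of σ and e) and equation (8) is per-pixel normalization over the N images; the kernel radius is our own choice:

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(radius=2, sigma=1.0):
    """Sampled 2-D Gaussian H(x, y) per equation (7), normalized to sum to 1."""
    ax = np.arange(-radius, radius + 1, dtype=np.float64)
    xx, yy = np.meshgrid(ax, ax, indexing="ij")
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return k / k.sum()

def main_body_weights(deglared):
    """deglared: list of (H, W) overexposure-processed images I_dn.

    Returns normalized main body fusion weights per equations (6) and (8).
    """
    H = gaussian_kernel()
    w = np.stack([convolve(img.astype(np.float64), H, mode="nearest")
                  for img in deglared])                    # (N, H, W)
    return w / (w.sum(axis=0, keepdims=True) + 1e-12)
```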
step six: and (5) calculating the fusion weight of the image.
Specifically, combining the image detail weight and the main body fusion weight to obtain an image fusion weight
Figure BDA0004061829180000073
Figure BDA0004061829180000074
And then normalizing, and finally carrying out weighted fusion of the HDR images.
Figure BDA0004061829180000075
Step seven: and carrying out weighted fusion on the original image sequence to obtain a final fusion image.
Figure BDA0004061829180000076
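Putting the pieces together, the following sketch combines the two weight maps and performs the final weighted fusion. The combination rule in (9) survives only as an image; the element-wise product below is one common choice for merging detail and body weights, not the patent's confirmed formula:

```python
import numpy as np

def fuse_sequence(originals, detail_w, body_w):
    """originals: list of N (H, W) or (H, W, 3) source images O_n.
    detail_w, body_w: (N, H, W) weight maps from the previous steps.
    """
    w = detail_w * body_w                                  # assumed combination (9)
    w = w / (w.sum(axis=0, keepdims=True) + 1e-12)         # normalization (10)
    imgs = np.stack([o.astype(np.float64) for o in originals])
    if imgs.ndim == 4:                                     # color inputs: broadcast weights
        w = w[..., None]
    fused = (w * imgs).sum(axis=0)                         # weighted fusion (11)
    return np.clip(fused, 0, 255).astype(np.uint8)
```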
Based on the same inventive concept, the invention also provides an image fusion device which comprises an image acquisition module, an image preprocessing module, an image detail processing module, an exposure processing module, a main fusion weight calculation module, an image fusion weight calculation module and an image fusion module.
Specifically, the image acquisition module is used for acquiring an original image to be fused, wherein the original image is a multi-sensor image sequence or a multi-exposure image sequence; the image preprocessing module is used for performing grayscale conversion on the original image to obtain a grayscale image; the image detail processing module is used for convolving the grayscale image with a high-pass filter to obtain the image details and calculating the image detail weights; the exposure processing module is used for performing overexposure processing on the grayscale image; the main body fusion weight calculation module is used for calculating the main body fusion weight of the overexposure-processed image by using a convolution filter; the image fusion weight calculation module is used for combining the image detail weights and the main body fusion weights to obtain the image fusion weights; and the image fusion module is used for performing weighted fusion of the original image with the image fusion weights to obtain the final fused image.
Addressing the complexity of weight calculation in pixel-level image fusion algorithms, for a multi-sensor image sequence or a multi-exposure image sequence the proposed algorithm obtains the image detail weights through a high-pass filter and the main body fusion weights through convolution filtering, then combines the two to obtain the image fusion weights without any iterative filtering. This avoids the repeated passes of an iterative filter, reduces the complexity of image fusion, improves the processing performance of the hardware, effectively reduces latency, and achieves real-time image fusion. Meanwhile, compared with pixel-level fusion algorithms, the proposed algorithm does not need to compute fine-grained weights iteratively, so its complexity is significantly reduced and FPGA hardware can better perform real-time processing.
The above embodiments are merely preferred embodiments of the present invention, the protection scope of the present invention is not limited thereto, and any simple changes or equivalent substitutions of technical solutions that can be obviously obtained by those skilled in the art within the technical scope of the present invention disclosed in the present invention belong to the protection scope of the present invention.

Claims (10)

1. An image fusion method, comprising:
acquiring an original image to be fused, wherein the original image is a multi-sensor image sequence or a multi-exposure image sequence;
performing grayscale conversion on the original image to obtain a grayscale image;
convolving the grayscale image with a high-pass filter to obtain the image details, and calculating the image detail weights;
performing overexposure processing on the grayscale image;
calculating the main body fusion weight of the overexposure-processed image by using a convolution filter;
combining the image detail weights and the main body fusion weights to obtain the image fusion weights;
and performing weighted fusion of the original image with the image fusion weights to obtain the final fused image.
2. The image fusion method of claim 1, wherein the raw image is acquired by an image sensor or a camera.
3. The image fusion method according to claim 1, wherein grayscale processing is performed on the original image O(x, y), and the obtained grayscale image I(x, y) is:
I(x,y)=0.299×R+0.587×G+0.114×B
wherein R, G, B are the three color channel values of the image; x represents the row coordinate of the image matrix and y represents the column coordinate of the image matrix.
4. The method of claim 3, wherein the image details W_e(x, y) are obtained by convolving the grayscale image with a high-pass filter, and the image detail weights Ŵ_en(x, y) are calculated, specifically:
W_e(x,y)=I(x,y)*h(x,y)
where h(x, y) is a high-pass filter taking one of two kernel forms, both given only as equation images in the original publication; the formula defining Ŵ_en(x, y) is likewise given only as an equation image, where n represents the index of the image in the sequence.
5. The image fusion method according to claim 4, wherein the overexposure processing is performed on the grayscale image, specifically:
[the overexposure mapping that produces I_dn(x, y) is given only as an equation image in the original publication]
wherein I_dn(x, y) is the nth image after overexposure processing, I_n(x, y) is the grayscale image of the nth image, α is a control factor, and T is a constant with 0 ≤ T ≤ 128.
6. The image fusion method according to claim 5, wherein the main body fusion weight W_mn(x, y) of the overexposure-processed image is calculated using a convolution filter, specifically:
W_mn(x,y)=I_dn(x,y)*H(x,y)
wherein I_dn(x, y) is the nth image after overexposure processing, and H(x, y) is a convolution filter:
H(x, y) = (1/(2πσ²)) · e^(−(x² + y²)/(2σ²))
where σ is the variance, e is the base of the natural exponential, x is the row coordinate of the image matrix, and y is the column coordinate of the image matrix.
7. The image fusion method of claim 6, further comprising normalizing the main body fusion weight W_mn(x, y); the normalized main body fusion weight is:
Ŵ_mn(x, y) = W_mn(x, y) / Σ_{n=1..N} W_mn(x, y)
8. The image fusion method of claim 7, wherein the image detail weights and the main body fusion weights are combined to obtain the image fusion weight W_n(x, y) [the combination formula is given only as an equation image in the original publication]; the image fusion weight is then normalized to obtain:
Ŵ_n(x, y) = W_n(x, y) / Σ_{n=1..N} W_n(x, y)
9. The image fusion method according to claim 8, wherein the weighted fusion of the original image by the image fusion weights to obtain the final fused image is:
F(x, y) = Σ_{n=1..N} Ŵ_n(x, y) × O_n(x, y)
10. an image fusion apparatus, comprising:
the image acquisition module is used for acquiring an original image to be fused, wherein the original image is a multi-sensor image sequence or a multi-exposure image sequence;
the image preprocessing module is used for performing grayscale conversion on the original image to obtain a grayscale image;
the image detail processing module is used for convolving the grayscale image with a high-pass filter to obtain the image details and calculating the image detail weights;
the exposure processing module is used for performing overexposure processing on the grayscale image;
the main body fusion weight calculation module is used for calculating main body fusion weights of the image subjected to overexposure processing by using a convolution filter;
the image fusion weight calculation module is used for combining the image detail weight and the main body fusion weight to obtain an image fusion weight;
and the image fusion module is used for carrying out weighted fusion on the original image through the image fusion weight to obtain a final fusion image.
Cited By (2)

* Cited by examiner, † Cited by third party
Publication number: Priority date: Publication date: Assignee: Title
CN116528058A * 2023-05-26 2023-08-01 中国人民解放军战略支援部队航天工程大学 High dynamic imaging method and system based on compression reconstruction
CN116528058B * 2023-05-26 2023-10-31 中国人民解放军战略支援部队航天工程大学 High dynamic imaging method and system based on compression reconstruction


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20230428)