CN116029954A - Image fusion method and device - Google Patents
- Publication number
- CN116029954A (application CN202310064964.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- fusion
- weight
- gray
- processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Abstract
The invention provides an image fusion method and device in the technical field of image processing, comprising the following steps: acquiring original images to be fused, where the original images form a multi-sensor image sequence or a multi-exposure image sequence; performing graying processing on the original images to obtain gray images; convolving the gray images with a high-pass filter to obtain image details and calculating the image detail weights; performing overexposure processing on the gray images; calculating the main body fusion weights of the overexposure-processed images with a convolution filter; combining the image detail weights and the main body fusion weights to obtain the image fusion weights; and performing weighted fusion of the original images with the image fusion weights to obtain the final fused image. Addressing the high weight-computation complexity of pixel-level image fusion algorithms, the proposed algorithm requires no iterative filtering, which effectively reduces complexity and improves hardware processing performance, thereby reducing latency and achieving real-time image fusion.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an image fusion method and device.
Background
With the development of modern imaging technology, the combined use of multiple imaging modes is increasingly common. Cooperative processing of multi-source (multi-exposure) image data can integrate the information and characteristics of multiple images to obtain a single, information-rich fused image, so that the imaged scene can be described more accurately and comprehensively. Image fusion therefore has wide application in fields such as military applications, remote sensing imaging, public safety, traffic supervision, and medical imaging.
Modern imaging technology is one of the most important means of information acquisition for humans, and image acquisition through imaging technology covers almost all aspects of human social activity. Modern imaging modes are rich and varied: imaging bands cover visible light, millimeter waves, infrared, X-rays, and others, and the imaging modes and principles differ, yet images obtained by different imaging means describe the imaged object from different angles, with the shared goal of revealing different aspects of its properties so that it can be understood and explained more comprehensively. For example, in remote sensing, a low-resolution multispectral (MS) image and a high-resolution panchromatic (PAN) image are fused to obtain an image that retains the spectral content of the multispectral image with enhanced spatial resolution; in medical imaging, data from multiple modalities are fused to make analysis more reliable and accurate; in surveillance applications, images from multiple frequency bands, such as infrared and visible images, may be fused; and in smart-camera applications, images with different exposure times may be fused to enhance image quality.
Image fusion techniques include spatial-domain, transform-domain, and deep-learning fusion methods; spatial-domain methods are simpler and more direct than transform-domain and deep-learning-based methods, and are convenient to implement.
A transform-domain method generally consists of three stages: image transformation, coefficient fusion, and inverse transformation. Compared with spatial-domain approaches, its most distinctive feature is the inverse-transformation stage that reconstructs the fused image. Depending on the transformation used, transform-domain methods can be further divided into methods based on multi-scale decomposition, on the gradient domain, on sparse representation, and on other transforms. Because transform-domain methods must perform both the forward and inverse transformations of the image, they are difficult to deploy on embedded devices with limited computational power.
In recent years, deep-learning-based methods have become a very active direction in the field of image fusion. Neural networks with deep architectures have been widely demonstrated to have strong feature-representation capabilities and are useful in a variety of image and vision tasks, including image fusion. Currently, deep learning models such as convolutional neural networks (CNNs) and generative adversarial networks (GANs) have been successfully applied to image fusion. Depending on the model employed, deep-learning-based methods can be further classified as supervised or unsupervised. However, deep-learning-based methods require more complex computation and need a neural-network acceleration unit for real-time processing, so the software and hardware requirements are very high, the implementation cost is high, and they are ill-suited to embedded devices.
A spatial-domain method uses certain spatial characteristics to fuse the input source images directly in the spatial domain according to specific rules. In processing, a weight map is generated for each input image, and the fused image is taken as the weighted average of all input images. Spatial-domain methods are simple and direct, convenient to deploy, and favored by engineers; however, existing spatial-domain methods need to compute the fusion weights with, for example, iterative filtering in order to obtain good results, and iterative filtering is unfavorable for real-time image processing.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides an image fusion method.
In order to achieve the above object, the present invention provides the following technical solutions:
an image fusion method, comprising:
acquiring an original image to be fused, wherein the original image is a multi-sensor image sequence or a multi-exposure image sequence;
carrying out graying treatment on the original image to obtain a gray image;
the gray image is convolved through a high-pass filter to obtain image details, and the weight of the image details is calculated;
performing overexposure treatment on the gray level image;
calculating the main body fusion weight of the image after overexposure processing by using a convolution filter;
combining the image detail weight and the main body fusion weight to obtain an image fusion weight;
and carrying out weighted fusion on the original image through the image fusion weight to obtain a final fusion image.
Preferably, the original image is acquired through an image sensor or a camera, and the original image is read through an FPGA or an ARM to obtain the input image.
Preferably, graying processing is performed on the original image O(x, y), and the obtained gray image I(x, y) is:
I(x,y)=0.299×R+0.587×G+0.114×B
wherein R, G, B are the three color channel values of the image; x represents the row coordinate of the image matrix and y represents the column coordinate of the image matrix.
Preferably, convolving the gray image with a high-pass filter yields the image detail W_e(x, y), and the image detail weight is calculated as follows:
W_e(x, y) = I(x, y) * h(x, y)
where h(x, y) is a high-pass filter:
where n represents the index of the image in the sequence.
Preferably, the overexposure processing is performed on the gray image, specifically:
wherein I_dn(x, y) is the nth image after overexposure processing, I_n(x, y) is the gray image of the nth image, α is a control factor, and T is a constant greater than or equal to 0 and less than or equal to 128.
Preferably, the main body fusion weight W_mn(x, y) of the overexposure-processed image is calculated using a convolution filter, specifically:
W_mn(x, y) = I_dn(x, y) * H(x, y)
wherein I_dn(x, y) is the nth image after overexposure processing, and H(x, y) is a convolution filter;
H(x, y) = e^(-(x² + y²)/(2σ²)) / (2πσ²)

where σ is the variance, e is the base of the natural exponential, x is the row coordinate of the image matrix, and y is the column coordinate of the image matrix.
Preferably, the method further comprises fusing the weight W to the main body mn (x, y) normalization, the main body fusion weight after normalization is as follows:
Preferably, the image fusion weight is obtained by combining the image detail weight and the main body fusion weight, as follows:
The image fusion weight is then normalized to obtain:
Preferably, the weighted fusion of the original images with the image fusion weights, yielding the final fused image, is performed as follows:
another object of the present invention is to provide an image fusion apparatus including:
the image acquisition module is used for acquiring an original image to be fused, wherein the original image is a multi-sensor image sequence or a multi-exposure image sequence;
the image preprocessing module is used for carrying out graying processing on the original image to obtain a gray image;
the image detail processing module is used for obtaining image details through the convolution gray image of the high-pass filter and calculating the weight of the image details;
the exposure processing module is used for performing overexposure processing on the gray level image;
the main body fusion weight calculation module is used for calculating main body fusion weights of the image subjected to overexposure processing by using a convolution filter;
the image fusion weight calculation module is used for combining the image detail weight and the main body fusion weight to obtain an image fusion weight;
and the image fusion module is used for carrying out weighted fusion on the original image through the image fusion weight to obtain a final fusion image.
The image fusion method and device provided by the invention have the following beneficial effects:
the method provided by the invention aims at a multi-sensor image sequence or a multi-exposure image sequence, obtains the image detail weight through a high-pass filter, obtains the main body fusion weight of the image through convolution filtering, then combines the image detail weight and the main body fusion weight to obtain the image fusion weight without calculating iterative filtering, thereby avoiding the calculation times of the iterative filter, reducing the complexity of image fusion, improving the operation processing performance of hardware, effectively reducing the time delay, achieving the real-time processing of image fusion and realizing the real-time fusion algorithm. Meanwhile, compared with a fusion algorithm based on a pixel level, the image fusion algorithm provided by the invention does not need to calculate fine weights in an iterative manner, so that the complexity is obviously reduced, and the real-time processing can be better performed.
Drawings
In order to more clearly illustrate the embodiments of the present invention and their design, the drawings required for the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present invention; those skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a flowchart of an image fusion method according to embodiment 1 of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the drawings and the embodiments, so that those skilled in the art can better understand the technical scheme of the present invention and can implement the same. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
Example 1
The invention provides an image fusion method, which is specifically shown in fig. 1 and comprises the following steps:
step one: the original image O (x, y) to be fused is acquired, the original image being a color image and being a multi-sensor/multi-exposure image sequence.
Specifically, in this embodiment, the original images are collected by an image sensor or a camera, the input images are obtained by reading the camera through an FPGA or an ARM, and after front-end image processing, the multi-sensor or multi-exposure images are spatially registered and then used as input.
Here, multi-sensor images are images acquired by visible-light, infrared, or other multispectral sensors; they may also be images acquired by a single visible-light sensor at different exposure times. The image sequence refers to these sensor images or exposure images.
Step two: perform graying processing on the original image O(x, y) to obtain the gray image I(x, y); that is, convert the color image to grayscale. If the input is an infrared image, it needs to be converted into an 8-bit grayscale map.
where I(x, y) is the grayscale image of the original image, which can be calculated by:
I(x,y)=0.299×R+0.587×G+0.114×B (1)
where R, G, B are the three color channel values of the image.
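As a minimal sketch (illustrative only, not taken from the disclosure), formula (1) can be implemented with NumPy; the input is assumed to be an H x W x 3 RGB array:

```python
import numpy as np

def to_gray(rgb):
    """Grayscale conversion I = 0.299*R + 0.587*G + 0.114*B, as in formula (1)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# A pure-red 2x2 test image: every gray value is 0.299 * 255 = 76.245
img = np.zeros((2, 2, 3))
img[..., 0] = 255
gray = to_gray(img)
```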
Step three: calculate the image detail weight. Here x represents the row coordinate of the image matrix and y represents the column coordinate of the image matrix.
The image detail W_e(x, y) can be obtained by convolving the original gray image with a high-pass filter.
W e (x,y)=I(x,y)*h(x,y) (2)
In the formula, h(x, y) is a high-pass filter:
once the image details are obtained, the detail part is further processed by calculating the weight of the image details
where n represents the index of the image in the sequence.
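The detail-extraction step can be sketched as follows. The kernel h(x, y) is not reproduced in this text, so the 3x3 Laplacian below, and the magnitude-based weight, are assumptions used purely for illustration:

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 'same'-size 2-D convolution with zero padding."""
    k = kernel[::-1, ::-1]  # flip the kernel for true convolution
    kh, kw = k.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
    return out

# Assumed high-pass kernel: the patent text does not reproduce h(x, y),
# so a standard 3x3 Laplacian stands in for it here.
h = np.array([[ 0, -1,  0],
              [-1,  4, -1],
              [ 0, -1,  0]], dtype=float)

def detail_weight(gray):
    """Detail response W_e = I * h, with an illustrative magnitude weight."""
    w_e = convolve2d(gray, h)
    return np.abs(w_e)
```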
Step four: perform overexposure processing on the gray image.
Whether a pixel is overexposed is determined by its luminance value. In order to calculate the main body fusion weight, the excessive weight contribution caused by overexposure needs to be removed from the gray image.
wherein I_dn(x, y) is the nth image after overexposure processing, I_n(x, y) is the gray image of the nth image, and α is the control factor, chosen as 0.8 in the invention; T is a constant greater than or equal to 0 and less than 128.
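The overexposure-suppression formula itself is not reproduced in this text, so the sketch below only illustrates the stated intent: pixels near saturation are attenuated by the control factor α = 0.8 relative to a threshold constant T. Its specific form is an assumption:

```python
import numpy as np

def suppress_overexposure(gray, alpha=0.8, T=64):
    """Attenuate pixels brighter than (255 - T) by the control factor
    alpha so overexposed regions do not dominate the body weight.
    The precise formula in the disclosure is not reproduced here;
    this thresholded scaling is only an illustration."""
    out = np.asarray(gray, dtype=float).copy()
    mask = out > (255 - T)
    out[mask] = alpha * out[mask]
    return out
```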
Step five: calculate the main body fusion weight of the overexposure-processed image using a convolution filter.
The main body fusion weight of the image is obtained through a convolution filter. After filtering, the main body weight W_mn(x, y) of the multi-sensor/multi-exposure image sequence is obtained as in formula (6):
W_mn(x, y) = I_dn(x, y) * H(x, y) (6)
wherein I_dn(x, y) is the nth image after overexposure processing, and H(x, y) is a convolution filter.
H(x, y) = e^(-(x² + y²)/(2σ²)) / (2πσ²)

where σ is the variance, e is the base of the natural exponential, x is the row coordinate of the image matrix, and y is the column coordinate of the image matrix.
The main body weight W_mn(x, y) is then normalized to obtain:

Ŵ_mn(x, y) = W_mn(x, y) / Σ_n W_mn(x, y)
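The convolution filter H(x, y) and the normalization of the main body weight can be sketched as follows; the Gaussian form is the standard one implied by the description (σ and the natural exponential), while the kernel size and the per-pixel normalization across images are assumptions:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """H(x, y) = exp(-(x^2 + y^2) / (2 sigma^2)) / (2 pi sigma^2),
    sampled on a size x size grid and renormalized to sum to 1.
    The kernel size is an assumption; the disclosure does not fix it."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return k / k.sum()

def normalize_body_weights(weights):
    """Normalize W_mn across the N images so the weights at each pixel
    sum to 1; the small eps guards against all-zero pixels."""
    w = np.asarray(weights, dtype=float)
    return w / (w.sum(axis=0) + 1e-12)
```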
step six: and (5) calculating the fusion weight of the image.
Specifically, the image detail weight and the main body fusion weight are combined to obtain the image fusion weight:
The image fusion weight is then normalized, and finally the weighted fusion of the HDR images is carried out.
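The exact combination formula is not reproduced in this text; a simple multiplicative combination of the two weights followed by per-pixel normalization is assumed here purely for illustration:

```python
import numpy as np

def fusion_weights(detail_w, body_w):
    """Combine per-image detail weights and body weights into fusion
    weights, then normalize across the N images at each pixel.
    The multiplicative combination is an assumption; the disclosure's
    exact combination formula is not reproduced in this text."""
    w = np.asarray(detail_w, dtype=float) * np.asarray(body_w, dtype=float)
    return w / (w.sum(axis=0) + 1e-12)
```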
Step seven: perform weighted fusion on the original image sequence to obtain the final fused image.
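The final weighted-fusion step can be sketched as follows, assuming the per-image weights have already been normalized to sum to one at every pixel (the broadcast over color channels is an implementation assumption):

```python
import numpy as np

def weighted_fusion(images, weights):
    """Fuse N registered source images with per-pixel, per-image weights:
    F(x, y) = sum_n w_n(x, y) * O_n(x, y)."""
    imgs = np.asarray(images, dtype=float)   # (N, H, W) or (N, H, W, 3)
    w = np.asarray(weights, dtype=float)     # (N, H, W)
    if imgs.ndim == 4:
        w = w[..., None]                     # broadcast weights over channels
    return (w * imgs).sum(axis=0)
```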
Based on the same inventive concept, the invention also provides an image fusion device which comprises an image acquisition module, an image preprocessing module, an image detail processing module, an exposure processing module, a main fusion weight calculation module, an image fusion weight calculation module and an image fusion module.
Specifically, the image acquisition module is used for acquiring an original image to be fused, wherein the original image is a multi-sensor image sequence or a multi-exposure image sequence; the image preprocessing module is used for carrying out graying processing on the original image to obtain a gray image; the image detail processing module is used for obtaining image details through the convolution gray image of the high-pass filter and calculating the weight of the image details; the exposure processing module is used for performing overexposure processing on the gray level image; the main body fusion weight calculation module is used for calculating main body fusion weights of the image subjected to overexposure processing by using a convolution filter; the image fusion weight calculation module is used for combining the image detail weight and the main body fusion weight to obtain an image fusion weight; the image fusion module is used for carrying out weighted fusion on the original image through the image fusion weight to obtain a final fusion image.
Addressing the high weight-computation complexity of pixel-level image fusion algorithms, the proposed algorithm, for a multi-sensor image sequence or a multi-exposure image sequence, obtains the image detail weight through a high-pass filter and the main body fusion weight through convolution filtering, and then combines the two to obtain the image fusion weight. No iterative filtering is required, which avoids repeated filter computations, reduces the complexity of image fusion, improves hardware processing performance, and effectively reduces latency, achieving real-time image fusion. Moreover, compared with pixel-level fusion algorithms, the proposed algorithm does not need to iteratively compute fine-grained weights, so the complexity is markedly reduced and real-time processing on FPGA hardware is easier to achieve.
The above embodiments are merely preferred embodiments of the present invention, the protection scope of the present invention is not limited thereto, and any simple changes or equivalent substitutions of technical solutions that can be obviously obtained by those skilled in the art within the technical scope of the present invention disclosed in the present invention belong to the protection scope of the present invention.
Claims (10)
1. An image fusion method, comprising:
acquiring an original image to be fused, wherein the original image is a multi-sensor image sequence or a multi-exposure image sequence;
carrying out graying treatment on the original image to obtain a gray image;
the gray image is convolved through a high-pass filter to obtain image details, and the weight of the image details is calculated;
performing overexposure treatment on the gray level image;
calculating the main body fusion weight of the image after overexposure processing by using a convolution filter;
combining the image detail weight and the main body fusion weight to obtain an image fusion weight;
and carrying out weighted fusion on the original image through the image fusion weight to obtain a final fusion image.
2. The image fusion method of claim 1, wherein the raw image is acquired by an image sensor or a camera.
3. The image fusion method according to claim 1, wherein graying processing is performed on the original image O(x, y), and the obtained gray image I(x, y) is:
I(x,y)=0.299×R+0.587×G+0.114×B
wherein R, G, B are the three color channel values of the image; x represents the row coordinate of the image matrix and y represents the column coordinate of the image matrix.
4. The method of claim 3, wherein the image detail W_e(x, y) is obtained by convolving the gray image with a high-pass filter, and the image detail weight is calculated as follows:

W_e(x, y) = I(x, y) * h(x, y)
where h(x, y) is a high-pass filter:
where n represents the index of the image in the sequence.
5. The image fusion method according to claim 4, wherein the overexposure processing is performed on the gray scale image, specifically:
wherein I_dn(x, y) is the nth image after overexposure processing, I_n(x, y) is the gray image of the nth image, α is a control factor, and T is a constant greater than or equal to 0 and less than or equal to 128.
6. The image fusion method according to claim 5, wherein the main body fusion weight W_mn(x, y) of the overexposed image is calculated using a convolution filter, specifically:
W_mn(x, y) = I_dn(x, y) * H(x, y)
wherein I_dn(x, y) is the nth image after overexposure processing, and H(x, y) is a convolution filter;
H(x, y) = e^(-(x² + y²)/(2σ²)) / (2πσ²)

where σ is the variance, e is the base of the natural exponential, x is the row coordinate of the image matrix, and y is the column coordinate of the image matrix.
10. An image fusion apparatus, comprising:
the image acquisition module is used for acquiring an original image to be fused, wherein the original image is a multi-sensor image sequence or a multi-exposure image sequence;
the image preprocessing module is used for carrying out graying processing on the original image to obtain a gray image;
the image detail processing module is used for obtaining image details through the convolution gray image of the high-pass filter and calculating the weight of the image details;
the exposure processing module is used for performing overexposure processing on the gray level image;
the main body fusion weight calculation module is used for calculating main body fusion weights of the image subjected to overexposure processing by using a convolution filter;
the image fusion weight calculation module is used for combining the image detail weight and the main body fusion weight to obtain an image fusion weight;
and the image fusion module is used for carrying out weighted fusion on the original image through the image fusion weight to obtain a final fusion image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310064964.1A CN116029954A (en) | 2023-02-06 | 2023-02-06 | Image fusion method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116029954A true CN116029954A (en) | 2023-04-28 |
Family
ID=86070580
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310064964.1A Withdrawn CN116029954A (en) | 2023-02-06 | 2023-02-06 | Image fusion method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116029954A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116528058A (en) * | 2023-05-26 | 2023-08-01 | 中国人民解放军战略支援部队航天工程大学 | High dynamic imaging method and system based on compression reconstruction |
CN116528058B (en) * | 2023-05-26 | 2023-10-31 | 中国人民解放军战略支援部队航天工程大学 | High dynamic imaging method and system based on compression reconstruction |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WW01 | Invention patent application withdrawn after publication | Application publication date: 20230428 |