CN115631119A - Image fusion method for improving target significance - Google Patents

Image fusion method for improving target significance

Info

Publication number
CN115631119A
Authority
CN
China
Prior art keywords
image
target
fusion
improving
saliency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211094340.6A
Other languages
Chinese (zh)
Inventor
王世允
李文
黄晓江
袁玉芬
谢佳玫
钱佳
唐骏
戴涧
高文研
高雪峰
阚亚进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
JIANGSU NORTH HUGUANG OPTICS ELECTRONICS CO Ltd
Original Assignee
JIANGSU NORTH HUGUANG OPTICS ELECTRONICS CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by JIANGSU NORTH HUGUANG OPTICS ELECTRONICS CO Ltd
Priority to CN202211094340.6A
Publication of CN115631119A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image fusion systems, and in particular to an image fusion method for improving target saliency, comprising the following steps. Step 1: process the whole infrared image through a formula with a parameter less than 1 to weaken background content other than the target. Step 2: perform multi-scale image fusion to obtain a grayscale fused image: decompose the infrared and low-light-level images at multiple scales and apply different fusion strategies at different layers to highlight details of both target and background. Step 3: improve the color contrast of the image by an improved local color transfer method, further improving the saliency of the target. The method first preprocesses the infrared video image, in which the target's signature is strongest, to weaken image content other than the target; it then applies a multi-scale fusion algorithm with local detail enhancement; finally, when rendering the fused image, it uses a local color transfer method, which enhances the tonal layering of the whole image and makes the target stand out more.

Description

Image fusion method for improving target significance
Technical Field
The invention relates to the technical field of image fusion systems, in particular to an image fusion method for improving target significance.
Background
Target saliency is a measure of how conspicuous a target is relative to the surrounding background; in general, the stronger a target's saliency in the system's observed image, the easier it is to search for and detect, especially under difficult viewing conditions such as night, rain, snow, or fog. Because the end user of an image fusion system is a human observer, the main fusion algorithms at present, multi-scale fusion and global color transfer, focus on whole-image qualities such as transparency, naturalness, and contrast, and do not process the target region in a targeted way. This leads to the following situations: 1. the image is clear overall, but target saliency is weak; 2. the image looks natural overall, but target saliency is reduced; 3. only the target edge is enhanced; 4. the target is enhanced, but the overall image looks unnatural.
Disclosure of Invention
To address these shortcomings of the prior art, the invention provides an image fusion method for improving target saliency: first, the infrared video image, in which the target's signature is strongest, is preprocessed to weaken image content other than the target; then a multi-scale fusion algorithm with local detail enhancement is applied; finally, when the fused image is rendered, a local color transfer method is used, which enhances the tonal layering of the whole image and makes the target stand out more.
The invention is realized by the following technical scheme:
An image fusion method for improving target saliency comprises the following steps:
Step 1: process the whole infrared image through a formula, selecting a parameter less than 1, to weaken background content other than the target.
The formula is:
[formula not reproduced here; in the source it is an image that defines p'(i, j) in terms of p(i, j) and the parameter c]
where:
p(i, j) is the pixel value at coordinate (i, j) in the original infrared image;
p'(i, j) is the pixel value at coordinate (i, j) after infrared image preprocessing;
c is a parameter;
Step 2: perform multi-scale image fusion to obtain a grayscale fused image: decompose the infrared and low-light-level images at multiple scales and apply different fusion strategies at different layers to highlight details of both target and background;
Step 3: improve the color contrast of the image by an improved local color transfer method, further improving the saliency of the target.
Preferably, in step 1, the parameter c is 0.2.
Preferably, in step 2, Laplacian 3-layer decomposition and fusion are adopted.
Preferably, step 3 specifically comprises the following steps:
Step a): from the image data fused in step 2, calculate the local mean and variance of the reference image and of the target image [the mean and variance formulas are images in the source and are not reproduced here];
Step b): calculate the matching error of each pixel of the target image against the corresponding reference-image pixels [the error formula is an image in the source and is not reproduced here]:
Err(i, j) = min(Err);
where m1 and m2 are error coefficients;
Step c): determine a threshold T; when Err(i, j) < T, perform normal color transfer; otherwise do not transfer color and keep only the pixel's luminance value, leaving U and V as unknown chrominance channels;
Step d): extend and fill the unknown color space: after step c), scan the whole image and compute the local mean u of each pixel's 3 × 3 neighborhood in the chrominance channels; when this mean is nonzero, set the pixel's chrominance to the neighborhood mean.
Preferably, in step c), the threshold T is half of the mean of the error matrix.
The invention has the following beneficial effects:
1. it more effectively preserves the near-natural background imaging characteristics of the low-light-level channel;
2. it avoids interference from infrared-channel background imagery in the fused image;
3. it makes the infrared signature of the target more prominent, effectively improving target recognizability in long-range observation;
4. while improving target saliency, it maintains, and even improves, the sharpness and tonal gradation of the background.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart diagram of an embodiment of the present invention.
FIG. 2 shows the preprocessing effect in an embodiment of the present invention, where (a) is the original infrared image and (b) is the target-highlighted image.
FIG. 3 compares the effect of an embodiment of the present invention with current methods, where (a) is the visible light image; (b) the infrared image; (c) a linear weighting algorithm result; (d) a multi-scale fusion plus color transfer result; and (e) the result of the present method.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Referring to fig. 1, the present invention provides an image fusion method for improving target saliency, including the following steps:
Step 1: process the whole infrared image through a formula, selecting a parameter less than 1, to weaken background content other than the target.
The formula is:
[formula not reproduced here; in the source it is an image that defines p'(i, j) in terms of p(i, j) and the parameter c]
where:
p(i, j) is the pixel value at coordinate (i, j) in the original infrared image;
p'(i, j) is the pixel value at coordinate (i, j) after infrared image preprocessing;
c is a parameter, here set to 0.2;
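Since the preprocessing formula exists only as an image in the source, the sketch below assumes the simplest transform consistent with the description: a plain linear scaling p'(i, j) = c · p(i, j) with c = 0.2. The patent's actual formula may differ; attenuating the whole infrared frame this way makes the dim infrared background contribute little to the later fusion while the hot target stays comparatively strong.

```python
import numpy as np

def preprocess_infrared(ir: np.ndarray, c: float = 0.2) -> np.ndarray:
    """Attenuate the whole infrared frame with a parameter c < 1.

    ASSUMPTION: the patent's formula is an unreproduced image; this
    sketch uses a plain linear scaling p'(i, j) = c * p(i, j).
    """
    out = c * ir.astype(np.float64)
    # Clamp back to the 8-bit range of the original image.
    return np.clip(out, 0, 255).astype(np.uint8)
```

With c = 0.2, a background pixel of value 100 drops to 20 while a saturated target pixel of 250 still maps to 50, preserving the target's relative prominence.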
Step 2: perform multi-scale image fusion to obtain a grayscale fused image: decompose the infrared and low-light-level images at multiple scales and apply different fusion strategies at different layers to highlight details of both target and background; Laplacian 3-layer decomposition and fusion are adopted;
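The patent does not spell out its per-layer fusion strategies, so the following is an assumption-laden sketch of 3-level Laplacian pyramid fusion in plain NumPy: detail layers keep the larger-magnitude coefficient (so local detail from either sensor survives) and the coarse base layer is averaged. All function names and fusion rules are illustrative, not the patent's.

```python
import numpy as np

def _blur(img):
    # Separable 5-tap binomial smoothing filter with reflect padding.
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    p = np.pad(img, 2, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def _down(img):
    # Smooth, then drop every other row and column.
    return _blur(img)[::2, ::2]

def _up(img, shape):
    # Nearest-neighbour upsample to `shape`, then smooth (sketch-level).
    big = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[: shape[0], : shape[1]]
    return _blur(big)

def laplacian_fuse(a, b, levels=3):
    """Fuse two registered grayscale images via a 3-level Laplacian pyramid.

    ASSUMED per-layer rules: detail layers take the larger-magnitude
    coefficient; the coarse base layer is averaged.
    """
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    det_a, det_b = [], []
    for _ in range(levels - 1):
        da, db = _down(a), _down(b)
        det_a.append(a - _up(da, a.shape))   # Laplacian detail layer
        det_b.append(b - _up(db, b.shape))
        a, b = da, db
    fused = (a + b) / 2.0                    # base layer: average
    for la, lb in zip(reversed(det_a), reversed(det_b)):
        fused = _up(fused, la.shape) + np.where(np.abs(la) >= np.abs(lb), la, lb)
    return fused
```

An OpenCV implementation would use `cv2.pyrDown`/`cv2.pyrUp` instead of the hand-rolled helpers; the pure-NumPy version is kept self-contained here.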
Step 3: improve the color contrast of the image by an improved local color transfer method, further improving the saliency of the target.
In this embodiment, step 3 specifically comprises the following steps:
Step a): from the image data fused in step 2, calculate the local mean and variance of the reference image and of the target image [the mean and variance formulas are images in the source and are not reproduced here];
Step b): calculating the matching error of the pixel corresponding to the reference image in the target image;
Figure BDA0003838379610000044
Err(i,j)=min(Err);
m 1 、m 2 representing an error coefficient;
Step c): determine the threshold T as half of the mean of the error matrix; when Err(i, j) < T, perform normal color transfer; otherwise do not transfer color and keep only the pixel's luminance value, leaving U and V as unknown chrominance channels;
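The statistics and error formulas of steps a) to c) are images in the source, so the sketch below assumes a Welsh-style matching: each pixel of the gray fused image is compared with reference pixels by local luminance mean and standard deviation, the per-pixel error combines the two differences weighted by m1 and m2 (an assumed form), and the transfer mask uses T equal to half the mean of the error matrix as stated in step c). Function names are illustrative.

```python
import numpy as np

def local_stats(img, w=3):
    """Step a): local mean and standard deviation over a w x w window."""
    p = np.pad(img.astype(np.float64), w // 2, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(p, (w, w))
    return win.mean(axis=(-1, -2)), win.std(axis=(-1, -2))

def match_and_threshold(fused_y, ref_y, m1=1.0, m2=1.0):
    """Steps b) and c): per-pixel best-match error, then the transfer mask.

    ASSUMPTION: the error weights the local mean and std differences by
    m1 and m2; the patent's exact formula is not reproduced in this text.
    """
    mu_t, sd_t = local_stats(fused_y)
    mu_r, sd_r = local_stats(ref_y)
    fm, fs = mu_r.ravel(), sd_r.ravel()
    h, w = fused_y.shape
    best = np.zeros((h, w), dtype=np.int64)
    err = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            e = m1 * np.abs(fm - mu_t[i, j]) + m2 * np.abs(fs - sd_t[i, j])
            best[i, j] = int(np.argmin(e))   # best-matching reference pixel
            err[i, j] = e[best[i, j]]        # Err(i, j) = min(Err)
    T = err.mean() / 2.0                     # step c): half the error-matrix mean
    transfer = err < T                       # chroma is transferred only here
    return best, transfer, err
```

Pixels outside the `transfer` mask keep only their luminance, leaving U and V unknown for the fill step that follows.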
Step d): extend and fill the unknown color space: after step c), scan the whole image and compute the local mean u of each pixel's 3 × 3 neighborhood in the chrominance channels; when this mean is nonzero, set the pixel's chrominance to the neighborhood mean.
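Step d) can be sketched as a neighborhood-mean fill over an unknown (zeroed) chrominance plane. A single vectorized pass is shown; the patent's sequential scan would let fills propagate within one pass, so repeating the call extends the filled region further outward. Treat this as an approximation of the described "extended filling".

```python
import numpy as np

def fill_unknown_chroma(C):
    """Step d): give each unknown (zero) chroma pixel the mean of its
    3 x 3 neighbourhood whenever that mean is nonzero.

    NOTE: one vectorized pass only; call repeatedly to keep extending
    the fill, approximating the patent's sequential whole-image scan.
    """
    p = np.pad(C.astype(np.float64), 1, mode="constant")
    win = np.lib.stride_tricks.sliding_window_view(p, (3, 3))
    nb_mean = win.mean(axis=(-1, -2))        # 3x3 mean around every pixel
    out = C.astype(np.float64).copy()
    unknown = (out == 0) & (nb_mean != 0)
    out[unknown] = nb_mean[unknown]
    return out
```

Applied to each of the U and V planes produced by step c), this spreads transferred chroma into the regions whose color transfer was rejected by the threshold T.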
In summary, the invention first preprocesses the infrared video image, in which the target's signature is strongest, to weaken image content other than the target; it then applies a multi-scale fusion algorithm with local detail enhancement; and finally, when rendering the fused image, it uses a local color transfer method, enhancing the tonal layering of the whole image and making the target stand out more.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (5)

1. An image fusion method for improving target saliency, characterized by comprising the following steps:
step 1: processing the whole infrared image through a formula, selecting a parameter less than 1, to weaken background content other than the target;
the formula being:
[formula not reproduced here; in the source it is an image that defines p'(i, j) in terms of p(i, j) and the parameter c]
where:
p(i, j) is the pixel value at coordinate (i, j) in the original infrared image;
p'(i, j) is the pixel value at coordinate (i, j) after infrared image preprocessing;
c is a parameter;
step 2: performing multi-scale image fusion to obtain a grayscale fused image: decomposing the infrared and low-light-level images at multiple scales and applying different fusion strategies at different layers to highlight details of both target and background;
step 3: improving the color contrast of the image by an improved local color transfer method, further improving the saliency of the target.
2. The image fusion method for improving target saliency according to claim 1, characterized in that in step 1 the parameter c is 0.2.
3. The image fusion method for improving target saliency according to claim 2, characterized in that in step 2 Laplacian 3-layer decomposition and fusion are used.
4. The image fusion method for improving target saliency according to claim 3, characterized in that said step 3 specifically comprises the following steps:
step a): from the image data fused in step 2, calculating the local mean and variance of the reference image and of the target image [the mean and variance formulas are images in the source and are not reproduced here];
step b): calculating the matching error of each pixel of the target image against the corresponding reference-image pixels [the error formula is an image in the source and is not reproduced here]:
Err(i, j) = min(Err);
where m1 and m2 are error coefficients;
step c): determining a threshold T; when Err(i, j) < T, performing normal color transfer; otherwise not transferring color and keeping only the pixel's luminance value, U and V being unknown chrominance channels;
step d): extending and filling the unknown color space: after step c), scanning the whole image and computing the local mean u of each pixel's 3 × 3 neighborhood in the chrominance channels; when this mean is nonzero, setting the pixel's chrominance to the neighborhood mean.
5. The image fusion method for improving target saliency according to claim 4, characterized in that in step c) the value of T is half of the mean of the error matrix.
CN202211094340.6A 2022-09-08 2022-09-08 Image fusion method for improving target significance Pending CN115631119A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211094340.6A CN115631119A (en) 2022-09-08 2022-09-08 Image fusion method for improving target significance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211094340.6A CN115631119A (en) 2022-09-08 2022-09-08 Image fusion method for improving target significance

Publications (1)

Publication Number Publication Date
CN115631119A 2023-01-20

Family

ID=84902520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211094340.6A Pending CN115631119A (en) 2022-09-08 2022-09-08 Image fusion method for improving target significance

Country Status (1)

Country Link
CN (1) CN115631119A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118154443A (en) * 2024-05-09 2024-06-07 江苏北方湖光光电有限公司 Method for improving fusion sight distance of fusion night vision device in real time

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780470A * 2016-12-23 2017-05-31 浙江大学 Automated nipple detection method for CT images
CN107705268A * 2017-10-20 2018-02-16 天津工业大学 Near-infrared image enhancement and colorization algorithm based on improved Retinex and Welsh methods
CN108389158A * 2018-02-12 2018-08-10 河北大学 Infrared and visible light image fusion method
CN110751660A (en) * 2019-10-18 2020-02-04 南京林业大学 Color image segmentation method
CN111723670A (en) * 2020-05-21 2020-09-29 河海大学 Remote sensing target detection algorithm based on improved FastMBD
CN113935984A (en) * 2021-11-01 2022-01-14 中国电子科技集团公司第三十八研究所 Multi-feature fusion method and system for detecting infrared dim small target in complex background
CN113962900A (en) * 2021-11-15 2022-01-21 北京环境特性研究所 Method, device, equipment and medium for detecting infrared dim target under complex background
CN114612359A (en) * 2022-03-09 2022-06-10 南京理工大学 Visible light and infrared image fusion method based on feature extraction

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
He Yongqiang et al.: "Colorization of grayscale images based on fusion and color transfer", Infrared Technology, vol. 34, no. 5, 31 May 2012 (2012-05-31), pages 276-279 *
Zhang Cong et al.: Infrared Imaging Detection Technology and Applications, Beijing Institute of Technology Press, 31 August 2022, pages 42-44 *
Zhu Libo; Sun Shaoyuan; Wang Dong: "Colorization of grayscale images based on image segmentation and color extension", Microcomputer Applications, no. 05, 20 May 2009 (2009-05-20) *
Zhu Libo; Sun Shaoyuan; Gu Xiaojing; Xia Rujing; Ye Maoqiao: "Image colorization algorithm based on color transfer and extension", Journal of Image and Graphics, no. 02, 15 February 2010 (2010-02-15) *
Qiao Handan et al.: "Multi-scale color transfer algorithm for fused infrared and low-light-level images", Infrared Technology, vol. 38, no. 2, 28 February 2016 (2016-02-28), pages 157-162 *


Similar Documents

Publication Publication Date Title
Singh et al. A comprehensive review of computational dehazing techniques
CN108596849B (en) Single image defogging method based on sky region segmentation
Li et al. Nighttime haze removal with glow and multiple light colors
CN111968054B (en) Underwater image color enhancement method based on potential low-rank representation and image fusion
CN109064426B (en) Method and device for suppressing glare in low-illumination image and enhancing image
Gao et al. Sand-dust image restoration based on reversing the blue channel prior
Li et al. A multi-scale fusion scheme based on haze-relevant features for single image dehazing
Park et al. Single image haze removal with WLS-based edge-preserving smoothing filter
Yang et al. Coarse-to-fine luminance estimation for low-light image enhancement in maritime video surveillance
CN112837233A (en) Polarization image defogging method for acquiring transmissivity based on differential polarization
CN112700363B (en) Self-adaptive visual watermark embedding method and device based on region selection
Mi et al. Multi-purpose oriented real-world underwater image enhancement
CN111598814B (en) Single image defogging method based on extreme scattering channel
WO2019128459A1 (en) Method and device for image shadow elimination
CN112465708A (en) Improved image defogging method based on dark channel
CN110866889A (en) Multi-camera data fusion method in monitoring system
CN115631119A (en) Image fusion method for improving target significance
CN109410160B (en) Infrared polarization image fusion method based on multi-feature and feature difference driving
Wei et al. An image fusion dehazing algorithm based on dark channel prior and retinex
CN111311503A (en) Night low-brightness image enhancement system
CN108898561B (en) Defogging method, server and system for foggy image containing sky area
CN112184608B (en) Infrared and visible light image fusion method based on feature transfer
CN107301625B (en) Image defogging method based on brightness fusion network
Lai et al. Single image dehazing with optimal transmission map
Hong et al. Single image dehazing based on pixel-wise transmission estimation with estimated radiance patches

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination