CN112712485B - Image fusion method and device

Info

Publication number: CN112712485B
Application number: CN201911019226.5A
Authority: CN (China)
Prior art keywords: image, fusion, value, brightness, detail
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN112712485A
Inventor: 郑海涛 (Zheng Haitao)
Assignee: Hangzhou Hikvision Digital Technology Co Ltd
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority: CN201911019226.5A

Classifications

    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/10048 Infrared image
    • G06T 2207/10052 Images from lightfield camera
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention provides an image fusion method and device. The method comprises the following steps: obtaining a visible light image and an infrared image of a target image; performing pixel filling on the visible light image and the infrared image respectively to obtain a first visible light image and a first infrared image; sequentially performing brightness fusion and detail layer fusion on the first visible light image and the first infrared image according to a preset fusion rule to obtain a target brightness value and a target chromaticity value for each pixel point in the target image; and obtaining a fused image from the target brightness value and target chromaticity value of each pixel point. The embodiment of the invention solves the problems of large computation load and narrow usage scenarios in prior-art methods for fusing infrared and visible light images.

Description

Image fusion method and device
Technical Field
The invention relates to the technical field of internet, in particular to an image fusion method and device.
Background
With the rapid development of internet technology, video monitoring systems have entered many fields of work and life and play an important role in them. A video monitoring system serves its application field mainly through the pictures it captures, so the clarity of the captured pictures is of great significance to the system.
Cameras depend strongly on light when shooting, so pictures taken at night generally have poor effect and low clarity. To obtain good monitoring image quality at night or in other low-brightness environments, it is therefore often necessary to add a high-energy-density visible light supplement lamp. However, such a lamp strongly stimulates human eyes, easily creating visual blind zones for passing pedestrians and vehicle drivers, causing serious light pollution and even traffic accidents.
Human eyes perceive infrared light weakly or not at all, while the imaging system (comprising lens, sensor, and so on) in monitoring equipment images near-infrared light well, so the above light pollution problem can be solved by adopting infrared illumination and imaging. However, an infrared image has no color and poor layering, while the face comparison function, which is of great significance in a video monitoring system, places high requirements on face resolution and detail that an infrared image can hardly meet. To solve this problem, methods of fusing an infrared image with a visible light image have emerged to enhance the clarity of the monitoring image.
In the prior art, fusion of an infrared image and a visible light image generally adopts multi-scale decomposition, such as fusion based on Laplacian pyramid decomposition, or methods that combine a face to be fused with a material face. These methods usually require multi-layer sampling, so the computation load is large and the usage scenario is narrow, making it difficult to meet the requirements of a video monitoring system.
Disclosure of Invention
The embodiment of the invention provides an image fusion method and device to solve the problems of large computation load and narrow usage scenarios in prior-art methods for fusing infrared and visible light images.
In one aspect, an embodiment of the present invention provides an image fusion method, where the method includes:
Obtaining a visible light image and an infrared image of a target image;
Respectively filling pixels of the visible light image and the infrared image to obtain a first visible light image and a first infrared image;
According to a preset fusion rule, sequentially carrying out brightness fusion and detail layer fusion on the first visible light image and the first infrared image to obtain a target brightness value and a target chromaticity value of each pixel point in the target image;
And obtaining a fusion image according to the target brightness value and the target chromaticity value of each pixel point.
Optionally, the step of filling pixels into the visible light image and the infrared image to obtain a first visible light image and a first infrared image includes:
Sampling interpolation processing is carried out on the visible light image to obtain a first visible light image;
And performing super-resolution processing on the infrared image to obtain a first infrared image.
Optionally, the step of sequentially performing luminance fusion and detail layer fusion on the first visible light image and the first infrared image according to a preset fusion rule to obtain a target luminance value and a target chromaticity value of each pixel point in the target image includes:
Sequentially carrying out brightness fusion and detail layer fusion on the first visible light image and the first infrared image to obtain a target brightness value of each pixel point in the target image;
Mapping the brightness change ratio to a chromaticity domain to obtain a target chromaticity value of each pixel point; wherein the luminance change ratio is a ratio between the target luminance value and an original luminance value of the pixel point in the first visible light image.
Optionally, the step of sequentially performing brightness fusion and detail layer fusion on the first visible light image and the first infrared image to obtain a target brightness value of each pixel point in the target image includes:
For each of the pixel points in question,
Calculating a first brightness fusion weight of the pixel point in the first visible light image and a second brightness fusion weight of the pixel point in the first infrared image; the brightness fusion weight is used for adjusting brightness of the pixel points in brightness fusion;
Calculating a first detail layer fusion weight of the pixel point in the first visible light image and a second detail layer fusion weight of the pixel point in the first infrared image; the detail layer fusion weight is used for adjusting the brightness of the pixel point in detail layer fusion;
And carrying out brightness fusion and detail layer fusion on the pixel points according to the first brightness fusion weight, the second brightness fusion weight, the first detail layer fusion weight and the second detail layer fusion weight to obtain target brightness values of the pixel points.
Optionally, the step of calculating a first luminance fusion weight of the pixel point in the first visible light image and a second luminance fusion weight of the pixel point in the first infrared image includes:
acquiring a first brightness value of the pixel point in the first visible light image and a second brightness value of the pixel point in the first infrared image;
And inputting the first brightness value and the second brightness value into a preset brightness weight calculation formula to obtain a first brightness fusion weight of the first brightness value and a second brightness fusion weight of the second brightness value.
Optionally, the step of calculating a first detail layer fusion weight of the pixel point in the first visible light image and a second detail layer fusion weight of the pixel point in the first infrared image includes:
Carrying out neighborhood processing on the pixel point to obtain a first infrared brightness value of a neighborhood pixel of the pixel point in the first infrared image and a first visible light brightness value of the neighborhood pixel in the first visible light image;
And inputting the first visible light brightness value and the first infrared brightness value into a preset detail layer fusion weight calculation formula, and determining a first detail layer fusion weight of the pixel point and a second detail layer fusion weight of the pixel point.
Optionally, the step of performing luminance fusion and detail layer fusion on the pixel point according to the first luminance fusion weight, the second luminance fusion weight, the first detail layer fusion weight and the second detail layer fusion weight to obtain a target luminance value of the pixel point includes:
Respectively sharpening the first visible light image and the first infrared image to obtain a first detail layer image of the first visible light image and a second detail layer image of the first infrared image;
acquiring a first detail value of the pixel point in a first detail layer image and a second detail value of the pixel point in a second detail layer image;
Calculating the first brightness fusion weight, the second brightness fusion weight, the first detail layer fusion weight and the second detail layer fusion weight to obtain an initial target brightness value of the pixel point;
And performing gamma mapping on the initial target brightness value to obtain a target brightness value.
Optionally, the step of sharpening the first visible light image and the first infrared image to obtain a first detail layer image of the first visible light image and a second detail layer image of the first infrared image includes:
Performing two-dimensional Gaussian filtering on an original brightness value of an original image to obtain a Gaussian blur image, and determining the Gaussian brightness value of each pixel point in the Gaussian blur image; wherein the original image is the first visible light image or the first infrared image;
For each pixel point, obtaining a detail value according to the difference between the original brightness value and the Gaussian brightness value;
Performing two-dimensional average filtering on the original image to obtain an average blurred image, and determining an average brightness value of each pixel point in the average blurred image;
Determining an initial detail value corresponding to the mean value brightness value according to a first corresponding relation between the brightness value and the detail value; in the first corresponding relation, each detail value corresponds to a brightness value in a continuous numerical range;
Determining a detail value corresponding to the initial detail value; if the initial detail value is smaller than a preset detail threshold value, the detail value is zero; otherwise, the detail value is equal to the initial detail value;
And enhancing the detail value through presetting an enhancement coefficient to obtain an enhanced detail value.
In another aspect, an embodiment of the present invention further provides an image fusion device, applied to a server, the device comprising:
The image acquisition module is used for acquiring a visible light image and an infrared image of the target image;
The pixel filling module is used for respectively filling pixels into the visible light image and the infrared image to obtain a first visible light image and a first infrared image;
The fusion processing module is used for sequentially carrying out brightness fusion and detail layer fusion on the first visible light image and the first infrared image according to a preset fusion rule to obtain a target brightness value and a target chromaticity value of each pixel point in the target image;
And the image fusion module is used for obtaining a fusion image according to the target brightness value and the target chromaticity value of each pixel point.
In yet another aspect, an embodiment of the present invention further provides an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the image fusion method described above when executing the computer program.
In yet another aspect, embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps in the image fusion method as described above.
In the embodiment of the invention, after the visible light image and the infrared image of a target image are acquired, pixel filling is performed on each to obtain a first visible light image and a first infrared image; brightness fusion and detail layer fusion are then performed sequentially on the two images according to a preset fusion rule to obtain a target brightness value and a target chromaticity value for each pixel point in the target image; and a fused image is obtained from the target brightness value and target chromaticity value of each pixel point. Brightness fusion and detail layer fusion improve the brightness and detail information of the fused image while retaining its color information and raising its resolution. The process has low complexity and a small computation load, is convenient to integrate into front-end equipment, is easy to port to various platforms, and can meet the real-time processing requirements of video signals. The image fusion method is widely applicable and can be used in low-illumination image acquisition scenes.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of one of the steps of an image fusion method according to an embodiment of the present invention;
FIG. 2 is a second flowchart illustrating a step of an image fusion method according to an embodiment of the present invention;
FIG. 3 is a block diagram of a video surveillance system of a specific example provided by an embodiment of the present invention;
FIG. 4 is a flowchart of steps of a specific example provided by an embodiment of the present invention;
Fig. 5 is a block diagram of an image fusion apparatus according to an embodiment of the present invention;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present invention, it should be understood that the sequence numbers of the following processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Referring to fig. 1, an embodiment of the present invention provides an image fusion method, including:
Step 101, obtaining a visible light image and an infrared image of a target image.
The target image may be a frame of image in the video monitoring system, or another form of image to be fused. The visible light image can be obtained through a visible light imaging sensor; most pictures shot by existing camera modules, for example, are visible light images. A visible light image shot under low illumination has few details and poor clarity.
The infrared image is obtained by collecting infrared light from the object surface, via a lens or an infrared sensor, for example an image formed when infrared light reaches a digital camera sensor through the imaging system (lens). The infrared image reflects the thermal target characteristics of the scene. By combining the infrared image and the visible light image, the target characteristic information of the infrared image and the scene detail information of the visible light image can be synthesized to obtain a fused image of the target image.
It can be appreciated that if the embodiment of the present invention is applied to the field of video surveillance, the target image should include a face image.
And 102, respectively filling pixels of the visible light image and the infrared image to obtain a first visible light image and a first infrared image.
The pixel filling process increases the number of pixels contained in an image so as to increase the image's resolution.

For example, for the infrared image the pixel filling process may be Super-Resolution (SR) processing, which improves the resolution of the original image by a hardware or software method, reconstructing a higher-resolution image from a lower-resolution one; it has important application value in monitoring equipment, satellite imagery, medical imaging, and other fields.

For the visible light image, the pixel filling process may be pixel interpolation; specifically, pixel interpolation calculates new pixel points from the actual pixels of the visible light image in a certain operation mode and inserts them into the gaps between adjacent actual pixels, increasing the total pixel count and pixel density.
And performing pixel filling processing on the visible light image to obtain a first visible light image, and performing pixel filling processing on the infrared image to obtain a first infrared image.
And 103, according to a preset fusion rule, sequentially carrying out brightness fusion and detail layer fusion on the first visible light image and the first infrared image to obtain a target brightness value and a target chromaticity value of each pixel point in the target image.
In the field of image fusion, the visible light image to be fused is an image captured under low illumination and its brightness is low, while the infrared image is captured under infrared strobe or strong infrared supplement light and its brightness is high. The brightness difference between the first visible light image and the first infrared image is therefore large, and when fusing the images it is necessary to perform brightness fusion: compensating the brightness of the darker image and suppressing the brightness of the brighter image.
In addition, the image fusion also comprises detail layer fusion; the detail layer fusion is to fuse the detail image of the first visible light image and the detail image of the first infrared image; specifically, an image may be decomposed into an underlayer and a detail layer; the bottom layer contains low-frequency information of the image and reflects the intensity change of the image on a large scale; the detail layer contains high-frequency information of the image, reflecting details of the image on a small scale.
First, brightness fusion and detail layer fusion are performed on the first visible light image and the first infrared image; the brightness of each pixel point after brightness fusion and after detail layer fusion is then synthesized to obtain the target brightness value of each pixel point in the target image; finally, the brightness change ratio is mapped to the chromaticity domain to obtain the target chromaticity value of each pixel point in the target image.
And 104, obtaining a fusion image according to the target brightness value and the target chromaticity value of each pixel point.
Preferably, the fused image is a YUV image. The YUV format separates an image into a luminance component and chrominance components: "Y" represents luminance (luma), that is, the gray-scale value; "U" and "V" represent chrominance (chroma), describing the color and saturation of a pixel.

The target brightness value is the Y value, and the target chromaticity values are the U value and the V value; once the target brightness value and target chromaticity value of every pixel point are obtained, the fused image is obtained. The fused image thus retains the image details of the target image without requiring irradiation by a high-energy-density visible light supplement lamp, so the light pollution problem is avoided.
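As an illustration of this assembly step, the following sketch (not part of the patent; the function name and plane shapes are assumptions) stacks the fused Y, U and V planes and converts them to BGR for display with OpenCV:

```python
import cv2
import numpy as np

def assemble_fused_image(fus_y: np.ndarray, fus_u: np.ndarray,
                         fus_v: np.ndarray) -> np.ndarray:
    """Stack per-pixel Y/U/V planes (each H x W, uint8) into a YUV image
    and convert it to BGR for display."""
    yuv = np.dstack([fus_y, fus_u, fus_v]).astype(np.uint8)
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)
```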
In the above embodiment of the invention, after the visible light image and the infrared image of a target image are acquired, pixel filling is performed on each to obtain a first visible light image and a first infrared image; brightness fusion and detail layer fusion are then performed sequentially on the two images according to a preset fusion rule to obtain a target brightness value and a target chromaticity value for each pixel point in the target image; and a fused image is obtained from the target brightness value and target chromaticity value of each pixel point. Brightness fusion and detail layer fusion improve the brightness and detail information of the fused image while retaining its color information and raising its resolution. The process has low complexity and a small computation load, is convenient to integrate into front-end equipment, is easy to port to various platforms, and can meet the real-time processing requirements of video signals. The image fusion method is widely applicable and can be used in low-illumination image acquisition scenes. The embodiment of the invention thus solves the problems of large computation load and narrow usage scenarios in prior-art methods for fusing infrared and visible light images.
Optionally, in an embodiment of the present invention, step 102 includes:
Sampling interpolation processing is carried out on the visible light image to obtain a first visible light image;
And performing super-resolution processing on the infrared image to obtain a first infrared image.
Super-resolution processing improves the resolution of the original image by a hardware or software method. As one implementation, it may use the Efficient Sub-Pixel Convolutional Neural Network (ESPCN) architecture. The input of the ESPCN network is the original low-resolution image; after three convolution layers, a feature map with r² channels and the same size as the input image is obtained, where r is the magnification of width or height in the super-resolution processing. The r² channel values of each pixel of the feature map are rearranged into an r×r region corresponding to an r×r sub-block of the high-resolution image, so a feature map of size H×W×r² is rearranged into a high-resolution image of size rH×rW.

In the ESPCN network, the interpolation needed to enlarge the image is implicitly contained in the preceding convolution layers and is learned automatically; because the convolutions operate at the low-resolution image size, the efficiency is high.

Pixel interpolation calculates new pixel points from the actual pixels of the visible light image in a certain operation mode and inserts them into the gaps between adjacent actual pixels, increasing the total pixel count and pixel density. As one implementation, pixel interpolation may use the bicubic interpolation algorithm, also called cubic convolution interpolation, which interpolates using the gray values of the 16 points around the point to be sampled; it considers not only the gray values of the 4 directly adjacent pixel points but also the rate of change of the gray value between adjacent pixel points, giving a magnification effect closer to a true high-resolution image.
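The following sketch illustrates both pixel-filling paths under stated assumptions: the bicubic path uses OpenCV's 16-neighbour cubic convolution, and the pixel-shuffle helper shows only the ESPCN rearrangement of an H×W×r² feature map into an rH×rW image (the trained convolution layers that produce the feature map are omitted):

```python
import cv2
import numpy as np

def upscale_visible(vis: np.ndarray, r: int) -> np.ndarray:
    """Bicubic sampling interpolation of a visible-light image by factor r."""
    h, w = vis.shape[:2]
    return cv2.resize(vis, (w * r, h * r), interpolation=cv2.INTER_CUBIC)

def pixel_shuffle(features: np.ndarray, r: int) -> np.ndarray:
    """Rearrange an (H, W, r*r) ESPCN feature map into an (r*H, r*W) image."""
    h, w, c = features.shape
    assert c == r * r
    out = features.reshape(h, w, r, r)  # split channels into r x r sub-blocks
    out = out.transpose(0, 2, 1, 3)     # interleave sub-block rows and columns
    return out.reshape(h * r, w * r)
```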
Optionally, in an embodiment of the present invention, step 103 includes:
Sequentially carrying out brightness fusion and detail layer fusion on the first visible light image and the first infrared image to obtain a target brightness value of each pixel point in the target image;
Mapping the brightness change ratio to a chromaticity domain to obtain a target chromaticity value of each pixel point; wherein the luminance change ratio is a ratio between the target luminance value and an original luminance value of the pixel point in the first visible light image.
First, carrying out brightness fusion and detail layer fusion on a first visible light image and a first infrared image; then synthesizing the brightness of each pixel point in the image after brightness fusion and the brightness of each pixel point in the image after detail layer fusion to obtain a target brightness value of each pixel point in the target image; and finally, mapping the brightness change ratio before and after fusion to a chromaticity domain according to a preset mapping formula to obtain a target chromaticity value of each pixel point in the target image.
Referring to fig. 2, still another embodiment of the present invention provides an image fusion method, the method including:
in step 201, a visible light image and an infrared image of a target image are acquired.
The target image may be a frame of image in the video monitoring system, or may be other forms of images to be fused.
And 202, respectively filling pixels of the visible light image and the infrared image to obtain a first visible light image and a first infrared image.
Wherein the pixel filling process is used to increase the number of pixels contained in the image to increase the resolution of the image.
Step 203, calculating, for each pixel point, a first luminance fusion weight of the pixel point in the first visible light image and a second luminance fusion weight of the pixel point in the first infrared image; the brightness fusion weight is used for adjusting brightness of the pixel points in brightness fusion.
In the field of image fusion, the visible light image to be fused is an image captured under low illumination and its brightness is low, while the infrared image is captured under infrared strobe or strong infrared supplement light and its brightness is high. The brightness difference between the first visible light image and the first infrared image is therefore large; when fusing the images, brightness fusion is needed to compensate the brightness of the darker image and suppress the brightness of the brighter image. For each pixel point in the target image, a first brightness fusion weight in the first visible light image and a second brightness fusion weight in the first infrared image are therefore calculated, and brightness compensation or suppression is realized through these weights.
Step 204, calculating a first detail layer fusion weight of the pixel point in the first visible light image and a second detail layer fusion weight of the pixel point in the first infrared image; the detail layer fusion weight is used for adjusting the brightness of the pixel point in detail layer fusion.
In the process of detail layer fusion, for each pixel point in a target image, calculating a first detail layer fusion weight of the pixel point in the first visible light image and a second detail layer fusion weight of the pixel point in the first infrared image, and adjusting image detail information when the two are fused through the detail layer fusion weights.
And step 205, performing luminance fusion and detail layer fusion on the pixel point according to the first luminance fusion weight, the second luminance fusion weight, the first detail layer fusion weight and the second detail layer fusion weight to obtain a target luminance value of the pixel point.
Carrying out brightness fusion and detail layer fusion on the first visible light image and the first infrared image; and then synthesizing the brightness of each pixel point in the image after brightness fusion and the brightness of each pixel point in the image after detail layer fusion to obtain a target brightness value of each pixel point in the target image.
And step 206, mapping the brightness change ratio to a chromaticity domain to obtain a target chromaticity value of each pixel point.
After the fused brightness is obtained, the brightness change ratio is mapped to the chromaticity domain to obtain the target chromaticity value of each pixel point in the target image.
And step 207, obtaining a fusion image according to the target brightness value and the target chromaticity value of each pixel point.
The fused image retains the image details of the target image and requires no irradiation by a high-energy-density visible light supplement lamp, so the light pollution problem is avoided.
The brightness fusion process and detail layer fusion process in the embodiments of the present invention will be described below respectively:
First, luminance fusion process
Optionally, in an embodiment of the present invention, the step of calculating a first luminance fusion weight of the pixel point in the first visible light image and a second luminance fusion weight of the pixel point in the first infrared image includes:
acquiring a first brightness value of the pixel point in the first visible light image and a second brightness value of the pixel point in the first infrared image;
And inputting the first brightness value and the second brightness value into a preset brightness weight calculation formula to obtain a first brightness fusion weight of the first brightness value and a second brightness fusion weight of the second brightness value.
First, the first brightness value and the second brightness value of the pixel point are acquired; then the respective brightness weights are calculated according to a preset brightness weight calculation formula.
Specifically, the step of inputting the first luminance value and the second luminance value into a preset luminance weight calculation formula to obtain a first luminance fusion weight of the first luminance value and a second luminance fusion weight of the second luminance value includes:
determining a first luminance fusion weight of the first luminance value and a second luminance fusion weight of the second luminance value from the first luminance value and the second luminance value according to a preset formula group (not reproduced in the source);

wherein vis_y is the first luminance value and ir_y is the second luminance value. Under strong infrared light ir_y is relatively large, while under low illumination vis_y is relatively small, so the formula for the first luminance fusion weight raises the brightness proportion of visible light in the fused image relatively more. Correspondingly, the second luminance fusion weight gives the infrared image a relatively smaller proportion, reducing the infrared brightness share in the fused image and suppressing the infrared brightness. Adjusting the respective brightness proportions of the visible light image and the infrared image through luminance fusion avoids color cast in the fused image and improves its imaging quality.
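Since the weight formulas themselves are not reproduced, the sketch below assumes a cross-ratio form consistent with the description above (a bright infrared image boosts the visible weight, a dim visible image shrinks the infrared weight); it is an illustration, not the patent's exact formula:

```python
import numpy as np

def luminance_fusion_weights(vis_y: np.ndarray, ir_y: np.ndarray,
                             eps: float = 1e-6):
    """Per-pixel luminance fusion weights (assumed cross-ratio form)."""
    vis_y = vis_y.astype(np.float32)
    ir_y = ir_y.astype(np.float32)
    total = vis_y + ir_y + eps          # eps avoids division by zero
    w_vis = ir_y / total                # bright IR -> compensate dim visible
    w_ir = vis_y / total                # dim visible -> suppress bright IR
    return w_vis, w_ir
```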
Second, detail layer fusion process
Optionally, in an embodiment of the present invention, the step of calculating a first detail layer fusion weight of the pixel point in the first visible light image and a second detail layer fusion weight of the pixel point in the first infrared image includes:
Carrying out neighborhood processing on the pixel point to obtain a first infrared brightness value of a neighborhood pixel of the pixel point in the first infrared image and a first visible light brightness value of the neighborhood pixel in the first visible light image;
And inputting the first visible light brightness value and the first infrared brightness value into a preset detail layer fusion weight calculation formula, and determining a first detail layer fusion weight of the pixel point and a second detail layer fusion weight of the pixel point.
Contrast measures the difference in brightness level between the brightest and darkest parts of an image region: the larger the difference range, the larger the contrast, and the smaller the range, the smaller the contrast. Contrast reflects, to a certain extent, the amount of detail information in an image; detail layer fusion is therefore performed based on contrast, with the detail layer fusion weights adjusting the image detail information during fusion.

Neighborhood processing is performed on the pixel point to obtain its neighborhood pixels, whose range can be preset. Then, after the first visible light brightness value and the first infrared brightness value of the neighborhood pixels are obtained, the fusion proportions of the two images are calculated according to a preset detail layer fusion weight calculation formula. Specifically, the step of inputting the first visible light brightness value and the first infrared brightness value into a preset detail layer fusion weight calculation formula to determine the first detail layer fusion weight and the second detail layer fusion weight of the pixel point includes:
In a first step, a first local contrast of the first visible light luminance value and a second local contrast of the first infrared luminance value are determined from those luminance values according to a preset formula group (not reproduced in the source);

wherein Ω(p) denotes a neighborhood of the pixel point p; ir_y(x) is the first infrared brightness value of a neighborhood pixel x of p; vis_y(x) is the first visible light brightness value of the neighborhood pixel; ∇(p) denotes gradient processing at p; α is a first preset parameter; vis_y(p)_lc is the first local contrast and ir_y(p)_lc is the second local contrast.
Local contrast is the contrast within a small range of pixel points. In actual image processing, regions of high local contrast generally carry a large amount of detail information, so local contrast is introduced as the measure of the fusion weight when fusing detail layers.
In a second step, the first detail layer fusion weight and the second detail layer fusion weight of the pixel point are determined from the first local contrast and the second local contrast according to a preset formula group (not reproduced in the source); the two weights are the detail layer fusion weights of the first visible light image and the first infrared image, respectively, at the pixel point p.
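The formulas are likewise not reproduced, so the sketch below assumes a simple local-contrast measure (α-scaled mean absolute deviation from the Ω(p) neighborhood mean) and weights normalised against each other; both choices are assumptions consistent with the description:

```python
import cv2
import numpy as np

def local_contrast(y: np.ndarray, ksize: int = 5,
                   alpha: float = 1.0) -> np.ndarray:
    """Assumed local contrast: alpha * |y - mean over neighborhood Omega(p)|."""
    y = y.astype(np.float32)
    return alpha * np.abs(y - cv2.blur(y, (ksize, ksize)))

def detail_fusion_weights(vis_y: np.ndarray, ir_y: np.ndarray,
                          eps: float = 1e-6):
    """Detail layer fusion weights from the two local contrasts."""
    vis_lc = local_contrast(vis_y)
    ir_lc = local_contrast(ir_y)
    total = vis_lc + ir_lc + eps
    return vis_lc / total, ir_lc / total   # (first, second) weights
```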
After the first brightness fusion weight, the second brightness fusion weight, the first detail layer fusion weight and the second detail layer fusion weight are obtained, the target brightness value of each pixel point is calculated.
Specifically, the step of performing luminance fusion and detail layer fusion on the pixel point according to the first luminance fusion weight, the second luminance fusion weight, the first detail layer fusion weight and the second detail layer fusion weight to obtain a target luminance value of the pixel point includes:
And the first step is to respectively sharpen the first visible light image and the first infrared image to obtain a first detail layer image of the first visible light image and a second detail layer image of the first infrared image.
Sharpening improves the clarity of a particular part of the image, making the color of a specific region more distinct, and enhances the detail information in the visible light and infrared images; for an image containing a face, for example, sharpening improves the clarity of the contours of the facial features.
An image may be broken down into a bottom layer and a detail layer; the bottom layer contains low-frequency information of the image and reflects the intensity change of the image on a large scale; the detail layer comprises high-frequency information of the image, reflects details of the image on a small scale, and obtains a first detail layer image of a first visible light image and a second detail layer image of the first infrared image after sharpening treatment, wherein the first detail layer image and the second detail layer image are used for carrying out detail layer fusion.
And a second step of acquiring a first detail value of the pixel point in the first detail layer image and a second detail value of the pixel point in the second detail layer image.
Wherein for each pixel point the luminance values in the two detail layer images are determined separately.
And thirdly, calculating the first brightness fusion weight, the second brightness fusion weight, the first detail layer fusion weight and the second detail layer fusion weight to obtain an initial target brightness value of the pixel point.
Optionally, the calculation may be performed according to a preset luminance fusion formula (not reproduced in the source);

wherein fus_y0 is the initial target luminance value; vis_y is the first brightness value of the pixel point in the first visible light image and ir_y is its second brightness value in the first infrared image, weighted by the first and second luminance fusion weights; the first detail value and the second detail value are weighted by the first and second detail layer fusion weights; and β is a second preset coefficient.
Fourth, gamma mapping is carried out on the initial target brightness value, and a target brightness value is obtained.
Optionally, gamma mapping is performed according to the following formula:

fus_y1 = gamma(fus_y0)

wherein fus_y1 is the target luminance value.
Gamma mapping corrects the brightness of the image. In a computer system, the brightness of the actually output image deviates because of the graphics card or display, and a gamma curve is used to correct this deviation; gamma mapping is therefore performed after the initial target brightness value is obtained.
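A sketch of the third and fourth steps, assuming the weighted-sum combination implied by the text (base luminances blended with the luminance weights, detail values blended with the detail layer weights and scaled by β, then gamma-mapped); β and the gamma exponent are assumed preset parameters:

```python
import numpy as np

def fuse_luminance(vis_y, ir_y, vis_d, ir_d,
                   w_y_vis, w_y_ir, w_d_vis, w_d_ir,
                   beta: float = 1.0, gamma: float = 1.0 / 2.2) -> np.ndarray:
    """Assumed form: fus_y0 = weighted base luma + beta * weighted details,
    followed by gamma mapping fus_y1 = gamma(fus_y0)."""
    fus_y0 = (w_y_vis * vis_y + w_y_ir * ir_y
              + beta * (w_d_vis * vis_d + w_d_ir * ir_d))
    fus_y0 = np.clip(fus_y0, 0.0, 255.0)
    return 255.0 * (fus_y0 / 255.0) ** gamma   # gamma mapping
```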
The process of the sharpening process will be described in detail below:
optionally, in the embodiment of the present invention, the step of sharpening the first visible light image and the first infrared image to obtain a first detail layer image of the first visible light image and a second detail layer image of the first infrared image includes:
The method comprises the steps of firstly, carrying out two-dimensional Gaussian filtering on an original brightness value of an original image to obtain a Gaussian blur image, and determining the Gaussian brightness value of each pixel point in the Gaussian blur image; wherein the original image is the first visible light image or the first infrared image;
The sharpening process is the same for the first visible light image and the first infrared image, so the original image may be either one. Two-dimensional Gaussian filtering removes Gaussian noise from the original image by weighted averaging over the whole image: the filtered pixel value of each pixel point (the Gaussian brightness value) is the weighted average of its own value and the other pixel values in its neighborhood. Specifically, each pixel in the image is scanned with a two-dimensional Gaussian filter template, and the original value of the pixel at the template center is replaced by the weighted average gray value of the pixels in the neighborhood determined by the template, giving the Gaussian brightness value.
And secondly, obtaining a detail value according to the difference between the original brightness value and the Gaussian brightness value for each pixel point.
The original image y_ori is filtered with two-dimensional Gaussian filtering to obtain a Gaussian blur image y_gauss, and subtracting it from y_ori yields the detail image:

y_detail = y_ori - y_gauss
And thirdly, carrying out two-dimensional average filtering on the original image to obtain an average blurred image, and determining an average brightness value of each pixel point in the average blurred image.
The original image y_ori is filtered with two-dimensional mean filtering to obtain the mean brightness value. Mean filtering is similar to Gaussian filtering; its main method is neighborhood averaging, which replaces each pixel value in the original image with a mean: a template consisting of several neighboring pixels is selected around the current pixel point (x, y), the average of all pixels in the template is computed, and that average is assigned to the current pixel point as the gray value g(x, y) of the processed image at that point, i.e.

g(x, y) = (1/m) Σ_{(i,j)∈S} f(i, j)

wherein S is the template and m is the total number of pixels in the template, including the current pixel.
Fourthly, determining an initial detail value corresponding to the average brightness value according to a first corresponding relation between the brightness value and the detail value; in the first correspondence, each detail value corresponds to a brightness value in a continuous numerical range.
Optionally, the first correspondence is given by a preset formula group (not reproduced in the source);

wherein the initial detail value is obtained from the detail value y_detail and the mean brightness value, and k, s0, s1, s2, s3 are preset parameters; brightness control is realized through these preset parameters.
Fifthly, determining a detail value corresponding to the initial detail value; if the initial detail value is smaller than a preset detail threshold value, the detail value is zero; otherwise, the detail value is equal to the initial detail value.
Optionally, the detail value corresponding to the initial detail value is determined according to the following rule:

detail value = 0, if initial detail value < thr; detail value = initial detail value, otherwise

wherein thr is the preset detail threshold, used to control the brightness of the detail image.
and sixthly, enhancing the detail value through presetting an enhancement coefficient to obtain an enhanced detail value.
Optionally, the detail value is enhanced according to the following formula to obtain the enhanced detail value:

enhanced detail value = str × detail value

wherein str is the preset enhancement coefficient.
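The six sharpening steps can be sketched end to end as below, for 8-bit single-channel input. The piecewise brightness-to-detail mapping (parameters k, s0..s3) is not reproduced in the source, so a simple brightness-dependent gain stands in for it; thr and str follow the text, and all kernel sizes are assumptions:

```python
import cv2
import numpy as np

def detail_layer(y_ori: np.ndarray, thr: float = 2.0,
                 str_coef: float = 1.5) -> np.ndarray:
    """Sketch of the sharpening pipeline producing an enhanced detail layer."""
    y = y_ori.astype(np.float32)
    y_gauss = cv2.GaussianBlur(y, (5, 5), sigmaX=1.5)  # step 1: Gaussian blur
    y_detail = y - y_gauss                             # step 2: y_detail = y_ori - y_gauss
    y_mean = cv2.blur(y, (5, 5))                       # step 3: mean-filtered brightness
    gain = np.clip(y_mean / 255.0, 0.2, 1.0)           # step 4: stand-in for the
    y_init = gain * y_detail                           #   brightness-to-detail mapping
    y_init[np.abs(y_init) < thr] = 0.0                 # step 5: zero small detail values
    return str_coef * y_init                           # step 6: enhancement by str
```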
Sharpening improves the clarity of a particular part of the image, making the color of a specific region more distinct, and enhances the detail information in the visible light and infrared images; for an image containing a face, for example, sharpening improves the clarity of the contours of the facial features.
Optionally, in an embodiment of the present invention, the step of mapping the luminance change ratio to a chrominance domain to obtain a target chrominance value of each pixel includes:
for each pixel point, acquiring the original brightness value of the pixel point in the first visible light image, and the original chromaticity U value and original chromaticity V value of the pixel point in the visible light image;

determining the target chromaticity value of each pixel point, comprising a fused chromaticity U value and a fused chromaticity V value, according to a preset formula group (not reproduced in the source);

wherein fus_u is the fused chromaticity U value and fus_v is the fused chromaticity V value; vis_u is the original chromaticity U value and vis_v is the original chromaticity V value; and fus_y1 is the target luminance value.
After the target brightness value of each pixel point in the target image is obtained, the brightness change ratio before and after fusion is determined; finally, this ratio is mapped to the chromaticity U domain and the chromaticity V domain according to a preset mapping formula to obtain the fused chromaticity U value and fused chromaticity V value of each pixel point in the target image.
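The chroma-mapping formula group is not reproduced in the source; a common form, assumed here, scales the chroma offsets from the 128 midpoint by the luminance change ratio fus_y1 / vis_y so that saturation follows the brightness gain:

```python
import numpy as np

def map_chroma(vis_u, vis_v, vis_y, fus_y1, eps: float = 1e-6):
    """Map the luminance change ratio into the U and V chroma domains
    (assumed midpoint-offset scaling, not the patent's exact formula)."""
    ratio = fus_y1 / (vis_y.astype(np.float32) + eps)  # brightness change ratio
    fus_u = np.clip((vis_u - 128.0) * ratio + 128.0, 0, 255)
    fus_v = np.clip((vis_v - 128.0) * ratio + 128.0, 0, 255)
    return fus_u, fus_v
```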
As a specific example, referring to fig. 3, fig. 3 is a block diagram of a video monitoring system applied to an image fusion method provided by an embodiment of the present invention, where the video monitoring system mainly includes the following modules:
An image preprocessing module 301, a sharpening module 302, a weight calculation module 303, a fusion module 304 and a YUV fusion module 305.
Referring to fig. 4, the main working process of the video monitoring system includes:
step 4011, visible light face matting;
Step 4012, performing sampling interpolation processing on the visible light image to obtain a YUV image;
Step 4021, infrared face matting;
Step 4022, performing super-resolution processing on the infrared image to obtain a Y (luminance) image;
Wherein steps 4011, 4012, 4021, 4022 are performed by image preprocessing module 301;
step 403, fusing the brightness and the detail layers to obtain a target brightness value;
this process is performed by the sharpening module 302, the weight calculation module 303 and the fusion module 304;
In step 404, the brightness change ratio before and after fusion is mapped to the chromaticity domain to obtain the target chromaticity value, and finally the fused image is obtained; this process is executed by the YUV fusion module 305.
Specifically, the image preprocessing module 301 includes an infrared face image preprocessing sub-module and a visible light face image preprocessing sub-module, which respectively perform infrared face image preprocessing and visible light image preprocessing.
The weight calculation module 303 includes a luminance fusion weight calculation sub-module and a detail fusion weight calculation sub-module; the brightness fusion weight calculation submodule is used for calculating a first brightness fusion weight of the pixel point in the visible light image and a brightness fusion weight of the pixel point in the infrared image according to a preset brightness weight calculation formula;
The detail fusion weight calculation submodule is used for calculating a first detail layer fusion weight of the pixel point in the visible light image and a second detail layer fusion weight of the pixel point in the infrared image according to a preset detail layer fusion weight calculation formula;
The fusion module 304 is configured to calculate the first luminance fusion weight, the second luminance fusion weight, the first detail layer fusion weight, and the second detail layer fusion weight according to a preset luminance fusion formula, so as to obtain a target luminance value of the pixel point.
The fusion module 304 includes a luminance fusion sub-module and a detail fusion sub-module, where the luminance fusion sub-module is configured to perform luminance fusion, and perform luminance fusion according to the first luminance fusion weight and the second luminance fusion weight;
The detail fusion submodule is used for fusing detail layers among the first detail layer fusion weights and the second detail layer fusion weights; in the process of detail fusion, the sharpening module 302 is configured to perform sharpening processing on the image output by the image preprocessing module 301, and output detail layer images of the visible light image and the infrared image respectively.
The YUV fusion module 305 is configured to map the luminance change ratio to a chromaticity domain, so as to obtain a target chromaticity value of each pixel point, and finally obtain a fusion image.
In the above embodiment of the invention, after the visible light image and the infrared image of a target image are acquired, pixel filling is performed on each to obtain a first visible light image and a first infrared image; brightness fusion and detail layer fusion are then performed sequentially on the two images according to a preset fusion rule to obtain a target brightness value and a target chromaticity value for each pixel point in the target image; and a fused image is obtained from the target brightness value and target chromaticity value of each pixel point. Brightness fusion and detail layer fusion improve the brightness and detail information of the fused image while retaining its color information and raising its resolution. The process has low complexity and a small computation load, is convenient to integrate into front-end equipment, is easy to port to various platforms, and can meet the real-time processing requirements of video signals. The image fusion method is widely applicable and can be used in low-illumination image acquisition scenes.
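Tying the sketches above together, the overall flow of the method can be illustrated as follows (all helper names refer to the illustrative functions defined earlier, not to the patent's modules):

```python
def fuse(vis_y, vis_u, vis_v, ir_y):
    """Illustrative end-to-end flow for one pair of registered, pixel-filled images."""
    w_y_vis, w_y_ir = luminance_fusion_weights(vis_y, ir_y)   # luminance weights
    w_d_vis, w_d_ir = detail_fusion_weights(vis_y, ir_y)      # detail layer weights
    vis_d, ir_d = detail_layer(vis_y), detail_layer(ir_y)     # sharpened detail layers
    fus_y1 = fuse_luminance(vis_y, ir_y, vis_d, ir_d,
                            w_y_vis, w_y_ir, w_d_vis, w_d_ir) # target luminance
    fus_u, fus_v = map_chroma(vis_u, vis_v, vis_y, fus_y1)    # target chromaticity
    return assemble_fused_image(fus_y1.astype('uint8'),
                                fus_u.astype('uint8'),
                                fus_v.astype('uint8'))
```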
Having described the image fusion method provided by the embodiment of the present invention, the image fusion apparatus provided by the embodiment of the present invention will be described with reference to the accompanying drawings.
Referring to fig. 5, an embodiment of the present invention further provides an image fusion apparatus, including:
the image acquisition module 501 is configured to acquire a visible light image and an infrared image of a target image.
The target image may be a frame of image in the video monitoring system, or another form of image to be fused. The visible light image can be obtained through a visible light imaging sensor; most pictures shot by existing camera modules, for example, are visible light images. A visible light image shot under low illumination has few details and poor clarity.

The infrared image is obtained by collecting infrared light from the object surface, via a lens or an infrared sensor, for example an image formed when infrared light reaches a digital camera sensor through the imaging system (lens). The infrared image reflects the thermal target characteristics of the scene. By combining the infrared image and the visible light image, the target characteristic information of the infrared image and the scene detail information of the visible light image can be synthesized to obtain a fused image of the target image.
It can be appreciated that if the embodiment of the present invention is applied to the field of video surveillance, the target image should include a face image.
The pixel filling module 502 is configured to perform pixel filling on the visible light image and the infrared image respectively, so as to obtain a first visible light image and a first infrared image.
The pixel filling process increases the number of pixels contained in the image so as to increase the image's resolution. For example, for the infrared image the pixel filling process may be super-resolution processing, that is, improving the resolution of the original image by a hardware or software method and reconstructing a higher-resolution image from a lower-resolution one; this has important application value in monitoring equipment, satellite imagery, medical imaging, and other fields.

For the visible light image, the pixel filling process may be pixel interpolation; specifically, pixel interpolation calculates new pixel points from the actual pixels of the visible light image in a certain operation mode and inserts them into the gaps between adjacent actual pixels, increasing the total pixel count and pixel density.
Pixel filling of the visible light image yields the first visible light image, and pixel filling of the infrared image yields the first infrared image.
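For illustration only (a minimal sketch under assumptions, not the implementation prescribed by the embodiment), the pixel filling of both images can be approximated in Python with OpenCV, using bicubic interpolation for the sampling-interpolation step and a plain bicubic upscale as a placeholder for true super-resolution reconstruction:

    import cv2
    import numpy as np

    def pixel_fill(visible, infrared, scale=2):
        """Illustrative pixel filling: upsample both inputs to a common,
        denser pixel grid. Bicubic interpolation stands in for the sampling
        interpolation of the visible image; for the infrared image the same
        bicubic upscale is only a placeholder for true super-resolution
        reconstruction (e.g. a learned model)."""
        h, w = visible.shape[:2]
        dsize = (w * scale, h * scale)  # OpenCV expects (width, height)
        first_visible = cv2.resize(visible, dsize, interpolation=cv2.INTER_CUBIC)
        first_infrared = cv2.resize(infrared, dsize, interpolation=cv2.INTER_CUBIC)
        return first_visible, first_infrared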
And the fusion processing module 503 is configured to sequentially perform luminance fusion and detail layer fusion on the first visible light image and the first infrared image according to a preset fusion rule, so as to obtain a target luminance value and a target chromaticity value of each pixel point in the target image.
In the field of image fusion, the visible light image to be fused is typically captured under low illumination and has low brightness, while the infrared image is captured under an infrared strobe or strong infrared fill light and has high brightness. The brightness difference between the first visible light image and the first infrared image is therefore large, and during fusion, brightness fusion is needed to compensate the brightness of the darker image and suppress the brightness of the brighter image.
In addition, image fusion includes detail layer fusion, which fuses the detail image of the first visible light image with the detail image of the first infrared image. Specifically, an image can be decomposed into a base layer and a detail layer: the base layer contains the low-frequency information of the image and reflects its large-scale intensity changes, while the detail layer contains the high-frequency information and reflects its small-scale details.
Brightness fusion and detail layer fusion are first performed on the first visible light image and the first infrared image; the brightness of each pixel point after brightness fusion is combined with its brightness after detail layer fusion to obtain the target brightness value of each pixel point in the target image, and finally the brightness change ratio (the ratio of the target brightness value to the original brightness value in the first visible light image) is mapped to the chromaticity domain to obtain the target chromaticity value of each pixel point.
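As a minimal sketch of this two-stage fusion (the per-pixel weight formulas are presets defined elsewhere in the description, so fixed placeholder weights are used here), the base/detail decomposition and weighted recombination might look as follows; the gamma mapping at the end corresponds to the mapping step described for the fusion unit below:

    import cv2
    import numpy as np

    def fuse_luminance(y_vis, y_ir, w_lum_vis=0.5, w_lum_ir=0.5,
                       w_det_vis=0.6, w_det_ir=0.4, gamma=1.0 / 2.2):
        """Illustrative two-stage fusion of the luminance (Y) planes. The
        four weights are placeholders; the embodiment computes them per
        pixel from preset formulas."""
        y_vis = y_vis.astype(np.float32)
        y_ir = y_ir.astype(np.float32)
        # Base layers carry the low-frequency, large-scale intensity information.
        base_vis = cv2.GaussianBlur(y_vis, (11, 11), 0)
        base_ir = cv2.GaussianBlur(y_ir, (11, 11), 0)
        # Detail layers carry the high-frequency, small-scale information.
        det_vis = y_vis - base_vis
        det_ir = y_ir - base_ir
        # Brightness fusion of the base layers, detail layer fusion of the residuals.
        initial = (w_lum_vis * base_vis + w_lum_ir * base_ir
                   + w_det_vis * det_vis + w_det_ir * det_ir)
        # Gamma mapping of the initial target brightness to the target brightness.
        initial = np.clip(initial, 0.0, 255.0) / 255.0
        return (255.0 * np.power(initial, gamma)).astype(np.uint8)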
The image fusion module 504 is configured to obtain a fused image according to the target luminance value and the target chrominance value of each pixel point.
Preferably, the fused image is a YUV image. YUV is an image format that separates luminance from chrominance: "Y" denotes Luminance (or Luma), i.e., the gray-scale value, while "U" and "V" denote Chrominance (or Chroma), which describe the color and saturation of a pixel.
The target brightness value is the Y value, and the target chromaticity values are the U and V values; once the target brightness value and target chromaticity value of every pixel point are obtained, the fused image is obtained. The fused image thus retains the image details of the target image without requiring illumination from a high-energy-density visible light fill lamp, avoiding light pollution.
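One plausible reading of this chroma handling (an assumption; the exact mapping formula is not reproduced in this excerpt) is to scale the chroma offsets of the first visible light image by the per-pixel brightness change ratio and stack the planes into one YUV image:

    import numpy as np

    def map_chroma(y_target, y_vis_orig, u_vis, v_vis, eps=1e-6):
        """Illustrative chroma mapping: scale the U/V offsets of the first
        visible light image by the per-pixel brightness change ratio, then
        stack Y, U and V into one YUV image. One plausible reading only."""
        ratio = y_target.astype(np.float32) / (y_vis_orig.astype(np.float32) + eps)
        # U and V are coded with a 128 offset; scale the offsets, not the raw codes.
        u_t = np.clip((u_vis.astype(np.float32) - 128.0) * ratio + 128.0, 0, 255)
        v_t = np.clip((v_vis.astype(np.float32) - 128.0) * ratio + 128.0, 0, 255)
        return np.stack([y_target,
                         u_t.astype(np.uint8),
                         v_t.astype(np.uint8)], axis=-1)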
Optionally, in an embodiment of the present invention, the pixel filling module 502 includes:
the sampling interpolation processing sub-module is used for carrying out sampling interpolation processing on the visible light image to obtain a first visible light image;
and the super-resolution processing sub-module is used for performing super-resolution processing on the infrared image to obtain a first infrared image.
Optionally, in an embodiment of the present invention, the fusion processing module 503 includes:
The processing sub-module is used for sequentially carrying out brightness fusion and detail layer fusion on the first visible light image and the first infrared image to obtain a target brightness value of each pixel point in the target image;
The mapping submodule is used for mapping the brightness change ratio to a chromaticity domain to obtain a target chromaticity value of each pixel point; wherein the luminance change ratio is a ratio between the target luminance value and an original luminance value of the pixel point in the first visible light image.
Optionally, in an embodiment of the present invention, the processing submodule includes:
The weight calculation unit is used for calculating a first brightness fusion weight of the pixel point in the first visible light image and a second brightness fusion weight of the pixel point in the first infrared image for each pixel point; the brightness fusion weight is used for adjusting brightness of the pixel points in brightness fusion; calculating a first detail layer fusion weight of the pixel point in the first visible light image and a second detail layer fusion weight of the pixel point in the first infrared image; the detail layer fusion weight is used for adjusting the brightness of the pixel point in detail layer fusion;
And the fusion unit is used for carrying out brightness fusion and detail layer fusion on the pixel points according to the first brightness fusion weight, the second brightness fusion weight, the first detail layer fusion weight and the second detail layer fusion weight to obtain target brightness values of the pixel points.
Optionally, in an embodiment of the present invention, the weight calculating unit includes:
a first brightness acquisition subunit, configured to acquire a first brightness value of the pixel point in the first visible light image and a second brightness value of the pixel point in the first infrared image;
The first weight determining subunit is configured to input the first luminance value and the second luminance value to a preset luminance weight calculation formula, so as to obtain a first luminance fusion weight of the first luminance value and a second luminance fusion weight of the second luminance value.
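The preset brightness weight calculation formula itself is not reproduced in this excerpt; as a hypothetical stand-in, a Gaussian weighting around mid-gray would compensate the darker visible image and suppress the over-bright infrared image:

    import numpy as np

    def luminance_weights(y_vis, y_ir, sigma=64.0):
        """Hypothetical brightness weight formula (the patent's preset
        formula is not reproduced here): favour whichever source lies
        closer to mid-gray, so the dark visible image is compensated and
        the over-bright infrared image is suppressed."""
        y_vis = y_vis.astype(np.float32)
        y_ir = y_ir.astype(np.float32)
        w_vis = np.exp(-((y_vis - 128.0) ** 2) / (2.0 * sigma ** 2))
        w_ir = np.exp(-((y_ir - 128.0) ** 2) / (2.0 * sigma ** 2))
        total = w_vis + w_ir + 1e-6
        return w_vis / total, w_ir / total  # first and second brightness fusion weights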
Optionally, in an embodiment of the present invention, the weight calculating unit includes:
The neighborhood processing subunit is used for performing neighborhood processing on the pixel point to obtain a first infrared brightness value of the neighborhood pixels of the pixel point in the first infrared image and a first visible light brightness value of the neighborhood pixels in the first visible light image;
the second weight determining subunit is configured to input the first visible light brightness value and the first infrared brightness value to a preset detail layer fusion weight calculation formula, and determine a first detail layer fusion weight of the pixel point and a second detail layer fusion weight of the pixel point.
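Again, the preset detail layer fusion weight formula is not given in this excerpt; a hypothetical neighborhood-based choice is to weight each source by its local standard deviation, so the image with richer local detail dominates the fused detail layer:

    import cv2
    import numpy as np

    def detail_weights(y_vis, y_ir, ksize=7):
        """Hypothetical neighborhood-based detail layer weights (the preset
        formula is defined elsewhere): weight each source by the standard
        deviation of its neighborhood."""
        def local_std(y):
            y = y.astype(np.float32)
            mean = cv2.blur(y, (ksize, ksize))
            mean_sq = cv2.blur(y * y, (ksize, ksize))
            return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
        s_vis, s_ir = local_std(y_vis), local_std(y_ir)
        total = s_vis + s_ir + 1e-6
        return s_vis / total, s_ir / total  # first and second detail layer fusion weights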
Optionally, in an embodiment of the present invention, the fusion unit includes:
the sharpening processing subunit is used for respectively carrying out sharpening processing on the first visible light image and the first infrared image to obtain a first detail layer image of the first visible light image and a second detail layer image of the first infrared image;
The second brightness acquisition subunit is used for acquiring a first detail value of the pixel point in the first detail layer image and a second detail value of the pixel point in the second detail layer image;
The initial fusion subunit is used for calculating the first brightness fusion weight, the second brightness fusion weight, the first detail layer fusion weight and the second detail layer fusion weight to obtain an initial target brightness value of the pixel point;
And the mapping subunit is used for performing gamma mapping on the initial target brightness value to obtain a target brightness value.
Optionally, in an embodiment of the present invention, the sharpening processing subunit is configured to:
Performing two-dimensional Gaussian filtering on an original brightness value of an original image to obtain a Gaussian blur image, and determining the Gaussian brightness value of each pixel point in the Gaussian blur image; wherein the original image is the first visible light image or the first infrared image;
For each pixel point, obtaining a detail value according to the difference between the original brightness value and the Gaussian brightness value;
Performing two-dimensional average filtering on the original image to obtain an average blurred image, and determining an average brightness value of each pixel point in the average blurred image;
Determining an initial detail value corresponding to the mean brightness value according to a first correspondence between brightness values and detail values; in the first correspondence, each detail value corresponds to a mean brightness value within a continuous numerical range;
Determining a detail value corresponding to the initial detail value; if the initial detail value is smaller than a preset detail threshold value, the detail value is zero; otherwise, the detail value is equal to the initial detail value;
And enhancing the detail value through a preset enhancement coefficient to obtain an enhanced detail value.
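The following sketch strings these steps together. The first correspondence table, detail threshold, and enhancement coefficient are presets whose real values are not given in this excerpt; the placeholder lookup below treats the mean-brightness correspondence as an adaptive noise gate on the Gaussian-difference detail, which is one plausible reading:

    import cv2
    import numpy as np

    def extract_detail_layer(y, detail_threshold=4.0, enhance_coeff=1.5):
        """Illustrative detail extraction following the steps above; the
        lookup table, threshold and enhancement coefficient are placeholder
        presets."""
        y = y.astype(np.float32)
        # Two-dimensional Gaussian filtering; detail = original - Gaussian brightness.
        gauss = cv2.GaussianBlur(y, (5, 5), 1.0)
        detail = y - gauss
        # Two-dimensional mean filtering gives the neighborhood mean brightness.
        mean = cv2.blur(y, (5, 5))
        # "First correspondence": each continuous range of mean brightness maps
        # to one initial detail value (placeholder piecewise-constant lookup).
        bins = np.array([0.0, 64.0, 128.0, 192.0, 256.0], dtype=np.float32)
        lut = np.array([2.0, 4.0, 6.0, 8.0], dtype=np.float32)
        initial = lut[np.clip(np.digitize(mean, bins) - 1, 0, 3)]
        # Detail whose initial value is below the preset threshold is zeroed;
        # the rest is kept and enhanced by the preset coefficient.
        detail = np.where(initial < detail_threshold, 0.0, detail)
        return detail * enhance_coeff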
The image fusion apparatus provided by the embodiment of the present invention can implement each process of the method embodiments of figs. 1 to 3; to avoid repetition, the details are not repeated here.
In the embodiment of the present invention, after the image acquisition module 501 acquires a visible light image and an infrared image of a target image, the pixel filling module 502 performs pixel filling on each to obtain a first visible light image and a first infrared image; the fusion processing module 503 sequentially performs brightness fusion and detail layer fusion on the first visible light image and the first infrared image according to a preset fusion rule to obtain a target brightness value and a target chromaticity value of each pixel point in the target image; and the image fusion module 504 obtains a fused image from the target brightness value and target chromaticity value of each pixel point. Brightness fusion and detail layer fusion improve the brightness and detail information of the fused image while retaining its color information, and the resolution of the fused image is increased. The process has low complexity and a small computation amount, so it is convenient to integrate into front-end equipment, easy to port to various platforms, and able to meet the real-time processing requirements of video signals; the method is widely applicable and can be applied to image acquisition scenarios under low illumination. The embodiment of the present invention thus solves the problems of prior-art infrared/visible fusion methods, namely their large computation amount and limited usage scenarios.
In another aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, a bus, and a computer program stored in the memory and capable of running on the processor, where the processor implements the steps in the image fusion method described above when executing the program.
For example, fig. 6 shows a schematic physical structure of an electronic device.
As shown in fig. 6, the electronic device may include: processor 610, communication interface (Communications Interface) 620, memory 630, and communication bus 640, wherein processor 610, communication interface 620, memory 630 communicate with each other via communication bus 640. The processor 610 may call logic instructions in the memory 630 to perform the following methods:
Obtaining a visible light image and an infrared image of a target image;
Respectively filling pixels of the visible light image and the infrared image to obtain a first visible light image and a first infrared image;
According to a preset fusion rule, sequentially carrying out brightness fusion and detail layer fusion on the first visible light image and the first infrared image to obtain a target brightness value and a target chromaticity value of each pixel point in the target image;
And obtaining a fusion image according to the target brightness value and the target chromaticity value of each pixel point.
Further, the logic instructions in the memory 630 may be implemented in the form of software functional units and, when sold or used as a stand-alone product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In still another aspect, an embodiment of the present invention further provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image fusion method provided in the above embodiments, for example comprising:
Obtaining a visible light image and an infrared image of a target image;
Respectively filling pixels of the visible light image and the infrared image to obtain a first visible light image and a first infrared image;
According to a preset fusion rule, sequentially carrying out brightness fusion and detail layer fusion on the first visible light image and the first infrared image to obtain a target brightness value and a target chromaticity value of each pixel point in the target image;
And obtaining a fusion image according to the target brightness value and the target chromaticity value of each pixel point.
The apparatus embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, and those of ordinary skill in the art can understand and implement it without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by software plus a necessary general hardware platform, or, of course, by hardware. Based on this understanding, the foregoing technical solution, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and comprising several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the various embodiments or in parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and that such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. A method of image fusion, the method comprising:
Obtaining a visible light image and an infrared image of a target image;
Respectively filling pixels of the visible light image and the infrared image to obtain a first visible light image and a first infrared image;
According to a preset fusion rule, sequentially carrying out brightness fusion and detail layer fusion on the first visible light image and the first infrared image to obtain a target brightness value and a target chromaticity value of each pixel point in the target image; wherein,
The step of sequentially performing brightness fusion and detail layer fusion on the first visible light image and the first infrared image according to a preset fusion rule to obtain a target brightness value and a target chromaticity value of each pixel point in the target image comprises the following steps:
Sequentially carrying out brightness fusion and detail layer fusion on the first visible light image and the first infrared image to obtain a target brightness value of each pixel point in the target image;
Mapping the brightness change ratio to a chromaticity domain to obtain a target chromaticity value of each pixel point; wherein the brightness change ratio is a ratio between the target brightness value and an original brightness value of the pixel point in the first visible light image; wherein,
The step of sequentially performing brightness fusion and detail layer fusion on the first visible light image and the first infrared image to obtain a target brightness value of each pixel point in the target image comprises the following steps:
For each pixel point, calculating a first brightness fusion weight of the pixel point in the first visible light image and a second brightness fusion weight of the pixel point in the first infrared image; the brightness fusion weight is used for adjusting brightness of the pixel points in brightness fusion;
Calculating a first detail layer fusion weight of the pixel point in the first visible light image and a second detail layer fusion weight of the pixel point in the first infrared image; the detail layer fusion weight is used for adjusting brightness of the pixel points in detail layer fusion;
According to the first brightness fusion weight, the second brightness fusion weight, the first detail layer fusion weight and the second detail layer fusion weight, carrying out brightness fusion and detail layer fusion on the pixel points to obtain target brightness values of the pixel points; wherein,
The step of performing luminance fusion and detail layer fusion on the pixel point according to the first luminance fusion weight, the second luminance fusion weight, the first detail layer fusion weight and the second detail layer fusion weight to obtain a target luminance value of the pixel point comprises the following steps:
Respectively sharpening the first visible light image and the first infrared image to obtain a first detail layer image of the first visible light image and a second detail layer image of the first infrared image;
acquiring a first detail value of the pixel point in a first detail layer image and a second detail value of the pixel point in a second detail layer image;
Calculating the first brightness fusion weight, the second brightness fusion weight, the first detail layer fusion weight and the second detail layer fusion weight to obtain an initial target brightness value of the pixel point;
Performing gamma mapping on the initial target brightness value to obtain a target brightness value;
The step of sharpening the first visible light image and the first infrared image to obtain a first detail layer image of the first visible light image and a second detail layer image of the first infrared image respectively includes:
Performing two-dimensional Gaussian filtering on an original brightness value of an original image to obtain a Gaussian blur image, and determining the Gaussian brightness value of each pixel point in the Gaussian blur image; wherein the original image is the first visible light image or the first infrared image;
For each pixel point, obtaining a detail value according to the difference between the original brightness value and the Gaussian brightness value;
Performing two-dimensional average filtering on the original image to obtain an average blurred image, and determining an average brightness value of each pixel point in the average blurred image;
Determining an initial detail value corresponding to the mean value brightness value according to a first corresponding relation between the brightness value and the detail value; in the first corresponding relation, each detail value corresponds to a mean brightness value of a continuous numerical range;
Determining a detail value corresponding to the initial detail value; if the initial detail value is smaller than a preset detail threshold value, the detail value is zero; otherwise, the detail value is equal to the initial detail value;
Reinforcing the detail value through a preset reinforcing coefficient to obtain a reinforced detail value;
And obtaining a fusion image according to the target brightness value and the target chromaticity value of each pixel point.
2. The method of image fusion according to claim 1, wherein the step of performing pixel filling on the visible light image and the infrared image to obtain a first visible light image and a first infrared image respectively includes:
Sampling interpolation processing is carried out on the visible light image to obtain a first visible light image;
And performing super-resolution processing on the infrared image to obtain a first infrared image.
3. The image fusion method of claim 1, wherein the step of calculating a first luminance fusion weight of the pixel point in the first visible image and a second luminance fusion weight of the pixel point in the first infrared image comprises:
acquiring a first brightness value of the pixel point in the first visible light image and a second brightness value of the pixel point in the first infrared image;
And inputting the first brightness value and the second brightness value into a preset brightness weight calculation formula to obtain a first brightness fusion weight of the first brightness value and a second brightness fusion weight of the second brightness value.
4. The image fusion method of claim 1, wherein the step of calculating a first detail layer fusion weight of the pixel point in the first visible light image and a second detail layer fusion weight of the pixel point in the first infrared image comprises:
Carrying out neighborhood processing on the pixel points to obtain a first infrared brightness value of a neighborhood pixel of the pixel points in the first infrared image and a first visible brightness value of the neighborhood pixel in the first visible image;
And inputting the first visible light brightness value and the first infrared brightness value into a preset detail layer fusion weight calculation formula, and determining a first detail layer fusion weight of the pixel point and a second detail layer fusion weight of the pixel point.
5. An image fusion apparatus, the apparatus comprising:
The image acquisition module is used for acquiring a visible light image and an infrared image of the target image;
The pixel filling module is used for respectively filling pixels into the visible light image and the infrared image to obtain a first visible light image and a first infrared image;
The fusion processing module is used for sequentially carrying out brightness fusion and detail layer fusion on the first visible light image and the first infrared image according to a preset fusion rule to obtain a target brightness value and a target chromaticity value of each pixel point in the target image; wherein,
The fusion processing module comprises:
The processing sub-module is used for sequentially carrying out brightness fusion and detail layer fusion on the first visible light image and the first infrared image to obtain a target brightness value of each pixel point in the target image;
the mapping submodule is used for mapping the brightness change ratio to a chromaticity domain to obtain a target chromaticity value of each pixel point; wherein the brightness change ratio is a ratio between the target brightness value and an original brightness value of the pixel point in the first visible light image; wherein,
The processing sub-module comprises:
The weight calculation unit is used for calculating a first brightness fusion weight of the pixel point in the first visible light image and a second brightness fusion weight of the pixel point in the first infrared image for each pixel point; the brightness fusion weight is used for adjusting brightness of the pixel points in brightness fusion; calculating a first detail layer fusion weight of the pixel point in the first visible light image and a second detail layer fusion weight of the pixel point in the first infrared image; the detail layer fusion weight is used for adjusting brightness of the pixel points in detail layer fusion;
The fusion unit is used for carrying out brightness fusion and detail layer fusion on the pixel points according to the first brightness fusion weight, the second brightness fusion weight, the first detail layer fusion weight and the second detail layer fusion weight to obtain target brightness values of the pixel points; wherein,
The fusion unit includes:
the sharpening processing subunit is used for respectively carrying out sharpening processing on the first visible light image and the first infrared image to obtain a first detail layer image of the first visible light image and a second detail layer image of the first infrared image;
The second brightness acquisition subunit is used for acquiring a first detail value of the pixel point in the first detail layer image and a second detail value of the pixel point in the second detail layer image;
The initial fusion subunit is used for calculating the first brightness fusion weight, the second brightness fusion weight, the first detail layer fusion weight and the second detail layer fusion weight to obtain an initial target brightness value of the pixel point;
a mapping subunit, configured to perform gamma mapping on the initial target brightness value to obtain a target brightness value; wherein,
The sharpening processing subunit is configured to:
Performing two-dimensional Gaussian filtering on an original brightness value of an original image to obtain a Gaussian blur image, and determining the Gaussian brightness value of each pixel point in the Gaussian blur image; wherein the original image is the first visible light image or the first infrared image;
For each pixel point, obtaining a detail value according to the difference between the original brightness value and the Gaussian brightness value;
Performing two-dimensional average filtering on the original image to obtain an average blurred image, and determining an average brightness value of each pixel point in the average blurred image;
Determining an initial detail value corresponding to the mean value brightness value according to a first corresponding relation between the brightness value and the detail value; in the first corresponding relation, each detail value corresponds to a mean brightness value of a continuous numerical range;
Determining a detail value corresponding to the initial detail value; if the initial detail value is smaller than a preset detail threshold value, the detail value is zero; otherwise, the detail value is equal to the initial detail value;
Reinforcing the detail value through a preset reinforcing coefficient to obtain a reinforced detail value;
And the image fusion module is used for obtaining a fusion image according to the target brightness value and the target chromaticity value of each pixel point.
6. An electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, wherein the computer program when executed by the processor implements the steps of the image fusion method according to any one of claims 1 to 4.
7. A computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the steps of the image fusion method according to any one of claims 1 to 4.
CN201911019226.5A 2019-10-24 2019-10-24 Image fusion method and device Active CN112712485B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911019226.5A CN112712485B (en) 2019-10-24 2019-10-24 Image fusion method and device

Publications (2)

Publication Number Publication Date
CN112712485A CN112712485A (en) 2021-04-27
CN112712485B (en) 2024-06-04

Family

ID=75540398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911019226.5A Active CN112712485B (en) 2019-10-24 2019-10-24 Image fusion method and device

Country Status (1)

Country Link
CN (1) CN112712485B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113177905A (en) * 2021-05-21 2021-07-27 浙江大华技术股份有限公司 Image acquisition method, device, equipment and medium
CN113298177B (en) * 2021-06-11 2023-04-28 华南理工大学 Night image coloring method, device, medium and equipment
CN114240813A (en) * 2021-12-14 2022-03-25 成都微光集电科技有限公司 Image processing method, apparatus, device and storage medium thereof
CN114500850B (en) * 2022-02-22 2024-01-19 锐芯微电子股份有限公司 Image processing method, device, system and readable storage medium
CN114693581B (en) * 2022-06-02 2022-09-06 深圳市海清视讯科技有限公司 Image fusion processing method, device, equipment and storage medium
CN117152031A (en) * 2022-09-22 2023-12-01 深圳Tcl新技术有限公司 Image fusion method and device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006229752A (en) * 2005-02-18 2006-08-31 Ricoh Co Ltd Image processor, image processing method, program for making computer execute the method, and recording medium
CN102567977A (en) * 2011-12-31 2012-07-11 南京理工大学 Self-adaptive fusing method of infrared polarization image based on wavelets
WO2016205419A1 (en) * 2015-06-15 2016-12-22 Flir Systems Ab Contrast-enhanced combined image generation systems and methods
WO2017020595A1 (en) * 2015-08-05 2017-02-09 武汉高德红外股份有限公司 Visible light image and infrared image fusion processing system and fusion method
CN106952245A (en) * 2017-03-07 2017-07-14 深圳职业技术学院 A kind of processing method and system for visible images of taking photo by plane
CN108259774A (en) * 2018-01-31 2018-07-06 珠海市杰理科技股份有限公司 Image combining method, system and equipment
CN110136183A (en) * 2018-02-09 2019-08-16 华为技术有限公司 A kind of method and relevant device of image procossing
CN110363732A (en) * 2018-04-11 2019-10-22 杭州海康威视数字技术股份有限公司 A kind of image interfusion method and its device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Pan Zhu et al., "Fusion of infrared-visible images using improved multi-scale top-hat transform and suitable fusion rules," Infrared Physics & Technology, vol. 81, pp. 282-295 *
Yang Shengwei, Zhang Zhihua, Kong Lingjun, Wang Qian, "Infrared and color visible light image fusion based on NSST and IHS," Packaging Engineering, vol. 40, no. 11, pp. 194-202 *
Yang Aiping et al., "Low-light image enhancement based on tone mapping and dark channel fusion," Journal of Tianjin University (Science and Technology), vol. 51, no. 7, pp. 768-776 *

Also Published As

Publication number Publication date
CN112712485A (en) 2021-04-27

Similar Documents

Publication Publication Date Title
CN112712485B (en) Image fusion method and device
CN112767289B (en) Image fusion method, device, medium and electronic equipment
Galdran Image dehazing by artificial multiple-exposure image fusion
CN107845128B (en) Multi-exposure high-dynamic image reconstruction method with multi-scale detail fusion
CN110728648B (en) Image fusion method and device, electronic equipment and readable storage medium
CN108055452B (en) Image processing method, device and equipment
CN110519489B (en) Image acquisition method and device
CN108205796B (en) Multi-exposure image fusion method and device
CN110766639B (en) Image enhancement method and device, mobile equipment and computer readable storage medium
CN108154514B (en) Image processing method, device and equipment
CN107370958A (en) Image virtualization processing method, device and camera terminal
CN110490811B (en) Image noise reduction device and image noise reduction method
CN107509031A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN112734650A (en) Virtual multi-exposure fusion based uneven illumination image enhancement method
WO2007017835A2 (en) Adaptive exposure control
CN107493432A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN113129391B (en) Multi-exposure fusion method based on multi-exposure image feature distribution weight
CN112785534A (en) Ghost-removing multi-exposure image fusion method in dynamic scene
CN111209775B (en) Signal lamp image processing method, device, equipment and storage medium
CN110827225A (en) Non-uniform illumination underwater image enhancement method based on double exposure frame
WO2020099893A1 (en) Image enhancement system and method
CN110009574A (en) A kind of method that brightness, color adaptively inversely generate high dynamic range images with details low dynamic range echograms abundant
CN109325905B (en) Image processing method, image processing device, computer readable storage medium and electronic apparatus
CN110517210B (en) Multi-exposure welding area image fusion method based on Haar wavelet gradient reconstruction
CN116468636A (en) Low-illumination enhancement method, device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant