CN112767291A - Visible light image and infrared image fusion method and device and readable storage medium - Google Patents


Publication number
CN112767291A
Authority
CN
China
Prior art keywords: image, infrared, brightness, fusion, visible light
Legal status: Granted
Application number
CN202110003609.4A
Other languages
Chinese (zh)
Other versions
CN112767291B (en)
Inventor
陈虹宇
俞克强
王松
张东
刘晓沐
杨志强
Current Assignee
Zhejiang Huagan Technology Co ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202110003609.4A
Priority claimed from CN202110003609.4A
Publication of CN112767291A
Application granted
Publication of CN112767291B
Current legal status: Active

Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/11: Region-based segmentation
    • G06T 7/33: Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 7/90: Determination of colour characteristics
    • G06T 2207/10048: Image acquisition modality - infrared image
    • G06T 2207/20221: Image fusion; image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a method for fusing a visible light image and an infrared image, which comprises the following steps: acquiring a visible light image and an infrared image, wherein the visible light image comprises a visible light brightness image and a visible light color image, and the infrared image comprises an infrared brightness image and an infrared pseudo color image; dividing the infrared pseudo color image/visible light color image into regions, fusing the infrared pseudo color image and the visible light color image of each region respectively, and merging the fused color images of the regions to obtain a color fusion image; fusing the visible light brightness image and the infrared brightness image to obtain a brightness fusion image; and combining the color fusion image and the brightness fusion image to obtain a visible light-infrared fusion image. In this way, the invention can make full use of visible light image information and infrared image information.

Description

Visible light image and infrared image fusion method and device and readable storage medium
Technical Field
The invention relates to the technical field of image processing, and in particular to a visible light image and infrared image fusion method, a visible light image and infrared image fusion device, and a readable storage medium.
Background
Infrared thermal imaging technology is widely applied in fields such as industrial inspection, fire fighting and military applications. In some applications, in order to acquire more scene information, a binocular technique is often adopted, i.e., visible light imaging and infrared imaging are integrated on the same monitoring device. On such visible light-infrared thermal imaging binocular equipment, image fusion is the commonly adopted technique for enabling the infrared image to better reflect scene information.
Image fusion technology can combine images formed by two different types of imaging sensors, or images captured under different focal lengths, exposures and other conditions, into a single image with richer information content that is better suited to post-processing and research. However, most existing fusion methods do not consider changes in the usage scene and do not make full use of the available information, so the fused image cannot obtain the optimal scene information of each region according to the local characteristics of the image.
Disclosure of Invention
The invention mainly solves the technical problem of providing a method, equipment and a readable storage medium for fusing a visible light image and an infrared image, which can fully utilize visible light image information and infrared image information.
In order to solve the technical problems, the invention adopts a technical scheme that: providing a method for fusing a visible light image and an infrared image, wherein the method for fusing the visible light image and the infrared image comprises the steps of obtaining the visible light image and the infrared image, the visible light image comprises a visible light brightness image and a visible light color image, and the infrared image comprises an infrared brightness image and an infrared pseudo color image; dividing the infrared pseudo color image/visible light color image into regions, respectively fusing the infrared pseudo color image and the visible light color image of each region, and merging the fused color images of each region to obtain a color fused image; fusing the visible light brightness image and the infrared brightness image to obtain a brightness fused image; and combining the color fusion image and the brightness fusion image to obtain a visible light-infrared fusion image.
Wherein, the area division of the infrared pseudo color image comprises the following steps: acquiring pixel values of all pixel points of an infrared brightness image corresponding to the infrared pseudo-color image; and determining a threshold interval where the pixel value is located, and dividing the infrared pseudo color image/visible color image into areas corresponding to the threshold interval.
The step of determining the threshold interval in which the pixel value is located includes: and setting a threshold interval based on the fusion rate, wherein the fusion rate comprises the proportion of the infrared image information and the visible light image information in the visible light-infrared fusion image.
Wherein, the area division of the infrared pseudo color image comprises the following steps: dividing the infrared pseudo color image into a first area and a second area, wherein the pixel value of each pixel point of the infrared brightness image corresponding to the infrared pseudo color image in the first area is smaller than a first threshold value, and the pixel value of each pixel point of the infrared brightness image corresponding to the infrared pseudo color image in the second area is larger than or equal to the first threshold value; the fusing of the infrared pseudo color image and the visible color image of each region respectively comprises the following steps: fusing the visible light color image and the infrared pseudo color image of the first area by utilizing a first fusion strategy; the first fusion strategy includes: and respectively determining the visible color image fusion weight and the infrared pseudo color image fusion weight of each pixel point, and performing weighted summation on the visible color image color value and the infrared pseudo color image color value of each pixel point in the first area to obtain a first color fusion image.
Wherein, respectively determining the visible light color image fusion weight and the infrared pseudo color image fusion weight of each pixel point comprises: the fusion weight of the infrared pseudo color image is the ratio of the pixel value of the infrared brightness image to the first threshold, and the sum of the fusion weight of the infrared pseudo color image and the fusion weight of the visible color image is 1.
Wherein, respectively determining the visible light color image fusion weight and the infrared pseudo color image fusion weight of each pixel point comprises: the smaller the pixel value of the infrared brightness image corresponding to the infrared pseudo-color image is, the smaller the fusion weight of the infrared pseudo-color image is; the sum of the visible light color image fusion weight and the infrared pseudo color image fusion weight is 1.
Wherein, the area division of the infrared pseudo color image comprises the following steps: dividing the infrared pseudo color image into a first area and a second area, wherein the pixel value of each pixel point of the infrared brightness image corresponding to the infrared pseudo color image in the first area is smaller than a first threshold value, and the pixel value of each pixel point of the infrared brightness image corresponding to the infrared pseudo color image in the second area is larger than or equal to the first threshold value; the fusing of the infrared pseudo color image and the visible color image of each region respectively comprises the following steps: fusing the visible light color image and the infrared pseudo color image of the second area by using a second fusion strategy; the second fusion strategy includes: and taking the color value of the infrared pseudo-color image of each pixel point in the second area as the color value of the color image after the pixel points are fused.
The first threshold value is set based on the fusion rate, the fusion rate comprises the proportion of infrared image information and visible light image information in the visible light-infrared fusion image, and the larger the fusion rate is, the smaller the first threshold value is.
The method for fusing the visible light brightness image and the infrared brightness image to obtain the brightness fused image comprises the following steps: respectively carrying out multi-scale decomposition on the visible light brightness image and the infrared brightness image; respectively fusing the visible light brightness image and the infrared brightness image of each scale; and combining the brightness fusion images of all scales to obtain a brightness fusion image.
The steps of respectively carrying out multi-scale decomposition on the visible light brightness image and the infrared brightness image and fusing the visible light brightness image and the infrared brightness image of each scale comprise: performing multi-scale decomposition on the visible light brightness image and the infrared brightness image respectively, and performing high-frequency extraction based on low-pass filtering on each scale to obtain a visible light brightness background image, visible light brightness detail images of multiple scales, an infrared brightness background image and infrared brightness detail images of multiple scales; fusing the visible light brightness background image and the infrared brightness background image to obtain a brightness background fused image; respectively fusing the visible light brightness detail images and the infrared brightness detail images of all scales to obtain a plurality of brightness detail fused images; and combining the brightness background fusion image and the brightness detail fusion images to obtain a brightness fusion image.
The method for fusing the visible light brightness background image and the infrared brightness background image to obtain the brightness background fusion image comprises the following steps: determining the fusion weight of the visible light brightness background image and the fusion weight of the infrared brightness background image based on a fusion rate; and weighting and summing the brightness value of the visible light brightness background image and the brightness value of the infrared brightness background image to obtain the brightness background fusion image; wherein the fusion rate comprises the proportion of infrared image information and visible light image information in the visible light-infrared fusion image, and the sum of the fusion weight of the visible light brightness background image and the fusion weight of the infrared brightness background image is 1.
The method for setting the fusion weight of the visible light brightness background image and the fusion weight of the infrared brightness background image based on the fusion rate comprises the following steps: the larger the fusion rate, the larger the fusion weight of the infrared luminance background image.
The method for fusing the visible light brightness detail images and the infrared brightness detail images of all scales to obtain a plurality of brightness detail fusion images comprises the following steps: respectively determining the visible light brightness detail image fusion weight and the infrared brightness detail image fusion weight of each pixel point in each scale; respectively carrying out weighted summation on the brightness value of the visible light brightness detail image and the brightness value of the infrared brightness detail image of each pixel point in each scale to obtain a brightness detail fusion image of each scale; and setting the visible light brightness detail image fusion weight and the infrared brightness detail image fusion weight of each pixel point in each scale based on a weight map, wherein the sum of the visible light brightness detail image fusion weight and the infrared brightness detail image fusion weight is 1.
The method for setting the visible light brightness detail image fusion weight and the infrared brightness detail image fusion weight of each pixel point in each scale based on the weight map comprises the following steps: comparing the visible light brightness detail image and the infrared brightness detail image of the same scale pixel by pixel, if the absolute value of the pixel value of the infrared brightness image is larger than that of the visible light brightness image, marking the corresponding coordinate in the weight map as 1, otherwise, marking the coordinate as 0; performing Gaussian filtering on the marked weight map; and taking the pixel value of each pixel point in the weight map as a fusion weight value of the infrared brightness detail image.
The combining the brightness background fusion image and the brightness detail fusion image to obtain the brightness fusion image comprises the following steps: and respectively multiplying the brightness detail fusion images of all scales by corresponding amplification coefficients to enhance the brightness detail fusion images, wherein each amplification coefficient is more than or equal to 1.
And setting an amplification factor based on a fusion rate, wherein the fusion rate comprises the proportion of visible light image information and infrared image information in the visible light-infrared fusion image.
Wherein setting the magnification factor based on the fusion ratio includes: the greater the fusion rate, the greater the magnification factor.
In order to solve the technical problem, the invention adopts another technical scheme that: a method for fusing a visible light image and an infrared image comprises the following steps: acquiring a visible light image and an infrared image, wherein the visible light image comprises a visible light brightness image and a visible light color image, and the infrared image comprises an infrared brightness image and an infrared pseudo color image; respectively carrying out multi-scale decomposition on the visible light brightness image and the infrared brightness image, fusing the visible light brightness image and the infrared brightness image of each scale, and merging the brightness fused images of each scale to obtain a brightness fused image, wherein the fusion weight of the visible light brightness image and the fusion weight of the infrared brightness image are set based on the fusion rate, and the fusion rate comprises the proportion of infrared image information and visible light image information in the visible light-infrared fused image; fusing the visible light color image and the infrared pseudo color image to obtain a color fused image; and combining the color fusion image and the brightness fusion image to obtain a visible light-infrared fusion image.
In order to solve the technical problem, the invention adopts another technical scheme that: a visible light image and infrared image fusion device comprises a processor, wherein the processor is used for executing instructions to realize the visible light image and infrared image fusion method.
In order to solve the technical problem, the invention adopts another technical scheme that: a computer readable storage medium for storing instructions/program data executable to implement the visible light image and infrared image fusion method described above.
The invention has the beneficial effects that: different from the prior art, the method and the device respectively perform color fusion and brightness fusion on the visible light image and the infrared image, divide the infrared pseudo color image/the visible light color image into a plurality of regions during color fusion, can perform color fusion on the images in different regions by adopting different fusion strategies, and can more fully and flexibly utilize visible light information and infrared information in the images to obtain the optimal scene information of each region.
Drawings
FIG. 1 is a schematic flow chart of a method for fusing a visible light image and an infrared image according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart illustrating a color image fusion method according to an embodiment of the present disclosure;
FIG. 3 is a schematic view illustrating a region division process of a color image fusion method according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating a luminance image fusion method according to an embodiment of the present disclosure;
FIG. 5 is a schematic view illustrating a multi-scale decomposition process of a visible/infrared luminance image according to an embodiment of the present disclosure;
FIG. 6 is a schematic multi-scale fusion process of a visible light luminance image and an infrared luminance image according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram illustrating the fusion rate adjustment in an embodiment of the present application;
FIG. 8 is a schematic flow chart illustrating another method for fusing a visible light image and an infrared image according to an embodiment of the present disclosure;
FIG. 9 is a schematic flow chart illustrating a further method for fusing a visible light image and an infrared image according to an embodiment of the present disclosure;
FIG. 10 is a schematic structural diagram of a device for fusing a visible light image and an infrared image according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a visible light image and infrared image fusion device in an embodiment of the present application;
FIG. 12 is a schematic structural diagram of a computer-readable storage medium in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer and clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples.
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for fusing a visible light image and an infrared image according to an embodiment of the present disclosure, and it should be noted that, if substantially the same result is obtained, the present embodiment is not limited to the flow chart shown in fig. 1. As shown in fig. 1, this embodiment includes:
s120: and acquiring a visible light image and an infrared image, wherein the visible light image comprises a visible light brightness image and a visible light color image, and the infrared image comprises an infrared brightness image and an infrared pseudo color image.
In this embodiment, the device that captures the visible light image and the infrared image at the same time may be used to simultaneously acquire the visible light image and the infrared image, such as a thermal infrared imager. The visible light image and the infrared image may also be acquired separately using different devices, such as a visible light camera and an infrared camera. When different devices are used for acquiring the visible light image and the infrared image, the two devices can be placed at the same position, and the optical axes of the lenses are in the same direction and are parallel to acquire the visible light image and the infrared image at the same angle. The two devices can also be placed at different positions to acquire visible light images and infrared images at different angles. The embodiment does not limit the device used and the image acquisition angle.
The visible light image and the infrared image acquired by the equipment can be directly integrated and fused, and the visible light image and the infrared image can be sampled to acquire the image of the area to be fused. The visible light image and the infrared image to be fused need to have the same resolution, the same resolution can be set when the imaging device collects the images, and the image resolution can be adjusted after the images are obtained. The visible light image and the infrared image to be fused need to correspond to each feature point, the visible light image and the infrared image can be registered before fusion, and the feature points of the two images can be corresponded by adopting a registration method such as a calibration method, a feature point method and the like. When the visible light image and the infrared image are acquired by the same equipment or different equipment at the same angle, image registration can be directly carried out; when the visible light image and the infrared image are acquired at different angles, the angles need to be adjusted before registration.
The visible light image data and the infrared image data to be fused can be converted into YUV data, and the brightness image and the color image are fused respectively. YUV is a color coding method, in which "Y" represents brightness (Luma) and gray scale value, and "U" and "V" represent Chroma (Chroma) and saturation, which are used to describe the color and saturation of an image for specifying the color of a pixel. The visible light image data can be directly converted into YUV data to obtain a visible light brightness image and a visible light color image. The method can acquire the brightness data of the infrared image to obtain the infrared brightness image, and generate an infrared pseudo-color image from the infrared brightness data according to the mapping relation, wherein the color information in the pseudo-color image and the brightness information in the infrared brightness image form YUV data of the infrared image. The method for generating the infrared pseudo-color image can be implemented by any existing method, and is not limited herein.
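As an illustrative sketch of this preparation step (Python with OpenCV and NumPy; the colormap choice, function names and data types are assumptions, since the patent does not specify how the pseudo-color mapping is implemented):
```python
import cv2
import numpy as np

def prepare_inputs(visible_bgr, ir_luma_u8):
    """Split the visible image into brightness/color planes and build an
    infrared pseudo-color image from the infrared brightness image."""
    # Visible image: convert to YUV and separate Y (brightness) from U/V (color).
    vis_yuv = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2YUV)
    vis_y = vis_yuv[:, :, 0].astype(np.float32)
    vis_uv = vis_yuv[:, :, 1:].astype(np.float32)

    # Infrared image: the brightness plane is used directly; the pseudo-color
    # image is generated from the brightness via a fixed mapping (a standard
    # colormap is used here purely for illustration).
    ir_y = ir_luma_u8.astype(np.float32)
    ir_pseudo = cv2.applyColorMap(ir_luma_u8, cv2.COLORMAP_JET)
    ir_uv = cv2.cvtColor(ir_pseudo, cv2.COLOR_BGR2YUV)[:, :, 1:].astype(np.float32)

    return vis_y, vis_uv, ir_y, ir_uv
```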
S140: and carrying out region division on the infrared pseudo color image/visible light color image, respectively fusing the infrared pseudo color image and the visible light color image of each region, and merging the color images fused in each region to obtain a color fused image.
When the color image is divided into regions, the division may be performed based on the infrared pseudo color image, or may be performed based on the visible light color image. The method can only perform region division on the infrared pseudo color image, and then fuse the infrared pseudo color image of each region with the visible light color image corresponding to the region, at this time, although the region division is not performed on the visible light color image, the visible light color image is virtually fused in regions; similarly, only the visible color image may be divided into regions, and the visible color image of each region may be fused with the infrared pseudo color image corresponding to the region. By carrying out region division on the color images, different strategies can be adopted for fusing different regions to obtain fused images with different styles.
S160: and fusing the visible light brightness image and the infrared brightness image to obtain a brightness fused image.
Different fusion modes can be adopted for the brightness image in different scenes, including but not limited to segmented fusion, multi-scale fusion and the like.
The order of step S140 and step S160 is not limited, and in another embodiment, step S160 may precede step S140, or color fusion and brightness fusion may be performed simultaneously and in parallel.
S180: and combining the color fusion image and the brightness fusion image to obtain a visible light-infrared fusion image.
In the embodiment, the visible light image and the infrared image are respectively subjected to color fusion and brightness fusion, when the colors are fused, the infrared pseudo color image/the visible light color image is divided into a plurality of areas, the images in different areas can be subjected to color fusion by adopting different fusion strategies, and the visible light information and the infrared information in the images can be more fully and flexibly utilized to obtain the optimal scene information of each area.
When the color image is divided into regions, different division rules can be adopted to obtain different division modes. For example, the infrared pseudo color image/visible color image may be divided into regions according to the color temperature of the color image, the pixel value of the brightness image, and the like.
Referring to fig. 2, fig. 2 is a schematic flow chart of a color image fusion method according to an embodiment of the present application. It should be noted that, if substantially the same result is obtained, the flow sequence shown in fig. 2 is not limited in this embodiment. In this embodiment, region division based on the infrared pseudo color image is taken as an example for description, and the specific method includes:
s241: and acquiring the pixel value of each pixel point of the infrared brightness image corresponding to the infrared pseudo-color image.
In this embodiment, the color image is divided into regions with reference to the pixel values of the brightness image. The infrared brightness image pixels can be traversed to obtain the pixel value of each pixel point, and specifically the absolute value of the pixel value can be taken.
S242: and determining a threshold interval where the pixel value is located, and dividing the infrared pseudo color image/visible color image into areas corresponding to the threshold interval.
One or more threshold values can be preset to form two or more threshold value intervals, the pixel value is compared with the threshold values respectively to see in which interval the pixel value is, and the pixel points in the same interval are divided into the same area. Two or more regions may be divided as needed, and the division of two regions is described as an example in this application, but is not limited thereto.
Referring to fig. 3, fig. 3 is a schematic view illustrating a region division process of a color image fusion method according to an embodiment of the present disclosure. In this embodiment, taking two threshold intervals as an example, the infrared pseudo color image is divided into a first region and a second region.
A first threshold value Thr is preset, and the acquired pixel values ir_y(i) are respectively compared with the first threshold value Thr.
If the infrared brightness image pixel value ir_y(i) at the current pixel point i is less than the threshold Thr, the pixel point is divided into a first area, which can also be called a low-temperature area, and if the infrared brightness image pixel value ir_y(i) at the current pixel point i is greater than or equal to the threshold Thr, the pixel point is divided into a second area, which can also be called a high-temperature area; that is, the pixel value of each pixel point of the infrared luminance image corresponding to the infrared pseudo color image in the first region is smaller than the first threshold, and the pixel value of each pixel point of the infrared luminance image corresponding to the infrared pseudo color image in the second region is greater than or equal to the first threshold. Correspondingly, the visible color image area corresponding to the first area of the infrared pseudo color image can be divided into the first area, and the visible color image area corresponding to the second area of the infrared pseudo color image can be divided into the second area.
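A minimal sketch of this division step (NumPy; illustrative only, with the function name assumed and thr standing for whatever first threshold has been configured):
```python
import numpy as np

def divide_regions(ir_y, thr):
    """Split pixels into the first (low-temperature) region, where the infrared
    brightness value is below the first threshold, and the second
    (high-temperature) region, where it is greater than or equal to it."""
    ir_y = np.asarray(ir_y, dtype=np.float32)
    first_region = ir_y < thr      # low-temperature region mask
    second_region = ~first_region  # high-temperature region mask
    return first_region, second_region
```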
After the regions are divided, the color images of different regions can be fused by adopting the same fusion strategy, and can also be fused by adopting different fusion strategies. In the embodiment, different fusion strategies are adopted to fuse the visible light color image and the infrared pseudo color image of the two regions.
S243: and fusing the visible light color image and the infrared pseudo-color image of the first area.
And fusing the visible light color image and the infrared pseudo-color image of the first area by utilizing a first fusion strategy. And respectively determining the visible color image fusion weight and the infrared pseudo color image fusion weight of each pixel point, and performing weighted summation on the visible color image color value and the infrared pseudo color image color value of each pixel point in the first area to obtain a first color fusion image. The fusion weight of the infrared pseudo color image is the ratio of the pixel value of the infrared brightness image to the first threshold, and the sum of the fusion weight of the infrared pseudo color image and the fusion weight of the visible color image is 1.
The color values of the visible light color image and the infrared pseudo color image are divided into a U component and a V component, and the U component and the V component can be separately fused or integrally fused when the colors are fused. When separately fusing, the same image division mode can be adopted for two color components, different image division modes can be adopted, the same image fusion strategy can be adopted, and different image fusion strategies can be adopted. In this embodiment, the example of fusing the U component and the V component respectively will be described, that is, fusing the visible light color image and the infrared pseudo color image of the first region includes fusing the visible light color image U component and the infrared pseudo color image U component of the first region respectively, and fusing the visible light color image V component and the infrared pseudo color image V component of the first region.
Fusing the visible color image U component and the infrared pseudo color image U component of the first area by using a first fusion strategy comprises the following steps: respectively determining the visible light color image U component fusion weight and the infrared pseudo color image U component fusion weight of each pixel point, carrying out weighted summation on the visible light color image U component and the infrared pseudo color image U component of each pixel point in the first area, traversing each pixel point, and obtaining a first U component color fusion image. The fusion weight of the U component of the infrared pseudo-color image is the ratio of the pixel value of the infrared brightness image to the first threshold, and the sum of the fusion weight of the U component of the infrared pseudo-color image and the fusion weight of the U component of the visible color image is 1.
When the pixel value of the infrared brightness image is smaller, the weight of the U component of the infrared pseudo color image is smaller, and the weight of the U component of the visible color image is larger.
And weighting and summing the U component of the visible light color image and the U component of the infrared pseudo color image of each pixel point in the first area, namely summing the product of the fusion weight of the U component of the infrared pseudo color image and the product of the fusion weight of the U component of the visible light color image and the U component of the visible light color image to obtain the U component color image of the first area.
Similarly, a first fusion strategy is utilized to fuse the visible color image V component and the infrared pseudo color image V component in the first area, and each pixel point is traversed to obtain a first V component color fusion image.
S244: and fusing the visible light color image and the infrared pseudo-color image of the second area.
And fusing the visible light color image and the infrared pseudo-color image of the second area by using a second fusion strategy to obtain a second color fusion image. And taking the color value of the infrared pseudo-color image of each pixel point in the second area as the color value of the color image after the pixel point is fused, namely the color value of the second color fusion image.
In this embodiment, the example of fusing the U component and the V component respectively will be described, that is, fusing the visible light color image and the infrared pseudo color image of the second region includes fusing the visible light color image U component and the infrared pseudo color image U component of the second region respectively, and fusing the visible light color image V component and the infrared pseudo color image V component of the second region.
Fusing the visible light color image and the infrared pseudo color image of the second area by using the second fusion strategy comprises the following steps: directly taking the U component of the infrared pseudo color image of each pixel point in the second area as the U component of the color image after the pixel point is fused, and traversing each pixel point to obtain a second U component color fusion image; and directly taking the V component of the infrared pseudo color image of each pixel point in the second area as the V component of the color image after the pixel point is fused, and traversing each pixel point to obtain a second V component color fusion image fusion_V.
The order of step S243 and step S244 is not limited, and in another embodiment, step S244 may be performed before step S243, or the first region fusion and the second region fusion may be performed simultaneously and in parallel.
In this embodiment, the U-component color image fusion_U is divided into a first U-component color fusion image and a second U-component color fusion image, and the V-component color image fusion_V is divided into a first V-component color fusion image and a second V-component color fusion image. The calculation formulas of the fusion strategy for the U-component color image and the V-component color image are as follows:
fusion_U(i) = (ir_y(i)/Thr) * ir_U(i) + (1 - ir_y(i)/Thr) * vis_U(i), when ir_y(i) < Thr; fusion_U(i) = ir_U(i), when ir_y(i) >= Thr
fusion_V(i) = (ir_y(i)/Thr) * ir_V(i) + (1 - ir_y(i)/Thr) * vis_V(i), when ir_y(i) < Thr; fusion_V(i) = ir_V(i), when ir_y(i) >= Thr
wherein ir_U(i) is the U component of the infrared pseudo color image, vis_U(i) is the U component of the visible color image, ir_V(i) is the V component of the infrared pseudo color image, vis_V(i) is the V component of the visible color image, ir_y(i) is the pixel value of the infrared brightness image at pixel point i, and Thr is the first threshold.
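The piecewise formulas above translate directly into the following sketch (NumPy; illustrative only, with lowercase array names mirroring the symbols defined above and the clipping of the weight to [0, 1] added as an assumption):
```python
import numpy as np

def fuse_color_components(ir_y, ir_u, ir_v, vis_u, vis_v, thr):
    """Segmented color fusion: weighted blending of U/V in the low-temperature
    region, pure infrared pseudo-color U/V in the high-temperature region."""
    ir_y = np.asarray(ir_y, dtype=np.float32)
    w_ir = np.clip(ir_y / float(thr), 0.0, 1.0)  # infrared weight = ir_y / Thr
    w_vis = 1.0 - w_ir                           # the two weights sum to 1

    low = ir_y < thr                             # first (low-temperature) region
    fusion_u = np.where(low, w_ir * ir_u + w_vis * vis_u, ir_u)
    fusion_v = np.where(low, w_ir * ir_v + w_vis * vis_v, ir_v)
    return fusion_u, fusion_v
```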
S245: and combining the color images of the regions to obtain a color fusion image.
And combining the U component color image and the V component color image to obtain a color fusion image.
In this embodiment, when the visible light image and the infrared image are color-fused, the infrared pseudo color image and the visible light color image are divided into two regions and different fusion strategies are adopted for the two regions, so that the color fusion image carries both the color information of the visible light image and the pseudo color information of the infrared image. In this segmented fusion, more visible light color information is given to the low-temperature region while the infrared pseudo color information of the high-temperature region is retained, which meets the requirement in some applications of observing the real color of scene objects while monitoring the temperature change of high-temperature objects; the visible light information and infrared information in the images are thus used more fully and flexibly, and the optimal scene information of each region is obtained.
Referring to fig. 4, fig. 4 is a flowchart illustrating a luminance image fusion method according to an embodiment of the present disclosure. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 4 is not limited in this embodiment. As shown in fig. 4, in this embodiment, luminance image fusion is performed in a multi-scale manner, and the specific method includes:
s461: and respectively carrying out multi-scale decomposition on the visible light brightness image and the infrared brightness image.
And sequentially carrying out down-sampling and low-pass filtering on the visible light brightness image for multiple times to obtain a visible light brightness background image and visible light brightness detail images of multiple scales. And sequentially carrying out down-sampling and low-pass filtering on the infrared brightness image for multiple times to obtain an infrared brightness background image and infrared brightness detail images of multiple scales. Low-pass filtering is a filtering method in which a critical (cut-off) value is set: low-frequency signals within the critical value can pass through normally, while high-frequency signals exceeding it are blocked.
Referring to fig. 5, fig. 5 is a schematic view illustrating a multi-scale decomposition process of a visible light/infrared luminance image according to an embodiment of the present disclosure. In this embodiment, the visible light luminance image and the infrared luminance image are subjected to low-pass filtering three times, respectively.
The first low-pass filter is set with a first critical value, the second low-pass filter is set with a second critical value, and the third low-pass filter is set with a third critical value. The first critical value is larger than the second critical value, and the second critical value is larger than the third critical value. The method comprises the steps of conducting first low-pass filtering on a visible light brightness image, reserving visible light brightness image information within a first critical value, conducting down-sampling and second low-pass filtering on the image, reserving visible light brightness image information within a second critical value, conducting down-sampling and third low-pass filtering on the image, reserving visible light brightness image information within a third critical value, and obtaining a visible light brightness background image. Subtracting the images before and after the first low-pass filtering to obtain a first-scale visible light brightness detail image, which can also be called a small-scale visible light brightness detail image; subtracting the images before and after the second low-pass filtering to obtain a second-scale visible light brightness detail image, which can also be called a medium-scale visible light brightness detail image; and subtracting the images before and after the third low-pass filtering to obtain a third-scale visible light brightness detail image, which can also be called a large-scale visible light brightness detail image.
And performing multi-scale decomposition on the infrared brightness image based on the method to obtain an infrared brightness background image, a first scale infrared brightness detail image, a second scale infrared brightness detail image and a third scale infrared brightness detail image.
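A compact sketch of this decomposition (Python with OpenCV; the Gaussian sigmas and the 2x down-sampling factor are illustrative assumptions, as the patent only requires progressively lower low-pass cut-offs with repeated down-sampling):
```python
import cv2
import numpy as np

def decompose_luma(luma, sigmas=(1.0, 2.0, 4.0)):
    """Decompose a brightness image into detail images (small, medium, large
    scale) plus a background image, by repeated low-pass filtering and
    down-sampling; detail = image before filtering minus image after filtering."""
    current = np.asarray(luma, dtype=np.float32)
    details = []
    for level, sigma in enumerate(sigmas):
        low = cv2.GaussianBlur(current, (0, 0), sigma)
        details.append(current - low)           # detail image at this scale
        if level < len(sigmas) - 1:
            # down-sample before the next, coarser low-pass filtering step
            current = cv2.resize(low, (low.shape[1] // 2, low.shape[0] // 2),
                                 interpolation=cv2.INTER_LINEAR)
        else:
            background = low                    # final low-pass output
    return background, details                  # details[0] = small scale
```
The same routine is applied to both the visible light brightness image and the infrared brightness image.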
Referring to fig. 6, fig. 6 is a schematic view illustrating a multi-scale fusion process of a visible light luminance image and an infrared luminance image according to an embodiment of the present disclosure.
S462: and respectively fusing the visible light brightness image and the infrared brightness image of each scale.
And fusing the visible light brightness background image and the infrared brightness background image to obtain a brightness background fused image. Specifically, the visible light brightness background image and the infrared brightness background image are integrally fused in proportion, that is, each pixel point is fused in the same proportion. Determining the fusion weight of the visible light brightness background image and the fusion weight of the infrared brightness background image, wherein the sum of the fusion weight of the visible light brightness background image and the fusion weight of the infrared brightness background image is 1, and weighting and summing the brightness value of the visible light brightness background image and the brightness value of the infrared brightness background image to obtain the brightness background fusion image.
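In code, this whole-image proportional fusion is a single weighted sum (illustrative sketch; the weights would typically come from the externally configured fusion rate discussed later):
```python
def fuse_background(vis_bg, ir_bg, ir_weight):
    """Blend the two background images with one global proportion;
    the visible weight is 1 - ir_weight so the weights sum to 1."""
    return (1.0 - ir_weight) * vis_bg + ir_weight * ir_bg
```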
And respectively fusing the visible light brightness detail images and the infrared brightness detail images of all scales to obtain a plurality of brightness detail fused images. And proportionally fusing each pixel point of the first-scale visible light brightness detail image and the first-scale infrared brightness detail image, and respectively determining the visible light brightness detail image fusion weight and the infrared brightness detail image fusion weight of each pixel point in each scale, wherein the sum of the fusion weight of the first-scale visible light brightness detail image and the fusion weight of the first-scale infrared brightness detail image is 1.
And setting a weight map based on each pixel point, wherein one coordinate position in the weight map corresponds to one pixel point. And comparing the first-scale visible light brightness detail image with the first-scale infrared brightness detail image pixel by pixel, and when the absolute value of the first-scale infrared brightness detail image pixel is greater than that of the first-scale visible light brightness detail image pixel, marking the position of the weight map coordinate corresponding to the pixel as 1, otherwise, marking the position as 0. And performing Gaussian filtering on the marked weight map. The gaussian filtering is a process of weighted average of the whole image, and the value of each pixel point is obtained by weighted average of the value of each pixel point and other pixel values in the neighborhood. And the value of each coordinate of the adjusted weight map is between 0 and 1, the coordinate value is used as the weight of the pixel point corresponding to the first-scale infrared brightness detail image, and the weight of the pixel point corresponding to the first-scale visible light brightness detail image is obtained by calculating according to the weight sum of 1. And weighting and summing the brightness value of the first-scale visible light brightness detail image and the brightness value of the first-scale infrared brightness detail image to obtain a first-scale brightness detail fusion image.
Based on the method, proportionally fusing each pixel point of the second-scale visible light brightness detail image and the second-scale infrared brightness detail image to obtain a second-scale brightness detail fused image; and proportionally fusing each pixel point of the third-scale visible light brightness detail image and the third-scale infrared brightness detail image to obtain a third-scale brightness detail fused image.
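The per-scale detail fusion described in the last two paragraphs can be sketched as follows (OpenCV/NumPy; the Gaussian sigma used to smooth the weight map is an assumed value):
```python
import cv2
import numpy as np

def fuse_detail_scale(vis_detail, ir_detail, sigma=2.0):
    """Fuse one scale of detail images: mark the weight map 1 where the
    infrared detail has the larger absolute value (0 otherwise), smooth it
    with a Gaussian filter, and use it as the per-pixel infrared weight,
    with the visible weight being 1 minus the infrared weight."""
    weight = (np.abs(ir_detail) > np.abs(vis_detail)).astype(np.float32)
    weight = cv2.GaussianBlur(weight, (0, 0), sigma)
    weight = np.clip(weight, 0.0, 1.0)  # keep each weight between 0 and 1
    return weight * ir_detail + (1.0 - weight) * vis_detail
```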
S463: and combining the brightness fusion images of all scales to obtain a brightness fusion image.
Before merging, the brightness detail images fused in all scales are enhanced, a group of amplification coefficients larger than or equal to 1 are set, the brightness detail fused images in all scales are respectively multiplied by the corresponding amplification coefficients, the same amplification coefficients can be adopted for the brightness detail fused images in different scales, and different amplification coefficients can also be adopted according to the fusion requirements.
The brightness background fusion image is superposed with the third-scale brightness detail fusion image; the result is up-sampled and superposed with the second-scale brightness detail fusion image; and the result is up-sampled again and superposed with the first-scale brightness detail fusion image to obtain the brightness fusion image. The up-sampling multiple is the same as the down-sampling multiple used in the multi-scale decomposition, so that the brightness fusion image has the same size as the infrared brightness image and the visible light brightness image.
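A sketch of this enhancement and recombination step (OpenCV/NumPy; the amplification coefficients are placeholder values, and the up-sampling restores each detail image's resolution so the final result matches the original size):
```python
import cv2
import numpy as np

def merge_luma(bg_fused, details_fused, gains=(1.5, 1.3, 1.2)):
    """Collapse the fused pyramid: start from the fused background, add the
    amplified coarsest detail image, then repeatedly up-sample and add the
    next finer amplified detail image until full resolution is reached.
    details_fused[0] is the small-scale (full-resolution) detail image."""
    result = bg_fused + gains[-1] * details_fused[-1]      # coarsest scale first
    for level in range(len(details_fused) - 2, -1, -1):
        rows, cols = details_fused[level].shape
        result = cv2.resize(result, (cols, rows), interpolation=cv2.INTER_LINEAR)
        result = result + gains[level] * details_fused[level]
    return result
```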
In the embodiment, the brightness fusion is carried out on the visible light image and the infrared image, the multi-scale decomposition is carried out on the visible light brightness image and the infrared brightness image, and the detail information of different scales in the scene can be well reserved. The integral fusion strategy with adjustable proportion is carried out on the brightness background image, so that the integral impression of the image can be changed through gray level change; and the brightness detail image adopts a multi-scale fusion strategy based on an attention mechanism, and meanwhile, details are enhanced, so that the detail information of the visible light image and the infrared image under different scenes is fully utilized, and the optimal scene information of each region is obtained.
Referring to fig. 7, fig. 7 is a schematic diagram illustrating fusion rate adjustment according to an embodiment of the present disclosure.
In the prior art, a visible light image and an infrared image are fused in a fixed proportion, so the style of the fused visible light-infrared image is fixed and cannot be adjusted through external parameters, and optimal scene information cannot be obtained in many scenes. In this embodiment, an externally configured fusion rate is introduced. The fusion rate is the ratio of infrared image information to visible light image information in the visible light-infrared fusion image, and the fusion proportions are adjusted based on the fusion rate, so that the image fusion style can be regulated and controlled in a unified manner.
The fusion rate can act on the luminance image fusion 71, when the luminance background is fused, the fusion weight of the visible light luminance background image and the infrared luminance background image is adjusted based on the fusion rate to adjust the image fusion style, and the larger the fusion rate is, the larger the fusion weight of the infrared luminance background image is, the smaller the fusion weight of the visible light luminance background image is; and when the details after the brightness detail fusion are enhanced, adjusting the amplification factor based on the fusion rate to adjust the image fusion style, wherein the amplification factor is larger when the fusion rate is larger. Further, the fusion rate may also be applied to the color image fusion 72, during the color fusion, one or more thresholds are set based on the fusion rate to form two or more threshold intervals, the infrared pseudo color image is divided into regions corresponding to the threshold intervals, the image fusion style is adjusted based on the fusion rate, when the infrared pseudo color image is divided into two regions, a first threshold is set based on the fusion rate to divide the two regions, and the larger the fusion rate is, the smaller the first threshold is. And when the color fusion is carried out in the first area, determining the fusion weight of the infrared pseudo-color image and the visible light image based on the position of the current pixel value of the infrared brightness image between 0 and a first threshold value so as to adjust the image fusion style.
Further, the fusion rate may also be applied to both the luminance image fusion 71 and the color image fusion 72 to adjust the fusion style of the images. The fusion rate is simultaneously related to the fusion proportion of the brightness background image in the brightness fusion, the amplification factor for enhancing the details and the threshold value in the color fusion. The action mechanism is as follows: when more visible light information is expected to be obtained, controlling the background fusion proportion of the visible light brightness image to be larger, controlling the detail enhancement coefficient to be smaller and controlling the threshold value to be larger, thereby obtaining a fusion image which retains most of visible light brightness and color information and has pseudo color (temperature) information of a high-temperature object; when more infrared information is expected to be obtained, the background fusion proportion of the infrared brightness image is controlled to be larger, the detail enhancement coefficient is larger, the threshold value is smaller, and therefore the fusion image which retains most of infrared image brightness and pseudo-color information and has scene detail outline is obtained.
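One possible way to couple the three parameters to a single fusion rate is sketched below (the linear mappings and the numeric ranges are illustrative assumptions; the patent only fixes the monotonic relationships described above):
```python
def params_from_fusion_rate(rate, thr_min=64.0, thr_max=192.0,
                            gain_min=1.0, gain_max=2.0):
    """Map an externally configured fusion rate in [0, 1] to the background
    fusion weights, the detail amplification coefficient, and the first
    threshold used for color-region division."""
    r = min(max(float(rate), 0.0), 1.0)
    ir_bg_weight = r                                    # larger rate -> more infrared background
    vis_bg_weight = 1.0 - r                             # the two weights sum to 1
    detail_gain = gain_min + r * (gain_max - gain_min)  # larger rate -> stronger detail enhancement
    first_thr = thr_max - r * (thr_max - thr_min)       # larger rate -> smaller first threshold
    return ir_bg_weight, vis_bg_weight, detail_gain, first_thr
```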
In the embodiment, the visible light image and the infrared image are respectively subjected to color fusion and brightness fusion, when the color fusion is carried out, the infrared pseudo-color image and the visible light color image are divided into two areas, different fusion strategies are adopted in the two areas, so that the color fusion image simultaneously has color information of the visible light image and pseudo-color information of the infrared image, and when the segmentation fusion is carried out, more color information of the visible light color image is given to the low-temperature area while the pseudo-color information of the infrared image in the high-temperature area is kept. When the brightness is fused, the visible light brightness image and the infrared brightness image are subjected to multi-scale decomposition, and the detail information of different scales in a scene can be well reserved. The integral fusion strategy with adjustable proportion is carried out on the brightness background image, so that the integral impression of the image can be changed through gray level change; and the brightness detail image adopts a multi-scale fusion strategy based on an attention mechanism, and meanwhile, details are enhanced, so that the detail information of the visible light and the infrared image under different scenes is fully utilized. Meanwhile, externally configured fusion rate is introduced, the overall fusion image style is adjusted by directly determining the fusion ratio of the low-frequency information of the visible light and infrared brightness images, adjusting the amplification ratio of details by associating detail enhancement parameters and adjusting the segmentation interval and the fusion ratio of the color fusion image by associating a threshold. When the fusion rate is small, the fused image has brightness and color information close to the visible light image, but can retain infrared pseudo color (temperature) information of an object in a high-temperature area; when the fusion rate is high, the fused image has brightness and pseudo color information close to the infrared image, and simultaneously has edge information of the visible image and color information of a low-temperature area. Based on the fusion rate adjustment, the visible light-infrared fusion image not only retains important information, but also can be adjusted between the visible light image and the infrared image.
Please refer to fig. 8, fig. 8 is a schematic flowchart illustrating another method for fusing a visible light image and an infrared image according to an embodiment of the present disclosure, and it should be noted that the present disclosure is not limited to the flowchart shown in fig. 8 if substantially the same result is obtained. As shown in fig. 8, this embodiment includes:
s820: and acquiring a visible light image and an infrared image, wherein the visible light image comprises a visible light brightness image and a visible light color image, and the infrared image comprises an infrared brightness image and an infrared pseudo color image.
S840: and respectively carrying out multi-scale decomposition on the visible light brightness image and the infrared brightness image, fusing the visible light brightness image and the infrared brightness image of each scale, and merging the brightness fused images of each scale to obtain a brightness fused image. The fusion weight of the visible light brightness image and the fusion weight of the infrared brightness image are set based on the fusion rate, and the fusion rate comprises the proportion of infrared image information and visible light image information in the visible light-infrared fusion image.
S860: and fusing the visible light color image and the infrared pseudo color image to obtain a color fused image.
Different fusion modes can be adopted for the color/pseudo color images in different scenes, including but not limited to segmented fusion, multi-scale fusion and the like.
The sequence of step S840 and step S860 is not limited, and in another embodiment, step S860 may precede step S840, or simultaneous parallel luminance fusion and color fusion may be performed.
S880: and combining the color fusion image and the brightness fusion image to obtain a visible light-infrared fusion image.
In the embodiment, the visible light image and the infrared image are subjected to brightness fusion and color fusion respectively, when the brightness is fused, the visible light brightness image and the infrared brightness image are subjected to multi-scale decomposition, and the visible light brightness image and the infrared brightness image of each scale are subjected to integral fusion with adjustable proportion based on the externally set fusion rate, so that the detail information of different scales in a scene can be well reserved, and the detail information of the visible light and the infrared image under different scenes can be fully utilized.
Please refer to fig. 9, fig. 9 is a flowchart illustrating a method for fusing a visible light image and an infrared image according to an embodiment of the present disclosure, and it should be noted that the present disclosure is not limited to the flowchart illustrated in fig. 9 if substantially the same result is obtained. As shown in fig. 9, this embodiment includes:
S920: Acquiring a visible light image and an infrared image, wherein the visible light image comprises a visible light brightness image and a visible light color image, and the infrared image comprises an infrared brightness image and an infrared pseudo color image.
S940: Performing multi-scale decomposition on the visible light brightness image and the infrared brightness image respectively, fusing the visible light brightness image and the infrared brightness image at each scale, and merging the brightness fusion images of all scales to obtain a brightness fusion image. The fusion weight of the visible light brightness image and the fusion weight of the infrared brightness image are set based on a fusion rate, and the fusion rate is the ratio of infrared image information to visible light image information in the visible light-infrared fusion image.
S960: Performing region division on the infrared pseudo color image/visible light color image, fusing the infrared pseudo color image and the visible light color image in each region respectively, and merging the color images fused in the regions to obtain a color fusion image.
The order of step S940 and step S960 is not limited: in another embodiment, step S960 may precede step S940, or the brightness fusion and the color fusion may be performed in parallel.
S980: Merging the color fusion image and the brightness fusion image to obtain a visible light-infrared fusion image.
The visible light image and infrared image fusion method comprises brightness fusion and color fusion. It may adopt the fusion-rate-adjusted multi-scale brightness fusion method together with a prior-art color fusion method; it may adopt a prior-art brightness fusion method together with the segmented color fusion method for the visible light color image and the infrared pseudo color image; or it may adopt the fusion-rate-adjusted multi-scale brightness fusion method and the segmented color fusion method at the same time. Any implementation that adopts either the fusion-rate-adjusted multi-scale brightness fusion method or the segmented fusion method for the visible light color image and the infrared pseudo color image falls within the protection scope of this invention.
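For illustration, a minimal sketch of the segmented color fusion referred to above is given below. It assumes that the regions are split by comparing the infrared brightness against a first threshold, that in the low-temperature region the infrared weight is the ratio of the infrared brightness to that threshold with the two weights summing to 1, and that the color/pseudo-color planes are stored as H x W x C arrays; these choices follow the description, but the function itself is only an example, not the protected implementation.

```python
import numpy as np

def segmented_color_fusion(vis_color, ir_pseudo_color, ir_brightness, first_threshold):
    """Segmented fusion of the visible color image and the infrared pseudo color image."""
    vis_color = vis_color.astype(np.float64)
    ir_pseudo_color = ir_pseudo_color.astype(np.float64)
    ir_y = ir_brightness.astype(np.float64)
    threshold = float(max(first_threshold, 1))       # avoid division by zero

    # First region (low temperature): weighted blend, IR weight = IR brightness / threshold.
    w_ir = np.clip(ir_y / threshold, 0.0, 1.0)[..., None]
    blended = w_ir * ir_pseudo_color + (1.0 - w_ir) * vis_color

    # Second region (high temperature): keep the infrared pseudo color unchanged.
    high_temp = (ir_y >= threshold)[..., None]
    return np.where(high_temp, ir_pseudo_color, blended)
```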
Referring to fig. 10, fig. 10 is a schematic structural diagram of a visible light image and infrared image fusion device according to an embodiment of the present application. In this embodiment, the visible light image and infrared image fusion device includes an acquisition module 101, a color image fusion module 102, a brightness image fusion module 103, and a merging module 104.
The acquisition module 101 is configured to acquire a visible light image and an infrared image, where the visible light image includes a visible light brightness image and a visible light color image, and the infrared image includes an infrared brightness image and an infrared pseudo color image; the color image fusion module 102 is configured to fuse the visible light color image and the infrared pseudo color image to obtain a color fusion image; the brightness image fusion module 103 is configured to fuse the visible light brightness image and the infrared brightness image to obtain a brightness fusion image; and the merging module 104 is configured to merge the color fusion image and the brightness fusion image to obtain a visible light-infrared fusion image. The visible light image and infrared image fusion device thus performs color fusion and brightness fusion on the visible light image and the infrared image separately.
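The cooperation of the four modules can be pictured with the following sketch, which simply chains the helper functions from the earlier examples. The YUV-style split into a brightness plane and chroma/pseudo-color planes, and the reuse of those hypothetical helpers, are assumptions of the example rather than a description of the actual device.

```python
import numpy as np

# Assumes fusion_controls, multiscale_fuse_brightness and segmented_color_fusion
# from the sketches above are available in the same module.
def fuse_visible_infrared(vis_y, vis_uv, ir_y, ir_pseudo_uv, fusion_rate=0.5):
    """End-to-end combination of the brightness branch and the color branch."""
    _, detail_gain, first_threshold = fusion_controls(fusion_rate)

    # Brightness image fusion module: fusion-rate-adjusted multi-scale fusion.
    fused_y = multiscale_fuse_brightness(vis_y, ir_y, fusion_rate, gain=detail_gain)

    # Color image fusion module: segmented fusion of chroma and pseudo color.
    fused_uv = segmented_color_fusion(vis_uv, ir_pseudo_uv, ir_y, first_threshold)

    # Merging module: stack the brightness-fused and color-fused planes.
    return np.dstack([fused_y, fused_uv])
```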
In one embodiment, during color fusion, the infrared pseudo color image/visible light color image is divided into different regions according to different requirements, and color fusion is performed separately in each region, so that the visible light information and the infrared information are fully utilized and the optimal scene information is obtained for each region.
Specifically, the color image fusion module 102 is configured to perform region division on the infrared pseudo color image/visible light color image, fuse the infrared pseudo color image and the visible light color image in each region respectively, and merge the color images fused in the regions to obtain the color fusion image.
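One simple way to realize the region division under different requirements is sketched below: the threshold interval into which a pixel's infrared brightness falls selects its region, and each region can then be fused with its own strategy. The use of np.digitize, the label map, and the strategy dictionary are illustrative assumptions only.

```python
import numpy as np

def divide_regions(ir_brightness, thresholds):
    """Label each pixel by the threshold interval its infrared brightness falls into."""
    edges = np.asarray(sorted(thresholds), dtype=np.float64)
    return np.digitize(ir_brightness.astype(np.float64), edges)

def fuse_colors_by_region(vis_color, ir_pseudo_color, labels, strategies):
    """Apply one fusion strategy per region label and merge the per-region results.

    strategies: dict mapping a region label to a function
                f(vis_color, ir_pseudo_color) -> fused color image.
    """
    fused = np.zeros_like(vis_color, dtype=np.float64)
    for label, strategy in strategies.items():
        mask = (labels == label)[..., None]
        fused = np.where(mask, strategy(vis_color, ir_pseudo_color), fused)
    return fused
```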
In another embodiment, during brightness fusion, the visible light brightness image and the infrared brightness image are decomposed at multiple scales, so that detail information of different scales in the scene is well preserved. An overall fusion strategy with an adjustable ratio is applied to the brightness background image, while the brightness detail images adopt a multi-scale fusion strategy based on an attention mechanism with simultaneous detail enhancement, so that the detail information of the visible light and infrared images in different scenes is fully utilized and the optimal scene information is obtained for each region.
Further, the brightness image fusion module 103 is configured to perform multi-scale decomposition on the visible light brightness image and the infrared brightness image respectively, fuse the visible light brightness image and the infrared brightness image at each scale, and merge the brightness fusion images of all scales to obtain the brightness fusion image. The fusion weight of the visible light brightness image and the fusion weight of the infrared brightness image are set based on the fusion rate, and the fusion rate is the ratio of infrared image information to visible light image information in the visible light-infrared fusion image.
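As one possible reading of the attention-mechanism-based weight map mentioned above, the sketch below measures local detail activity with a mean filter and gives the larger weight to the source that is locally more active, with the two weights summing to 1. The activity measure, window size, and amplification value are assumptions for the example and merely refine the simpler per-pixel rule used in the earlier brightness sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def attention_weight_map(det_vis, det_ir, window=7):
    """Per-pixel fusion weights for one pair of detail layers (illustrative only)."""
    act_vis = uniform_filter(np.abs(det_vis), size=window)   # local activity, visible
    act_ir = uniform_filter(np.abs(det_ir), size=window)     # local activity, infrared
    w_ir = act_ir / (act_ir + act_vis + 1e-6)
    return w_ir, 1.0 - w_ir

def fuse_detail_scale(det_vis, det_ir, amplification=1.2, window=7):
    """Fuse one detail scale and enhance it; amplification is assumed >= 1."""
    w_ir, w_vis = attention_weight_map(det_vis, det_ir, window)
    return amplification * (w_ir * det_ir + w_vis * det_vis)
```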
Referring to fig. 11, fig. 11 is a schematic structural diagram of a visible light image and infrared image fusion device in an embodiment of the present application. In this embodiment, the visible light image and infrared image fusion device 10 includes a processor 11.
The processor 11 may also be referred to as a CPU (Central Processing Unit). The processor 11 may be an integrated circuit chip having signal processing capabilities. The processor 11 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor 11 may be any conventional processor or the like.
The visible light image and infrared image fusion apparatus 10 may further include a memory (not shown in the figure) for storing instructions and data required for the processor 11 to operate.
The processor 11 is configured to execute instructions to implement the method provided by any embodiment of the visible light image and infrared image fusion method of the present application, or by any non-conflicting combination of these embodiments.
Referring to fig. 12, fig. 12 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present disclosure. The computer-readable storage medium 20 of the embodiments of the present application stores instructions/program data 21 which, when executed, implement the method provided by any embodiment of the visible light image and infrared image fusion method of the present application, or by any non-conflicting combination of these embodiments. The instructions/program data 21 may form a program file stored in the storage medium 20 in the form of a software product, so that a computer device (which may be a personal computer, a server, or a network device) or a processor executes all or part of the steps of the methods of the embodiments of the present application. The storage medium 20 includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, as well as terminal devices such as a computer, a server, a mobile phone, or a tablet.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above description is only an embodiment of the present invention and is not intended to limit the scope of the present invention. Any equivalent structural or equivalent process modification made on the basis of the present specification and drawings, or any direct or indirect application in other related technical fields, is likewise included within the scope of protection of the present invention.

Claims (19)

1. A method for fusing a visible light image and an infrared image is characterized by comprising the following steps:
acquiring a visible light image and an infrared image, wherein the visible light image comprises a visible light brightness image and a visible light color image, and the infrared image comprises an infrared brightness image and an infrared pseudo color image;
carrying out region division on the infrared pseudo color image/visible light color image, respectively fusing the infrared pseudo color image and the visible light color image of each region, and merging the color images fused in each region to obtain a color fused image; fusing the visible light brightness image and the infrared brightness image to obtain a brightness fused image;
and combining the color fusion image and the brightness fusion image to obtain a visible light-infrared fusion image.
2. The visible light image and infrared image fusion method according to claim 1,
the region division of the infrared pseudo color image comprises the following steps:
acquiring pixel values of all pixel points of the infrared brightness image corresponding to the infrared pseudo-color image;
and determining a threshold interval where the pixel value is located, and dividing the infrared pseudo color image/visible color image into areas corresponding to the threshold interval.
3. The visible light image and infrared image fusion method according to claim 2,
the determining the threshold interval in which the pixel value is located includes:
setting the threshold interval based on a fusion rate, wherein the fusion rate comprises the ratio of infrared image information to visible light image information in the visible light-infrared fusion image.
4. The visible light image and infrared image fusion method according to claim 2,
the region division of the infrared pseudo color image comprises the following steps: dividing the infrared pseudo color image into a first area and a second area, wherein the pixel value of each pixel point of the infrared brightness image corresponding to the infrared pseudo color image in the first area is smaller than a first threshold value, and the pixel value of each pixel point of the infrared brightness image corresponding to the infrared pseudo color image in the second area is larger than or equal to the first threshold value;
the fusing the infrared pseudo color image and the visible color image of each area respectively comprises the following steps: fusing the visible light color image and the infrared pseudo color image of the first area by utilizing a first fusion strategy;
the first fusion strategy comprises: and respectively determining the visible color image fusion weight and the infrared pseudo color image fusion weight of each pixel point, and performing weighted summation on the visible color image color value and the infrared pseudo color image color value of each pixel point in the first area to obtain a first color fusion image.
5. The visible light image and infrared image fusion method according to claim 4,
the determining the visible light color image fusion weight and the infrared pseudo color image fusion weight of each pixel point respectively comprises the following steps:
the infrared pseudo-color image fusion weight is the ratio of the pixel value of the infrared brightness image to the first threshold, and the sum of the infrared pseudo-color image fusion weight and the visible light color image fusion weight is 1.
6. The method of claim 4,
the determining the visible light color image fusion weight and the infrared pseudo color image fusion weight of each pixel point respectively comprises the following steps:
the smaller the pixel value of the infrared brightness image corresponding to the infrared pseudo-color image is, the smaller the fusion weight of the infrared pseudo-color image is; the sum of the visible light color image fusion weight and the infrared pseudo color image fusion weight is 1.
7. The visible light image and infrared image fusion method according to claim 2,
the region division of the infrared pseudo color image comprises the following steps: dividing the infrared pseudo color image into a first area and a second area, wherein the pixel value of each pixel point of the infrared brightness image corresponding to the infrared pseudo color image in the first area is smaller than a first threshold value, and the pixel value of each pixel point of the infrared brightness image corresponding to the infrared pseudo color image in the second area is larger than or equal to the first threshold value;
the fusing the infrared pseudo color image and the visible color image of each area respectively comprises the following steps: fusing the visible light color image and the infrared pseudo color image of the second area by using a second fusion strategy;
the second fusion strategy comprises: and taking the color value of the infrared pseudo-color image of each pixel point in the second area as the color value of the color image after the pixel point is fused.
8. The visible light image and infrared image fusion method according to any one of claims 4 to 7,
setting the first threshold value based on a fusion rate, wherein the fusion rate comprises the ratio of infrared image information to visible light image information in the visible light-infrared fusion image, and the larger the fusion rate is, the smaller the first threshold value is.
9. The visible light image and infrared image fusion method according to claim 1,
the fusing the visible light brightness image and the infrared brightness image to obtain a brightness fused image comprises the following steps:
respectively carrying out multi-scale decomposition on the visible light brightness image and the infrared brightness image;
respectively fusing the visible light brightness image and the infrared brightness image of each scale;
and combining the brightness fusion images of all scales to obtain the brightness fusion image.
10. The visible light image and infrared image fusion method according to claim 9,
the multi-scale decomposition of the visible light brightness image and the infrared brightness image is respectively carried out, and the fusion of the visible light brightness image and the infrared brightness image of each scale comprises the following steps:
performing multi-scale decomposition on the visible light brightness image and the infrared brightness image respectively, and performing high-frequency extraction based on low-pass filtering on each scale to obtain a visible light brightness background image, visible light brightness detail images of multiple scales, an infrared brightness background image and infrared brightness detail images of multiple scales;
fusing the visible light brightness background image and the infrared brightness background image to obtain a brightness background fused image; respectively fusing the visible light brightness detail images and the infrared brightness detail images of all scales to obtain a plurality of brightness detail fused images;
and merging the brightness background fusion image and the brightness detail fusion image to obtain the brightness fusion image.
11. The visible light image and infrared image fusion method according to claim 10,
the fusing the visible light brightness background image and the infrared brightness background image to obtain a brightness background fused image comprises the following steps:
determining the fusion weight of the visible light brightness background image and the fusion weight of the infrared brightness background image;
weighting and summing the brightness value of the visible light brightness background image and the brightness value of the infrared brightness background image to obtain a brightness background fusion image;
the fusion weight of the visible light brightness background image and the fusion weight of the infrared brightness background image are set based on a fusion rate, the fusion rate comprises the proportion of infrared image information and visible light image information in the visible light-infrared fusion image, and the sum of the fusion weight of the visible light brightness background image and the fusion weight of the infrared brightness background image is 1.
12. The visible light image and infrared image fusion method according to claim 11,
the setting of the fusion weight of the visible light brightness background image and the fusion weight of the infrared brightness background image based on the fusion rate includes:
the larger the fusion rate is, the larger the fusion weight of the infrared brightness background image is.
13. The visible light image and infrared image fusion method according to claim 10,
the fusing the visible light brightness detail images and the infrared brightness detail images of all scales to obtain a plurality of brightness detail fused images comprises the following steps:
respectively determining the visible light brightness detail image fusion weight and the infrared brightness detail image fusion weight of each pixel point in each scale;
respectively carrying out weighted summation on the brightness value of the visible light brightness detail image and the brightness value of the infrared brightness detail image of each pixel point in each scale to obtain a brightness detail fusion image of each scale;
the fusion weight of the visible light brightness detail image and the fusion weight of the infrared brightness detail image of each pixel point in each scale are set based on a weight map, and the sum of the fusion weight of the visible light brightness detail image and the fusion weight of the infrared brightness detail image is 1.
14. The visible light image and infrared image fusion method according to claim 10,
the merging the brightness background fusion image and the brightness detail fusion image to obtain the brightness fusion image comprises:
and respectively multiplying the brightness detail fusion images of all scales by corresponding amplification coefficients to enhance the brightness detail fusion images, wherein each amplification coefficient is greater than or equal to 1.
15. The visible light image and infrared image fusion method according to claim 14,
and setting the amplification coefficient based on a fusion rate, wherein the fusion rate comprises the proportion of infrared image information and visible light image information in the visible light-infrared fusion image.
16. The visible light image and infrared image fusion method according to claim 15,
the setting the magnification factor based on the fusion rate includes:
the greater the fusion rate, the greater the magnification factor.
17. A method for fusing a visible light image and an infrared image is characterized by comprising the following steps:
acquiring a visible light image and an infrared image, wherein the visible light image comprises a visible light brightness image and a visible light color image, and the infrared image comprises an infrared brightness image and an infrared pseudo color image;
respectively carrying out multi-scale decomposition on the visible light brightness image and the infrared brightness image, fusing the visible light brightness image and the infrared brightness image of each scale, and merging the brightness fused images of each scale to obtain a brightness fused image, wherein the fusion weight of the visible light brightness image and the fusion weight of the infrared brightness image are set based on the fusion rate, and the fusion rate comprises the ratio of infrared image information and visible light image information in the visible light-infrared fused image; fusing the visible light color image and the infrared pseudo color image to obtain a color fused image;
and combining the color fusion image and the brightness fusion image to obtain a visible light-infrared fusion image.
18. A visible light image and infrared image fusion device, characterized in that the visible light and infrared image fusion device comprises a processor for executing instructions to implement the visible light image and infrared image fusion method according to any one of claims 1 to 17.
19. A computer-readable storage medium for storing instructions/program data executable to implement the visible light image and infrared image fusion method according to any one of claims 1-17.
CN202110003609.4A 2021-01-04 Visible light image and infrared image fusion method, device and readable storage medium Active CN112767291B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110003609.4A CN112767291B (en) 2021-01-04 Visible light image and infrared image fusion method, device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110003609.4A CN112767291B (en) 2021-01-04 Visible light image and infrared image fusion method, device and readable storage medium

Publications (2)

Publication Number Publication Date
CN112767291A true CN112767291A (en) 2021-05-07
CN112767291B CN112767291B (en) 2024-05-28

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366353A (en) * 2013-05-08 2013-10-23 北京大学深圳研究生院 Infrared image and visible-light image fusion method based on saliency region segmentation
JP2017011633A (en) * 2015-06-26 2017-01-12 Canon Inc. Imaging device
US20180227509A1 (en) * 2015-08-05 2018-08-09 Wuhan Guide Infrared Co., Ltd. Visible light image and infrared image fusion processing system and fusion method
CN106600572A (en) * 2016-12-12 2017-04-26 长春理工大学 Adaptive low-illumination visible image and infrared image fusion method
CN106780392A (en) * 2016-12-27 2017-05-31 浙江大华技术股份有限公司 A kind of image interfusion method and device
CN110136183A (en) * 2018-02-09 2019-08-16 华为技术有限公司 A kind of method and relevant device of image procossing
US20200357104A1 (en) * 2018-02-09 2020-11-12 Huawei Technologies Co., Ltd. Image processing method and related device
CN109255774A (en) * 2018-09-28 2019-01-22 中国科学院长春光学精密机械与物理研究所 A kind of image interfusion method, device and its equipment
CN110796628A (en) * 2019-10-17 2020-02-14 浙江大华技术股份有限公司 Image fusion method and device, shooting device and storage medium
CN111539902A (en) * 2020-04-16 2020-08-14 烟台艾睿光电科技有限公司 Image processing method, system, equipment and computer readable storage medium
CN111489319A (en) * 2020-04-17 2020-08-04 电子科技大学 Infrared image enhancement method based on multi-scale bilateral filtering and visual saliency

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XIAOGONG LIN; RONGHAO YANG: "Image Fusion Processing Method Based on Infrared and Visible Light", 2019 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION (ICMA), 29 August 2019 (2019-08-29) *
KONG WEIWEI; LEI YINGJIE; LEI YANG; NI XUELIANG: "Fusion method for grayscale visible light and infrared images based on NSCT and IHS transform domains", SYSTEMS ENGINEERING AND ELECTRONICS, vol. 32, no. 07, 31 July 2010 (2010-07-31) *
YIN YUNFEI; LUO XIAOQING; ZHANG ZHANCHENG: "Infrared and visible image fusion method based on gradient transfer and saliency preservation", COMMAND INFORMATION SYSTEM AND TECHNOLOGY, vol. 11, no. 2, 30 April 2020 (2020-04-30) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792592A (en) * 2021-08-09 2021-12-14 深圳光启空间技术有限公司 Image acquisition processing method and image acquisition processing device
CN113792592B (en) * 2021-08-09 2024-05-07 深圳光启空间技术有限公司 Image acquisition processing method and image acquisition processing device
CN113784026A (en) * 2021-08-30 2021-12-10 鹏城实验室 Method, apparatus, device and storage medium for calculating position information based on image
CN113784026B (en) * 2021-08-30 2023-04-18 鹏城实验室 Method, apparatus, device and storage medium for calculating position information based on image
CN114255302A (en) * 2022-03-01 2022-03-29 北京瞭望神州科技有限公司 Wisdom country soil data processing all-in-one
CN114255302B (en) * 2022-03-01 2022-05-13 北京瞭望神州科技有限公司 Wisdom country soil data processing all-in-one
CN115082434A (en) * 2022-07-21 2022-09-20 浙江华是科技股份有限公司 Multi-source feature-based magnetic core defect detection model training method and system
CN115082434B (en) * 2022-07-21 2022-12-09 浙江华是科技股份有限公司 Multi-source feature-based magnetic core defect detection model training method and system
CN115239610A (en) * 2022-07-28 2022-10-25 爱芯元智半导体(上海)有限公司 Image fusion method, device, system and storage medium
CN115239610B (en) * 2022-07-28 2024-01-26 爱芯元智半导体(上海)有限公司 Image fusion method, device, system and storage medium

Similar Documents

Publication Publication Date Title
CN112767289B (en) Image fusion method, device, medium and electronic equipment
CN107370958B (en) Image blurs processing method, device and camera terminal
WO2018082185A1 (en) Image processing method and device
CN1985274A (en) Methods, system and program modules for restoration of color components in an image model
US8913153B2 (en) Imaging systems and methods for generating motion-compensated high-dynamic-range images
WO2022042049A1 (en) Image fusion method, and training method and apparatus for image fusion model
CN108055452A (en) Image processing method, device and equipment
CN110660088A (en) Image processing method and device
US8351776B2 (en) Auto-focus technique in an image capture device
US9007488B2 (en) Systems and methods for generating interpolated high-dynamic-range images
JP2022071177A (en) Multiplexed high dynamic range image
CN107395991B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN110660090B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN111986129A (en) HDR image generation method and device based on multi-shot image fusion and storage medium
CN109493283A (en) A kind of method that high dynamic range images ghost is eliminated
CN109685853B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110569927A (en) Method, terminal and computer equipment for scanning and extracting panoramic image of mobile terminal
CN110796041B (en) Principal identification method and apparatus, electronic device, and computer-readable storage medium
CN112651911B (en) High dynamic range imaging generation method based on polarized image
Huo et al. Fast fusion-based dehazing with histogram modification and improved atmospheric illumination prior
CN110175967B (en) Image defogging processing method, system, computer device and storage medium
US20110141321A1 (en) Method and apparatus for transforming a lens-distorted image to a perspective image in bayer space
CN108122218B (en) Image fusion method and device based on color space
CN108702494B (en) Image processing apparatus, imaging apparatus, image processing method, and storage medium
CN105957020A (en) Image generator and image generation method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20230828

Address after: Room 201, Building A, Integrated Circuit Design Industrial Park, No. 858, Jianshe 2nd Road, Economic and Technological Development Zone, Xiaoshan District, Hangzhou City, Zhejiang Province, 311225

Applicant after: Zhejiang Huagan Technology Co.,Ltd.

Address before: No.1187 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: ZHEJIANG DAHUA TECHNOLOGY Co.,Ltd.

GR01 Patent grant