CN112767291B - Visible light image and infrared image fusion method, device and readable storage medium


Info

Publication number
CN112767291B
Authority
CN
China
Prior art keywords
image
infrared
brightness
fusion
visible light
Prior art date
Legal status
Active
Application number
CN202110003609.4A
Other languages
Chinese (zh)
Other versions
CN112767291A (en)
Inventor
陈虹宇
俞克强
王松
张东
刘晓沐
杨志强
Current Assignee
Zhejiang Huagan Technology Co ltd
Original Assignee
Zhejiang Huagan Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Huagan Technology Co ltd
Priority to CN202110003609.4A
Publication of CN112767291A
Application granted
Publication of CN112767291B
Legal status: Active


Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/11: Region-based segmentation
    • G06T 7/33: Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 7/90: Determination of colour characteristics
    • G06T 2207/10048: Image acquisition modality: infrared image
    • G06T 2207/20221: Image fusion; image merging


Abstract

The invention discloses a method for fusing a visible light image and an infrared image, comprising the following steps: obtaining a visible light image and an infrared image, wherein the visible light image comprises a visible light brightness image and a visible light color image, and the infrared image comprises an infrared brightness image and an infrared pseudo-color image; dividing the infrared pseudo-color image/visible light color image into regions, fusing the infrared pseudo-color image and the visible light color image of each region, and merging the fused color images of the regions to obtain a color fusion image; fusing the visible light brightness image and the infrared brightness image to obtain a brightness fusion image; and combining the color fusion image and the brightness fusion image to obtain a visible light-infrared fusion image. In this manner, visible light image information and infrared image information can be fully utilized.

Description

Visible light image and infrared image fusion method, device and readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and apparatus for fusing a visible light image and an infrared image, and a readable storage medium.
Background
Infrared thermal imaging technology is widely used in fields such as industrial detection, fire protection and the military. In some applications, binocular techniques, in which visible light imaging and infrared imaging are integrated into the same monitoring device, are often used to obtain more scene information. On visible light-infrared thermal imaging binocular devices, image fusion technology is commonly adopted so that the infrared image better reflects the scene information.
Image fusion technology can synthesize images formed by two different types of imaging sensors, or by similar sensors under different focal lengths, exposures and other conditions, into a single image with richer information that is better suited to subsequent processing and research. These advantages have led to wide adoption of image fusion in remote sensing, camera and mobile phone imaging, monitoring, investigation and other fields; infrared and visible light image fusion in particular plays a very important role in the military field. In recent years, however, most fusion methods do not consider changes in the usage scene and do not make full use of the available information, so the fused image cannot obtain the optimal scene information of each region according to the local characteristics of the image.
Disclosure of Invention
The invention mainly solves the technical problem of providing a method, equipment and a readable storage medium for fusing a visible light image and an infrared image, which can fully utilize visible light image information and infrared image information.
In order to solve the above technical problems, one technical scheme adopted by the invention is a method for fusing a visible light image and an infrared image, comprising the following steps: obtaining a visible light image and an infrared image, wherein the visible light image comprises a visible light brightness image and a visible light color image, and the infrared image comprises an infrared brightness image and an infrared pseudo-color image; dividing the infrared pseudo-color image/visible light color image into regions, fusing the infrared pseudo-color image and the visible light color image of each region, and merging the fused color images of the regions to obtain a color fusion image; fusing the visible light brightness image and the infrared brightness image to obtain a brightness fusion image; and combining the color fusion image and the brightness fusion image to obtain a visible light-infrared fusion image.
Wherein dividing the infrared pseudo-color image into regions comprises: acquiring the pixel value of each pixel point of the infrared brightness image corresponding to the infrared pseudo-color image; determining the threshold interval in which the pixel value lies; and assigning the corresponding area of the infrared pseudo-color image/visible light color image to the region corresponding to that threshold interval.
Wherein determining the threshold interval in which the pixel value lies includes: setting the threshold interval based on a fusion rate, wherein the fusion rate comprises the ratio of infrared image information to visible light image information in the visible light-infrared fusion image.
Wherein dividing the infrared pseudo-color image into regions comprises: dividing the infrared pseudo-color image into a first region and a second region, wherein the pixel values of the corresponding infrared brightness image are smaller than a first threshold in the first region and greater than or equal to the first threshold in the second region. Fusing the infrared pseudo-color image and the visible light color image of each region comprises: fusing the visible light color image and the infrared pseudo-color image of the first region using a first fusion strategy. The first fusion strategy comprises: determining, for each pixel point, a visible light color image fusion weight and an infrared pseudo-color image fusion weight, and performing a weighted summation of the visible light color image color value and the infrared pseudo-color image color value of each pixel point in the first region to obtain a first color fusion image.
Wherein determining the visible light color image fusion weight and the infrared pseudo-color image fusion weight of each pixel point comprises: taking the ratio of the infrared brightness image pixel value to the first threshold as the infrared pseudo-color image fusion weight, with the sum of the infrared pseudo-color image fusion weight and the visible light color image fusion weight being 1.
Wherein determining the visible light color image fusion weight and the infrared pseudo-color image fusion weight of each pixel point comprises: the smaller the pixel value of the infrared brightness image corresponding to the infrared pseudo-color image, the smaller the infrared pseudo-color image fusion weight; and the sum of the visible light color image fusion weight and the infrared pseudo-color image fusion weight is 1.
Wherein dividing the infrared pseudo-color image into regions comprises: dividing the infrared pseudo-color image into a first region and a second region, wherein the pixel values of the corresponding infrared brightness image are smaller than a first threshold in the first region and greater than or equal to the first threshold in the second region. Fusing the infrared pseudo-color image and the visible light color image of each region comprises: fusing the visible light color image and the infrared pseudo-color image of the second region using a second fusion strategy. The second fusion strategy comprises: taking the color value of the infrared pseudo-color image of each pixel point in the second region as the color value of that pixel point in the fused color image.
The first threshold is set based on the fusion rate, wherein the fusion rate comprises the ratio of infrared image information to visible light image information in the visible light-infrared fusion image, and the larger the fusion rate, the smaller the first threshold.
Wherein, fusing the visible brightness image and the infrared brightness image to obtain a brightness fusion image comprises: respectively carrying out multi-scale decomposition on the visible brightness image and the infrared brightness image; respectively fusing the visible brightness image and the infrared brightness image of each scale; and merging the brightness fusion images of all scales to obtain the brightness fusion image.
The method for fusing the visible light brightness image and the infrared brightness image comprises the following steps of: respectively carrying out multi-scale decomposition on the visible light brightness image and the infrared brightness image, and carrying out high-frequency extraction based on low-pass filtering on each scale to obtain a visible light brightness background image, a visible light brightness detail image of a plurality of scales, an infrared brightness background image and an infrared brightness detail image of a plurality of scales; fusing the visible brightness background image and the infrared brightness background image to obtain a brightness background fused image; respectively fusing the visible brightness detail images and the infrared brightness detail images of all scales to obtain a plurality of brightness detail fusion images; and combining the brightness background fusion image and the brightness detail fusion image to obtain a brightness fusion image.
Wherein fusing the visible light brightness background image and the infrared brightness background image to obtain the brightness background fusion image comprises: determining the fusion weight of the visible light brightness background image and the fusion weight of the infrared brightness background image; and performing a weighted summation of the brightness value of the visible light brightness background image and the brightness value of the infrared brightness background image to obtain the brightness background fusion image. The fusion weight of the visible light brightness background image and the fusion weight of the infrared brightness background image are set based on the fusion rate, wherein the fusion rate comprises the ratio of infrared image information to visible light image information in the visible light-infrared fusion image, and the sum of the two fusion weights is 1.
The setting of the fusion weight of the visible brightness background image and the fusion weight of the infrared brightness background image based on the fusion rate comprises the following steps: the larger the fusion rate is, the larger the fusion weight of the infrared brightness background image is.
The step of fusing the visible light brightness detail images and the infrared brightness detail images of each scale to obtain a plurality of brightness detail fusion images comprises: determining, for each pixel point at each scale, a visible light brightness detail image fusion weight and an infrared brightness detail image fusion weight; and performing, for each pixel point at each scale, a weighted summation of the brightness value of the visible light brightness detail image and the brightness value of the infrared brightness detail image to obtain the brightness detail fusion image of each scale. The visible light brightness detail image fusion weight and the infrared brightness detail image fusion weight of each pixel point at each scale are set based on a weight map, and the sum of the two weights is 1.
Setting the visible light brightness detail image fusion weight and the infrared brightness detail image fusion weight of each pixel point at each scale based on the weight map comprises: comparing the visible light brightness detail image and the infrared brightness detail image of the same scale pixel by pixel, and marking the corresponding coordinate in the weight map as 1 if the absolute pixel value of the infrared brightness detail image is larger than that of the visible light brightness detail image, and as 0 otherwise; performing Gaussian filtering on the marked weight map; and taking the value of each pixel point in the filtered weight map as the fusion weight of the infrared brightness detail image.
Wherein merging the brightness background fusion image and the brightness detail fusion images to obtain the brightness fusion image comprises: multiplying the brightness detail fusion image of each scale by a corresponding amplification factor to enhance it, wherein each amplification factor is greater than or equal to 1.
The amplification factor is set based on the fusion rate, wherein the fusion rate comprises the ratio of infrared image information to visible light image information in the visible light-infrared fusion image.
Wherein setting the amplification factor based on the fusion rate includes: the larger the fusion rate, the larger the amplification factor.
In order to solve the above technical problems, the invention adopts another technical scheme: a method for fusing a visible light image and an infrared image comprises the following steps: obtaining a visible light image and an infrared image, wherein the visible light image comprises a visible light brightness image and a visible light color image, and the infrared image comprises an infrared brightness image and an infrared pseudo-color image; performing multi-scale decomposition on the visible light brightness image and the infrared brightness image respectively, fusing the visible light brightness image and the infrared brightness image of each scale, and merging the brightness fusion images of all scales to obtain a brightness fusion image, wherein the fusion weight of the visible light brightness image and the fusion weight of the infrared brightness image are set based on a fusion rate, and the fusion rate comprises the ratio of infrared image information to visible light image information in the visible light-infrared fusion image; fusing the visible light color image and the infrared pseudo-color image to obtain a color fusion image; and combining the color fusion image and the brightness fusion image to obtain a visible light-infrared fusion image.
In order to solve the technical problems, the invention adopts another technical scheme that: the visible light image and infrared image fusion device comprises a processor, wherein the processor is used for executing instructions to realize the visible light image and infrared image fusion method.
In order to solve the technical problems, the invention adopts another technical scheme that: a computer readable storage medium storing instructions/program data executable to implement the above-described method of fusion of a visible light image with an infrared image.
The beneficial effects of the invention are as follows: different from the prior art, the invention performs color fusion and brightness fusion on the visible light image and the infrared image respectively. During color fusion, the infrared pseudo-color image/visible light color image is divided into a plurality of regions, and different fusion strategies can be adopted for the different regions, so that the visible light information and the infrared information in the images are used more fully and flexibly and the optimal scene information of each region is obtained.
Drawings
FIG. 1 is a flow chart of a method for fusing a visible light image and an infrared image according to an embodiment of the present application;
FIG. 2 is a flow chart of a color image fusion method according to an embodiment of the application;
FIG. 3 is a schematic block diagram illustrating a color image fusion method according to an embodiment of the present application;
FIG. 4 is a flowchart of a luminance image fusion method according to an embodiment of the application;
FIG. 5 is a schematic diagram of a multi-scale decomposition flow of a visible/infrared luminance image in an embodiment of the present application;
FIG. 6 is a schematic diagram of a multi-scale fusion process of a visible luminance image and an infrared luminance image in an embodiment of the present application;
FIG. 7 is a schematic diagram of fusion rate modulation in an embodiment of the application;
FIG. 8 is a flow chart of another method for fusing a visible light image and an infrared image according to an embodiment of the present application;
FIG. 9 is a flow chart of a method for fusing a visible light image and an infrared image according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a device for fusing visible light images and infrared images according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a visible light image and infrared image fusion apparatus according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a structure of a computer-readable storage medium in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer and more specific, the present invention will be described in further detail below with reference to the accompanying drawings and examples.
Referring to FIG. 1, FIG. 1 is a flow chart of a method for fusing a visible light image and an infrared image according to an embodiment of the present application. It should be noted that the embodiment is not limited to the flow sequence shown in FIG. 1 if substantially the same results are obtained. As shown in FIG. 1, this embodiment includes:
s120: and obtaining a visible light image and an infrared image, wherein the visible light image comprises a visible light brightness image and a visible light color image, and the infrared image comprises an infrared brightness image and an infrared pseudo-color image.
In this embodiment, a device that simultaneously captures a visible light image and an infrared image, such as a thermal infrared imager, may be used to simultaneously capture the visible light image and the infrared image. Different devices may also be used to acquire the visible and infrared images, respectively, such as a visible camera and an infrared camera. When different devices are used for acquiring the visible light image and the infrared image, the two devices can be placed at the same position, and the optical axes of the lenses are in the same direction and parallel to acquire the visible light image and the infrared image at the same angle. The two devices may also be placed in different locations to acquire visible and infrared images at different angles. This embodiment is not limited to the apparatus used and the image acquisition angle.
The visible light image and the infrared image acquired by the device can be fused directly, or they can be sampled to obtain the region images to be fused. The visible light image and the infrared image to be fused need to have the same resolution; the same resolution can be set when the imaging device acquires the images, or the resolution can be adjusted after acquisition. The feature points of the visible light image and the infrared image to be fused also need to correspond, so the two images can be registered before fusion, using registration methods such as a calibration method or a feature point method. When the visible light image and the infrared image are acquired by the same device, or by different devices at the same angle, image registration can be performed directly; when they are acquired at different angles, the angle needs to be adjusted before registration.
The visible light image data and the infrared image data to be fused can be converted into YUV data, and the brightness images and the color images are fused respectively. YUV is a color coding method in which "Y" represents brightness (Luminance or Luma), i.e. the gray-scale value, and "U" and "V" represent chromaticity (Chrominance or Chroma), which describe the color and saturation of the image and specify the color of each pixel. The visible light image data can be converted directly into YUV data to obtain the visible light brightness image and the visible light color image. For the infrared image, the brightness data is taken as the infrared brightness image, an infrared pseudo-color image is generated from the infrared brightness data according to a mapping relation, and the color information of the pseudo-color image together with the brightness information of the infrared brightness image forms the YUV data of the infrared image. The infrared pseudo-color image can be generated by any existing method, which is not limited here.
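As a non-limiting sketch of this preparation step, the following Python code (using OpenCV and NumPy) splits a registered visible light frame into its Y/U/V planes and builds the infrared pseudo-color planes from the infrared brightness image; the function name and the choice of colormap for the pseudo-color mapping are illustrative assumptions rather than requirements of the method.

```python
import cv2
import numpy as np

def prepare_yuv(visible_bgr: np.ndarray, ir_luma: np.ndarray):
    """Split a registered visible frame into Y/U/V planes and build the
    infrared pseudo-color U/V planes from the infrared brightness image."""
    # Visible image: convert BGR -> YUV and separate luminance from color.
    vis_yuv = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2YUV)
    vis_y, vis_u, vis_v = cv2.split(vis_yuv)

    # Infrared image: the brightness plane is used as-is; a pseudo-color image
    # is generated from it by a fixed mapping (the colormap choice here is an
    # assumption -- any temperature palette plays the same role).
    ir_pseudo_bgr = cv2.applyColorMap(ir_luma, cv2.COLORMAP_JET)
    ir_yuv = cv2.cvtColor(ir_pseudo_bgr, cv2.COLOR_BGR2YUV)
    _, ir_u, ir_v = cv2.split(ir_yuv)

    # The two sources must already be registered and share one resolution.
    assert vis_y.shape == ir_luma.shape
    return (vis_y, vis_u, vis_v), (ir_luma, ir_u, ir_v)
```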
S140: and carrying out region division on the infrared pseudo-color image/visible light color image, respectively fusing the infrared pseudo-color image and the visible light color image of each region, and merging the color images fused by each region to obtain a color fusion image.
When the color images are divided into regions, the division may take either the infrared pseudo-color image or the visible light color image as the reference. If only the infrared pseudo-color image is divided into regions, the infrared pseudo-color image of each region is fused with the visible light color image corresponding to that region; although the visible light color image is not itself divided, it is in effect fused region by region. Similarly, only the visible light color image may be divided into regions, and the visible light color image of each region fused with the corresponding infrared pseudo-color image. By dividing the color images into regions, different fusion strategies can be adopted for different regions, yielding fused images with different styles.
S160: and fusing the visible brightness image and the infrared brightness image to obtain a brightness fused image.
Different fusion modes can be adopted for the brightness images in different scenes, including but not limited to segmented fusion, multi-scale fusion, and the like.
In another embodiment, step S160 may precede step S140, or steps S140 and S160 may be performed in parallel so that color fusion and brightness fusion proceed simultaneously.
S180: and combining the color fusion image and the brightness fusion image to obtain a visible light-infrared fusion image.
In the embodiment, the visible light image and the infrared image are respectively subjected to color fusion and brightness fusion, the infrared pseudo-color image/visible light color image is divided into a plurality of areas during color fusion, the images in different areas can be subjected to color fusion by adopting different fusion strategies, and the visible light information and the infrared information in the images can be utilized more fully and flexibly to obtain the optimal scene information of each area.
When the color images are divided into regions, different division rules can be adopted to obtain different division modes. The infrared pseudo-color image/visible light color image may be divided into regions according to the color temperature of the color image, the pixel values of the brightness image, and the like.
Referring to fig. 2, fig. 2 is a flowchart illustrating a color image fusion method according to an embodiment of the application. It should be noted that, if there are substantially the same results, the embodiment is not limited to the flow sequence shown in fig. 2. In this embodiment, an example of dividing an infrared pseudo-color image will be described, and the specific method includes:
s241: and obtaining pixel values of all pixel points of the infrared brightness image corresponding to the infrared pseudo-color image.
In this embodiment, the color images are divided into regions with the pixel values of the infrared brightness image as the reference. The infrared brightness image is traversed pixel by pixel to obtain the pixel value of each pixel point; specifically, the absolute value of each pixel value may be taken.
S242: and determining a threshold interval in which the pixel value is positioned, and dividing the infrared pseudo-color image/visible light color image into an area corresponding to the threshold interval.
One or more thresholds may be preset to form two or more threshold intervals; each pixel value is compared with the thresholds to determine which interval it falls into, and pixel points in the same interval are assigned to the same region. Two or more regions may be divided as needed. The application is described below by taking the division into two regions as an example, but is not limited thereto.
Referring to fig. 3, fig. 3 is a schematic block diagram illustrating a region division flow of a color image fusion method according to an embodiment of the application. In this embodiment, the infrared pseudo-color image is divided into a first region and a second region by taking two threshold intervals as an example.
A first threshold Thr is preset, and the acquired pixel values ir_y(i) are each compared with the first threshold Thr.
If the pixel value ir_y(i) of the infrared brightness image at the current pixel point i is smaller than the threshold Thr, the pixel point is assigned to the first region, which may be called the low-temperature region; if ir_y(i) is greater than or equal to Thr, the pixel point is assigned to the second region, which may be called the high-temperature region. That is, the pixel values of the corresponding infrared brightness image are smaller than the first threshold in the first region and greater than or equal to the first threshold in the second region. Correspondingly, the area of the visible light color image corresponding to the first region of the infrared pseudo-color image is assigned to the first region, and the area corresponding to the second region is assigned to the second region.
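A minimal sketch of this threshold-interval division is given below, assuming an 8-bit infrared brightness image and a preset list of thresholds; with a single threshold Thr it reduces to the two-region split just described. The function name is illustrative.

```python
import numpy as np

def divide_regions(ir_y: np.ndarray, thresholds):
    """Assign each pixel to the threshold interval its infrared brightness
    value falls into; with a single threshold Thr this yields the two
    regions described above (0 = low-temperature, 1 = high-temperature)."""
    region_index = np.digitize(ir_y, bins=np.asarray(thresholds))
    return region_index

# Example with one threshold Thr:
# region_index == 0 where ir_y <  Thr  (first region)
# region_index == 1 where ir_y >= Thr  (second region)
```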
After the regions are divided, the color images of different regions can be fused by adopting the same fusion strategy, and can also be fused by adopting different fusion strategies. In the embodiment, different fusion strategies are adopted to fuse the visible light color image and the infrared pseudo color image of the two areas.
S243: and fusing the visible light color image and the infrared pseudo color image of the first area.
And fusing the visible light color image and the infrared pseudo color image of the first area by using a first fusion strategy. And respectively determining the fusion weight of the visible light color image and the fusion weight of the infrared pseudo color image of each pixel point, and carrying out weighted summation on the color value of the visible light color image and the color value of the infrared pseudo color image of each pixel point in the first area to obtain a first color fusion image. The infrared pseudo color image fusion weight is the ratio of the pixel value of the infrared brightness image to the first threshold value, and the sum of the infrared pseudo color image fusion weight and the visible light color image fusion weight is 1.
The color values of the visible light color image and the infrared pseudo color image are divided into a U component and a V component, and the U component and the V component can be separated and fused when the colors are fused, and can be integrally fused. When the two color components are fused separately, the same image division mode can be adopted, different image division modes can be adopted, the same image fusion strategy can be adopted, and different image fusion strategies can be adopted. In this embodiment, description will be given by taking the case of fusing the U component and the V component, respectively, that is, fusing the visible light color image and the infrared pseudo color image of the first area includes fusing the visible light color image U component and the infrared pseudo color image U component of the first area, respectively, and fusing the visible light color image V component and the infrared pseudo color image V component of the first area.
The fusing of the visible light color image U component and the infrared pseudo color image U component of the first region using the first fusing strategy includes: and respectively determining the fusion weight of the visible light color image U component and the infrared pseudo color image U component of each pixel point, carrying out weighted summation on the visible light color image U component and the infrared pseudo color image U component of each pixel point in the first region, and traversing each pixel point to obtain a first U component color fusion image. The U component fusion weight of the infrared pseudo color image is the ratio of the pixel value of the infrared brightness image to the first threshold value, and the sum of the U component fusion weight of the infrared pseudo color image and the U component fusion weight of the visible light color image is 1.
The smaller the pixel value of the infrared luminance image, the smaller the weight of the infrared pseudo color image U component, and the larger the weight of the visible light color image U component.
And carrying out weighted summation on the U component of the visible light color image and the U component of the infrared pseudo color image of each pixel point in the first region, namely, carrying out summation on the product of the fusion weight of the U component of the infrared pseudo color image and the product of the fusion weight of the U component of the visible light color image and the U component of the visible light color image, so as to obtain a U component color image of the first region.
And similarly, fusing the visible light color image V component and the infrared pseudo color image V component of the first area by using a first fusion strategy, and traversing each pixel point to obtain a first V component color fusion image.
S244: and fusing the visible light color image and the infrared pseudo color image of the second area.
And fusing the visible light color image and the infrared pseudo color image of the second area by using a second fusing strategy to obtain a second color fused image. And taking the color value of the infrared pseudo color image of each pixel point in the second area as the color value of the color image after the pixel points are fused, namely the color value of the second color fusion image.
In this embodiment, the description will be given taking the fusion of the U component and the V component, respectively, that is, the fusion of the visible light color image and the infrared pseudo color image of the second area includes the fusion of the visible light color image U component and the infrared pseudo color image U component of the second area, respectively, and the fusion of the visible light color image V component and the infrared pseudo color image V component of the second area.
Fusing the visible light color image and the infrared pseudo-color image of the second area using the second fusion strategy comprises: directly taking the U component of the infrared pseudo-color image of each pixel point in the second area as the U component of the fused color image at that pixel point, and traversing each pixel point to obtain a second U-component color fusion image; and directly taking the V component of the infrared pseudo-color image of each pixel point in the second area as the V component of the fused color image at that pixel point, and traversing each pixel point to obtain a second V-component color fusion image.
In another embodiment, the order of step S243 and step S244 is not limited, and step S244 may be performed before step S243, or the first region fusion and the second region fusion may be performed simultaneously.
In this embodiment, the U-component color fusion image fusion_U is composed of the first U-component color fusion image and the second U-component color fusion image, and the V-component color fusion image fusion_V is composed of the first V-component color fusion image and the second V-component color fusion image. The calculation formula of the fusion strategy for the U-component and V-component color images is as follows:
fusion_U(i) = (ir_y(i)/Thr) * ir_u(i) + (1 - ir_y(i)/Thr) * vis_u(i), when ir_y(i) < Thr; fusion_U(i) = ir_u(i), when ir_y(i) >= Thr;
fusion_V(i) = (ir_y(i)/Thr) * ir_v(i) + (1 - ir_y(i)/Thr) * vis_v(i), when ir_y(i) < Thr; fusion_V(i) = ir_v(i), when ir_y(i) >= Thr;
wherein ir_u(i) is the U component of the infrared pseudo-color image, vis_u(i) is the U component of the visible light color image, ir_v(i) is the V component of the infrared pseudo-color image, vis_v(i) is the V component of the visible light color image, ir_y(i) is the pixel value of the infrared brightness image at pixel point i, and Thr is the first threshold.
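The piecewise formula above can be implemented per pixel as in the following sketch, which assumes NumPy arrays that have already been registered and separated into Y/U/V planes; the function name is illustrative.

```python
import numpy as np

def fuse_color(ir_y, ir_u, ir_v, vis_u, vis_v, thr):
    """Per-pixel color fusion: weighted sum in the low-temperature region,
    pure infrared pseudo-color in the high-temperature region."""
    ir_y = ir_y.astype(np.float32)
    w_ir = np.clip(ir_y / float(thr), 0.0, 1.0)   # infrared weight in [0, 1]
    w_vis = 1.0 - w_ir                            # the two weights sum to 1

    low = ir_y < thr                              # first (low-temperature) region
    fusion_u = np.where(low, w_ir * ir_u + w_vis * vis_u, ir_u)
    fusion_v = np.where(low, w_ir * ir_v + w_vis * vis_v, ir_v)
    return fusion_u, fusion_v
```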
S245: and merging the color images of all the areas to obtain a color fusion image.
And combining the U-component color image and the V-component color image to obtain a color fusion image.
In this embodiment, color fusion of the visible light image and the infrared image divides the infrared pseudo-color image and the visible light color image into two regions and applies a different fusion strategy in each, so that the color fusion image carries both the color information of the visible light image and the pseudo-color information of the infrared image. With this segmented fusion, the pseudo-color information of the infrared image is preserved in the high-temperature region while more visible light color information is given to the low-temperature region. This meets applications that care about the temperature change of high-temperature objects while still needing to observe the true colors of objects in the scene, and allows the visible light information and infrared information in the images to be used more fully and flexibly to obtain the optimal scene information of each region.
Referring to fig. 4, fig. 4 is a flowchart illustrating a luminance image fusion method according to an embodiment of the application. It should be noted that, if there are substantially the same results, the embodiment is not limited to the flow sequence shown in fig. 4. As shown in fig. 4, in this embodiment, the luminance image fusion is performed by using a multi-scale method, and the specific method includes:
s461: and respectively carrying out multi-scale decomposition on the visible brightness image and the infrared brightness image.
The visible light brightness image is subjected to downsampling and low-pass filtering several times in sequence to obtain a visible light brightness background image and visible light brightness detail images at several scales. The infrared brightness image is likewise subjected to downsampling and low-pass filtering several times in sequence to obtain an infrared brightness background image and infrared brightness detail images at several scales. Low-pass filtering is a filtering mode with a set cutoff: low-frequency signals within the cutoff pass through normally, while high-frequency signals above the cutoff are blocked.
Referring to fig. 5, fig. 5 is a schematic diagram of a multi-scale decomposition flow of a visible/infrared brightness image according to an embodiment of the application. In this embodiment, the visible luminance image and the infrared luminance image are low-pass filtered three times, respectively.
A first cutoff is set for the first low-pass filter, a second cutoff for the second, and a third cutoff for the third, with the first cutoff greater than the second and the second greater than the third. The visible light brightness image is first low-pass filtered, preserving the information within the first cutoff; the result is downsampled and low-pass filtered a second time, preserving the information within the second cutoff; it is then downsampled and low-pass filtered a third time, preserving the information within the third cutoff, which yields the visible light brightness background image. Subtracting the image after the first low-pass filtering from the image before it gives the first-scale visible light brightness detail image, which may also be called the small-scale detail image; the difference around the second low-pass filtering gives the second-scale (medium-scale) visible light brightness detail image; and the difference around the third low-pass filtering gives the third-scale (large-scale) visible light brightness detail image.
And carrying out multi-scale decomposition on the infrared brightness image based on the method to obtain an infrared brightness background image, a first-scale infrared brightness detail image, a second-scale infrared brightness detail image and a third-scale infrared brightness detail image.
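A compact sketch of this multi-scale decomposition is given below, assuming Gaussian low-pass filtering with a fixed kernel and downsampling by a factor of 2; the kernel size, the downsampling factor and the function name are illustrative assumptions.

```python
import cv2
import numpy as np

def decompose(luma: np.ndarray, levels: int = 3):
    """Low-pass / subtract / downsample pyramid: returns the detail images
    (finest scale first) and the remaining low-frequency background image."""
    details = []
    current = luma.astype(np.float32)
    low = current
    for level in range(levels):
        if level > 0:
            current = low[::2, ::2]                 # downsample before the next pass
        low = cv2.GaussianBlur(current, (5, 5), 0)  # low-pass filtering (kernel size assumed)
        details.append(current - low)               # detail = image before minus image after
    background = low                                # image left after the last low-pass
    return details, background
```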
Referring to fig. 6, fig. 6 is a schematic diagram of a multi-scale fusion process of a visible luminance image and an infrared luminance image according to an embodiment of the application.
S462: and respectively fusing the visible brightness image and the infrared brightness image of each scale.
And fusing the visible light brightness background image and the infrared brightness background image to obtain a brightness background fusion image. Specifically, the visible light brightness background image and the infrared brightness background image are fused in proportion, that is, all pixels are fused with the same proportion. The fusion weight of the visible light brightness background image and the fusion weight of the infrared brightness background image are determined, with the two weights summing to 1, and the brightness value of the visible light brightness background image and the brightness value of the infrared brightness background image are weighted and summed to obtain the brightness background fusion image.
And fusing the visible light brightness detail images and the infrared brightness detail images of each scale respectively to obtain a plurality of brightness detail fusion images. The first-scale visible light brightness detail image and the first-scale infrared brightness detail image are fused in proportion, and the visible light brightness detail image fusion weight and the infrared brightness detail image fusion weight of each pixel point are determined for each scale, with the first-scale visible light detail fusion weight and the first-scale infrared detail fusion weight of each pixel point summing to 1.
A weight map is set with one coordinate position per pixel point. The first-scale visible light brightness detail image and the first-scale infrared brightness detail image are compared pixel by pixel; when the absolute pixel value of the first-scale infrared brightness detail image is larger than that of the first-scale visible light brightness detail image, the coordinate position of the weight map corresponding to that pixel point is marked as 1, and otherwise as 0. Gaussian filtering is then applied to the marked weight map. Gaussian filtering is a weighted-average process over the whole image, where the value of each pixel point is obtained by weighted averaging of that pixel and the other pixel values in its neighborhood. Each coordinate of the adjusted weight map then has a value between 0 and 1; this value is used as the weight of the corresponding pixel point of the first-scale infrared brightness detail image, and the weight of the corresponding pixel point of the first-scale visible light brightness detail image is obtained from the constraint that the two weights sum to 1. The brightness value of the first-scale visible light brightness detail image and the brightness value of the first-scale infrared brightness detail image are weighted and summed to obtain the first-scale brightness detail fusion image.
Based on the method, the second-scale visible light brightness detail image and each pixel point of the second-scale infrared brightness detail image are fused in proportion, so that a second-scale brightness detail fusion image is obtained; and fusing each pixel point of the third-scale visible light brightness detail image and each pixel point of the third-scale infrared brightness detail image in proportion to obtain a third-scale brightness detail fused image.
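The per-scale fusion described above can be sketched as follows, where the proportional background fusion uses a single global infrared weight and the detail fusion uses the marked-and-smoothed weight map; the Gaussian kernel size used to soften the weight map and the function names are assumptions.

```python
import cv2
import numpy as np

def fuse_background(vis_bg, ir_bg, w_ir_bg):
    """Proportional background fusion: one global infrared weight w_ir_bg,
    visible weight 1 - w_ir_bg, applied identically to every pixel."""
    return w_ir_bg * ir_bg + (1.0 - w_ir_bg) * vis_bg

def fuse_detail(vis_det, ir_det):
    """Detail fusion for one scale using the marked weight map: 1 where the
    infrared detail dominates in magnitude, 0 elsewhere, then softened by
    Gaussian filtering so each weight lies between 0 and 1."""
    weight = (np.abs(ir_det) > np.abs(vis_det)).astype(np.float32)
    weight = cv2.GaussianBlur(weight, (5, 5), 0)   # kernel size is an assumption
    return weight * ir_det + (1.0 - weight) * vis_det
```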
S463: and merging the brightness fusion images of all scales to obtain the brightness fusion image.
Before merging, the fused brightness detail images of all scales are enhanced: a group of amplification factors greater than or equal to 1 is set, and the brightness detail fusion image of each scale is multiplied by its corresponding amplification factor. The brightness detail fusion images of different scales may use the same amplification factor or different amplification factors according to the fusion requirements.
The brightness background fusion image is superposed with the third-scale brightness detail fusion image; the result is upsampled and superposed with the second-scale brightness detail fusion image; and that result is upsampled again and superposed with the first-scale brightness detail fusion image to obtain the brightness fusion image. The upsampling factor is the same as the downsampling factor used in the multi-scale decomposition, so that the brightness fusion image has the same size as the infrared brightness image and the visible light brightness image.
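A sketch of this merge step, consistent with the decomposition sketch above, is shown below; linear resizing is used for upsampling, which is an assumption, and gains is the list of amplification factors for the detail scales (finest first).

```python
import cv2
import numpy as np

def reconstruct(fused_background, fused_details, gains):
    """Merge the fused background with the fused detail images from the
    coarsest scale up, multiplying each detail layer by its amplification
    factor (>= 1) before superposition."""
    merged = fused_background
    # fused_details and gains are finest-first, so walk them in reverse.
    for detail, gain in zip(reversed(fused_details), reversed(gains)):
        if merged.shape != detail.shape:
            # Upsample the running result back to this detail layer's size.
            merged = cv2.resize(merged, (detail.shape[1], detail.shape[0]))
        merged = merged + gain * detail
    return merged
```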
In the embodiment, the visible light image and the infrared image are subjected to brightness fusion, and the visible light brightness image and the infrared brightness image are subjected to multi-scale decomposition, so that detail information of different scales in a scene can be well reserved. The brightness background image is subjected to an integral fusion strategy with adjustable proportion, so that the integral look and feel of the image can be changed through gray level change; the brightness detail image adopts a multiscale fusion strategy based on an attention mechanism, and simultaneously enhances details, so that the detail information of visible light and infrared images in different scenes is guaranteed to be fully utilized, and the optimal scene information of each region is obtained.
Referring to fig. 7, fig. 7 is a schematic diagram of fusion rate adjustment according to an embodiment of the application.
In the prior art, the visible light image and the infrared image are fused in a fixed proportion, so the style of the fused visible light-infrared image is fixed and cannot be adjusted through external parameters; as a result, optimal scene information cannot be obtained in many scenes. In this embodiment, an externally configured fusion rate is introduced: the fusion rate is the ratio of infrared image information to visible light image information in the visible light-infrared fusion image, and the fusion is adjusted based on the fusion rate, achieving unified regulation of the image fusion style.
The fusion rate can act on the luminance image fusion 71. During luminance background fusion, the fusion weights of the visible light luminance background image and the infrared luminance background image are adjusted based on the fusion rate to adjust the image fusion style: the larger the fusion rate, the larger the fusion weight of the infrared luminance background image and the smaller the fusion weight of the visible light luminance background image. When enhancing the fused luminance details, the amplification factor is adjusted based on the fusion rate: the larger the fusion rate, the larger the amplification factor. Further, the fusion rate may also act on the color image fusion 72. During color fusion, one or more thresholds are set based on the fusion rate to form two or more threshold intervals, and the infrared pseudo-color image is divided into the regions corresponding to these intervals, so the image fusion style is adjusted by the fusion rate. When the infrared pseudo-color image is divided into two regions, the first threshold used for the division is set based on the fusion rate, and the larger the fusion rate, the smaller the first threshold. During color fusion of the first region, the fusion weights of the infrared pseudo-color image and the visible light color image are determined by where the current infrared brightness pixel value lies between 0 and the first threshold, which again adjusts the image fusion style.
Further, the fusion rate may act on the luminance image fusion 71 and the color image fusion 72 at the same time, with a consistent adjustment of the image fusion style: the fusion rate is simultaneously associated with the fusion proportion of the luminance background images in luminance fusion, the amplification factor of detail enhancement, and the threshold in color fusion. The mechanism is as follows. When more visible light information is desired, the fusion proportion of the visible light luminance background image is made larger, the detail enhancement coefficient smaller and the threshold larger, yielding a fused image that retains most of the visible light luminance and color information while carrying pseudo-color (temperature) information of high-temperature objects. When more infrared information is desired, the fusion proportion of the infrared luminance background image is made larger, the detail enhancement coefficient larger and the threshold smaller, yielding a fused image that retains most of the infrared luminance and pseudo-color information while carrying the detail outlines of the scene.
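The coupling of the three controls to a single fusion rate can be sketched as below; the linear mappings and the value ranges are illustrative assumptions, while the monotonic directions (a larger rate gives a larger infrared background weight, a larger amplification factor and a smaller first threshold) follow the description above.

```python
def fusion_rate_params(rate: float,
                       thr_min: int = 64, thr_max: int = 224,
                       gain_min: float = 1.0, gain_max: float = 3.0):
    """Map one external fusion rate in [0, 1] to the three coupled controls.
    Larger rate -> more infrared: larger background weight, larger detail
    amplification factor, smaller first threshold for the color split.
    The linear mappings and ranges here are illustrative assumptions."""
    rate = min(max(rate, 0.0), 1.0)
    w_ir_bg = rate                                          # infrared background fusion weight
    gain = gain_min + rate * (gain_max - gain_min)          # detail amplification factor (>= 1)
    thr = int(round(thr_max - rate * (thr_max - thr_min)))  # first threshold Thr
    return w_ir_bg, gain, thr
```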
In the embodiment, color fusion and brightness fusion are respectively carried out on the visible light image and the infrared image, the infrared pseudo-color image and the visible light color image are divided into two areas when the color fusion is carried out, different fusion strategies are adopted in the two areas, so that the color fusion image has color information of the visible light image and pseudo-color information of the infrared image at the same time, and when the pseudo-color information of the infrared image in a high temperature area is retained, more visible light color image color information in a low temperature area is endowed. When the brightness fusion is carried out, the visible brightness image and the infrared brightness image are subjected to multi-scale decomposition, so that the detail information of different scales in the scene can be well reserved. The brightness background image is subjected to an integral fusion strategy with adjustable proportion, so that the integral look and feel of the image can be changed through gray level change; the brightness detail image adopts a multiscale fusion strategy based on an attention mechanism, and simultaneously enhances details, so that the detail information of the visible light and infrared images in different scenes is fully utilized. Meanwhile, the fusion rate configured outside is introduced, the fusion proportion of the low-frequency information of the visible light and infrared brightness images is directly determined, the amplification proportion of details is adjusted through the association detail enhancement parameters, and the segmentation interval and the fusion proportion of the color fusion images are adjusted through the association threshold value, so that the style of the whole fusion image is adjusted. When the fusion rate is smaller, the fusion image has brightness and color information close to that of a visible light image, but infrared pseudo-color (temperature) information of the object can be reserved in a high-temperature area; when the fusion rate is large, the fusion image has brightness and pseudo color information of the near infrared image, and has edge information of the visible light image and color information of a low-temperature area. Based on fusion rate adjustment, the visible light-infrared fusion image can retain important information and can be adjusted between a visible light image and an infrared image in style.
Referring to FIG. 8, FIG. 8 is a flow chart of another method for fusing a visible light image and an infrared image according to an embodiment of the present application. It should be noted that the embodiment is not limited to the flow sequence shown in FIG. 8 if substantially the same results are obtained. As shown in FIG. 8, this embodiment includes:
s820: and obtaining a visible light image and an infrared image, wherein the visible light image comprises a visible light brightness image and a visible light color image, and the infrared image comprises an infrared brightness image and an infrared pseudo-color image.
S840: and respectively carrying out multi-scale decomposition on the visible light brightness image and the infrared brightness image, fusing the visible light brightness image and the infrared brightness image of each scale, and merging the brightness fused images of each scale to obtain a brightness fused image. The fusion weight of the visible light brightness image and the fusion weight of the infrared brightness image are set based on the fusion rate, wherein the fusion rate comprises the ratio of infrared image information to visible light image information in the visible light-infrared fusion image.
S860: and fusing the visible light color image and the infrared pseudo color image to obtain a color fused image.
Different fusion modes can be adopted for the color/pseudo-color images in different scenes, including but not limited to segmented fusion, multi-scale fusion, and the like.
The order of step S840 and step S860 is not limited, and in another embodiment, step S860 may precede step S840, and the luminance fusion and the color fusion may be performed simultaneously.
S880: and combining the color fusion image and the brightness fusion image to obtain a visible light-infrared fusion image.
In the embodiment, the brightness fusion and the color fusion are respectively carried out on the visible light image and the infrared image, the multi-scale decomposition is carried out on the visible light brightness image and the infrared brightness image during the brightness fusion, the proportion-adjustable integral fusion is carried out on the visible light brightness image and the infrared brightness image of each scale based on the fusion rate set outside, the detail information of different scales in a scene can be well reserved, and the full utilization of the detail information of the visible light image and the infrared image in different scenes is ensured.
Referring to FIG. 9, FIG. 9 is a flow chart of another method for fusing a visible light image and an infrared image according to an embodiment of the present application. It should be noted that the embodiment is not limited to the flow sequence shown in FIG. 9 if substantially the same results are obtained. As shown in FIG. 9, this embodiment includes:
S920: and obtaining a visible light image and an infrared image, wherein the visible light image comprises a visible light brightness image and a visible light color image, and the infrared image comprises an infrared brightness image and an infrared pseudo-color image.
S940: and respectively carrying out multi-scale decomposition on the visible light brightness image and the infrared brightness image, fusing the visible light brightness image and the infrared brightness image of each scale, and merging the brightness fused images of each scale to obtain a brightness fused image. The fusion weight of the visible light brightness image and the fusion weight of the infrared brightness image are set based on the fusion rate, wherein the fusion rate comprises the ratio of infrared image information to visible light image information in the visible light-infrared fusion image.
S960: and carrying out region division on the infrared pseudo-color image/visible light color image, respectively fusing the infrared pseudo-color image and the visible light color image of each region, and merging the color images fused by each region to obtain a color fusion image.
The order of step S940 and step S960 is not limited; in another embodiment, step S960 may precede step S940, or the brightness fusion and the color fusion may be performed in parallel.
S980: and combining the color fusion image and the brightness fusion image to obtain a visible light-infrared fusion image.
The fusion method of a visible light image and an infrared image includes brightness fusion and color fusion. The fusion-rate-adjusted multi-scale brightness fusion method may be combined with a prior-art color fusion method; a prior-art brightness fusion method may be combined with the segmented color fusion method for the visible light color image and the infrared pseudo-color image; or the fusion-rate-adjusted multi-scale brightness fusion method may be combined with the segmented color fusion method for the visible light color image and the infrared pseudo-color image. Any scheme that adopts the fusion-rate-adjusted multi-scale brightness fusion method or the segmented fusion method for the visible light color image and the infrared pseudo-color image falls within the patent protection scope of the invention.
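To make the segmented color fusion concrete, the sketch below divides the image by comparing the infrared brightness with a first threshold: pixels below the threshold form the low-temperature region, where the visible and infrared chroma are blended with the infrared weight given by the ratio of the infrared brightness to the threshold, and pixels at or above the threshold form the high-temperature region, where the infrared pseudo-color is kept directly. The two-channel CbCr-style chroma representation, the function name, and the variable names are assumptions for illustration.

```python
import numpy as np

def fuse_color(chroma_vis, chroma_ir, y_ir, first_threshold):
    """Segmented color fusion of visible chroma and infrared pseudo-color
    chroma, driven by the infrared brightness image (illustrative sketch).

    chroma_vis, chroma_ir: (H, W, 2) arrays, e.g. CbCr planes (assumed).
    y_ir: (H, W) infrared brightness image.
    first_threshold: scalar threshold separating the two regions.
    """
    # Infrared weight in the low-temperature region: ratio of the infrared
    # brightness to the first threshold; the visible weight is 1 - w_ir.
    w_ir = np.clip(y_ir / float(first_threshold), 0.0, 1.0)[..., None]
    blended = (1.0 - w_ir) * chroma_vis + w_ir * chroma_ir
    # High-temperature region keeps the infrared pseudo-color directly.
    low_temp = (y_ir < first_threshold)[..., None]
    return np.where(low_temp, blended, chroma_ir)
```

Because the threshold itself shrinks as the fusion rate grows (see the earlier sketch), a larger fusion rate enlarges the high-temperature region and preserves more of the infrared pseudo-color information.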
Referring to fig. 10, fig. 10 is a schematic structural diagram of a visible light image and infrared image fusion device according to an embodiment of the application. In this embodiment, the visible light image and infrared image fusion apparatus includes an acquisition module 101, a color image fusion module 102, a luminance image fusion module 103, and a merging module 104.
The acquisition module 101 is configured to acquire a visible light image and an infrared image, where the visible light image includes a visible light brightness image and a visible light color image, and the infrared image includes an infrared brightness image and an infrared pseudo-color image; the color image fusion module 102 is configured to fuse the visible light color image and the infrared pseudo-color image to obtain a color fusion image; the luminance image fusion module 103 is configured to fuse the visible light brightness image and the infrared brightness image to obtain a brightness fusion image; and the merging module 104 is configured to merge the color fusion image and the brightness fusion image to obtain a visible light-infrared fusion image. The visible light image and infrared image fusion apparatus thus performs color fusion and brightness fusion on the visible light image and the infrared image separately.
In an embodiment, during color fusion, the infrared pseudo-color image/visible light color image is divided into different regions according to different requirements, and color fusion is performed separately in each region, so that the visible light information and the infrared information are fully utilized and the optimal scene information of each region is obtained.
Specifically, the color image fusion module 102 is configured to divide the infrared pseudo-color image/visible light color image into regions, fuse the infrared pseudo-color image and the visible light color image of each region respectively, and merge the fused color images of all regions to obtain a color fusion image.
In another embodiment, during brightness fusion, the visible light brightness image and the infrared brightness image are decomposed at multiple scales, so that detail information at different scales in the scene is well preserved. The brightness background image is fused with a proportion-adjustable overall strategy, while the brightness detail images are fused with a multi-scale strategy based on an attention mechanism and are enhanced at the same time, so that the detail information of the visible light and infrared images in different scenes is fully utilized and the optimal scene information of each region is obtained.
Further, the luminance image fusion module 103 is configured to perform multi-scale decomposition on the visible light brightness image and the infrared brightness image respectively, fuse the visible light brightness image and the infrared brightness image at each scale, and merge the brightness fusion images of all scales to obtain a brightness fusion image. The fusion weight of the visible light brightness image and the fusion weight of the infrared brightness image are set based on the fusion rate, where the fusion rate represents the proportion of infrared image information relative to visible light image information in the visible light-infrared fusion image.
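Putting the pieces together, the following sketch shows one way the four modules could be composed in code. The class name, method names, and dictionary-based image representation are hypothetical, and the helper functions fusion_rate_params, fuse_luma, and fuse_color are the illustrative sketches given earlier, not identifiers from this application.

```python
class VisibleInfraredFusion:
    """Illustrative composition of the acquisition, color fusion, luminance
    fusion, and merging modules (names and structure are assumptions)."""

    def __init__(self, fusion_rate=0.5):
        self.fusion_rate = fusion_rate

    def acquire(self, visible, infrared):
        # Acquisition module: split each input into brightness and color parts.
        return (visible["luma"], visible["chroma"],
                infrared["luma"], infrared["pseudo_chroma"])

    def fuse(self, visible, infrared):
        y_vis, c_vis, y_ir, c_ir = self.acquire(visible, infrared)
        first_threshold, _, _ = fusion_rate_params(self.fusion_rate)
        # Luminance image fusion module: multi-scale, fusion-rate adjustable.
        y_fused = fuse_luma(y_vis, y_ir, self.fusion_rate)
        # Color image fusion module: segmented fusion driven by infrared brightness.
        c_fused = fuse_color(c_vis, c_ir, y_ir, first_threshold)
        # Merging module: recombine brightness and color, e.g. as Y plus CbCr planes.
        return {"luma": y_fused, "chroma": c_fused}
```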
Referring to fig. 11, fig. 11 is a schematic structural diagram of a visible light image and infrared image fusion apparatus according to an embodiment of the application. In this embodiment, the visible light image and infrared image fusion apparatus 10 includes a processor 11.
The processor 11 may also be referred to as a CPU (Central Processing Unit). The processor 11 may be an integrated circuit chip with signal processing capabilities. The processor 11 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The general-purpose processor may be a microprocessor, or the processor 11 may be any conventional processor or the like.
The visible light image and infrared image fusion apparatus 10 may further include a memory (not shown) for storing instructions and data required for the operation of the processor 11.
The processor 11 is configured to execute instructions to implement the method provided by any embodiment of the visible light image and infrared image fusion method of the present application, or by any non-conflicting combination of these embodiments.
Referring to fig. 12, fig. 12 is a schematic diagram of a computer readable storage medium according to an embodiment of the application. The computer readable storage medium 20 of the embodiments of the present application stores instructions/program data 21, and when the instructions/program data 21 are executed, the method provided by any embodiment of the visible light image and infrared image fusion method of the present application, or by any non-conflicting combination of these embodiments, is implemented. The instructions/program data 21 may be stored in the storage medium 20 as a software product, so that a computer device (which may be a personal computer, a server, a network device, or the like) or a processor executes all or part of the steps of the methods of the embodiments of the application. The storage medium 20 includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code, or a terminal device such as a computer, a server, a mobile phone, or a tablet.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of units is merely a logical functional division, and there may be other divisions in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing description is only of embodiments of the present invention and is not intended to limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (15)

1. A method for fusing a visible light image and an infrared image, the method comprising:
Obtaining a visible light image and an infrared image, wherein the visible light image comprises a visible light brightness image and a visible light color image, and the infrared image comprises an infrared brightness image and an infrared pseudo-color image;
dividing the infrared pseudo-color image/visible light color image into areas, respectively fusing the infrared pseudo-color image and the visible light color image of each area, and merging the color images fused by each area to obtain a color fusion image; fusing the visible brightness image and the infrared brightness image to obtain a brightness fused image;
Combining the color fusion image and the brightness fusion image to obtain a visible light-infrared fusion image;
Wherein the fusing the visible brightness image and the infrared brightness image to obtain a brightness fused image comprises:
respectively carrying out multi-scale decomposition on the visible light brightness image and the infrared brightness image, fusing the visible light brightness image and the infrared brightness image of each scale, and merging brightness fusion images of each scale to obtain a brightness fusion image, wherein the fusion weight of the visible light brightness image and the fusion weight of the infrared brightness image are set based on a fusion rate, and the fusion rate comprises the proportion of infrared image information and visible light image information in the visible light-infrared fusion image;
Combining the brightness background fusion image and the brightness detail fusion image to obtain the brightness fusion image comprises the following steps:
And multiplying the brightness detail fusion images of all scales by corresponding amplification factors respectively to enhance the brightness detail fusion images, and setting the amplification factors based on the fusion rate, wherein each amplification factor is greater than or equal to 1.
2. The method for fusing a visible light image and an infrared image according to claim 1, wherein
The performing region division on the infrared pseudo-color image comprises the following steps:
acquiring pixel values of all pixel points of the infrared brightness image corresponding to the infrared pseudo-color image;
And determining a threshold interval in which the pixel value is located, and dividing the infrared pseudo-color image/visible light color image into an area corresponding to the threshold interval.
3. The method for fusing a visible light image and an infrared image according to claim 2, wherein
The determining the threshold interval where the pixel value is located includes:
and setting the threshold interval based on a fusion rate, wherein the fusion rate comprises the ratio of infrared image information to visible light image information in the visible light-infrared fusion image.
4. The method for fusing a visible light image and an infrared image according to claim 2, wherein
The performing region division on the infrared pseudo-color image comprises the following steps: dividing the infrared pseudo-color image into a first area and a second area, wherein the pixel value of each pixel point of the infrared brightness image corresponding to the infrared pseudo-color image in the first area is smaller than a first threshold value, and the pixel value of each pixel point of the infrared brightness image corresponding to the infrared pseudo-color image in the second area is larger than or equal to the first threshold value;
The infrared pseudo-color image and the visible light color image which are respectively fused in each region comprise: fusing the visible light color image and the infrared pseudo color image of the first area by using a first fusion strategy;
The first fusion strategy comprises the following steps: and respectively determining the fusion weight of the visible light color image and the fusion weight of the infrared pseudo color image of each pixel point, and carrying out weighted summation on the color value of the visible light color image and the color value of the infrared pseudo color image of each pixel point in the first area to obtain a first color fusion image.
5. The method for fusing a visible light image and an infrared image according to claim 4, wherein
The step of respectively determining the visible light color image fusion weight and the infrared pseudo color image fusion weight of each pixel point comprises the following steps:
the infrared pseudo color image fusion weight is the ratio of the pixel value of the infrared brightness image to the first threshold value, and the sum of the infrared pseudo color image fusion weight and the visible light color image fusion weight is 1.
6. The method for fusing a visible light image and an infrared image according to claim 4, wherein
The step of respectively determining the visible light color image fusion weight and the infrared pseudo color image fusion weight of each pixel point comprises the following steps:
the smaller the pixel value of the infrared brightness image corresponding to the infrared pseudo color image is, the smaller the fusion weight of the infrared pseudo color image is; and the sum of the fusion weight of the visible light color image and the fusion weight of the infrared pseudo color image is 1.
7. The method for fusing a visible light image and an infrared image according to claim 2, wherein
The performing region division on the infrared pseudo-color image comprises the following steps: dividing the infrared pseudo-color image into a first area and a second area, wherein the pixel value of each pixel point of the infrared brightness image corresponding to the infrared pseudo-color image in the first area is smaller than a first threshold value, and the pixel value of each pixel point of the infrared brightness image corresponding to the infrared pseudo-color image in the second area is larger than or equal to the first threshold value;
The infrared pseudo-color image and the visible light color image which are respectively fused in each region comprise: fusing the visible light color image and the infrared pseudo color image of the second area by using a second fusing strategy;
the second fusion strategy comprises: and taking the color value of the infrared pseudo color image of each pixel point in the second area as the color value of the color image after the pixel points are fused.
8. The method for fusing a visible light image and an infrared image according to any one of claims 4 to 7, wherein
The first threshold is set based on a fusion rate, wherein the fusion rate comprises the ratio of infrared image information to visible light image information in the visible light-infrared fusion image, and the larger the fusion rate is, the smaller the first threshold is.
9. The method for fusing a visible light image and an infrared image according to claim 1, wherein
The multi-scale decomposition is performed on the visible light brightness image and the infrared brightness image respectively, and the fusion of the visible light brightness image and the infrared brightness image of each scale comprises:
respectively carrying out multi-scale decomposition on the visible light brightness image and the infrared brightness image, and carrying out high-frequency extraction based on low-pass filtering on each scale to obtain a visible light brightness background image, a visible light brightness detail image of a plurality of scales, an infrared brightness background image and an infrared brightness detail image of a plurality of scales;
Fusing the visible brightness background image and the infrared brightness background image to obtain a brightness background fused image; respectively fusing the visible brightness detail image and the infrared brightness detail image of each scale to obtain a plurality of brightness detail fusion images;
And merging the brightness background fusion image and the brightness detail fusion image to obtain the brightness fusion image.
10. The method for fusing a visible light image and an infrared image according to claim 9, wherein
The fusing the visible brightness background image and the infrared brightness background image to obtain a brightness background fused image comprises the following steps:
determining the fusion weight of the visible brightness background image and the fusion weight of the infrared brightness background image;
The brightness value of the visible brightness background image and the brightness value of the infrared brightness background image are weighted and summed to obtain the brightness background fusion image;
the fusion weight of the visible light brightness background image and the fusion weight of the infrared brightness background image are set based on a fusion rate, wherein the fusion rate comprises the ratio of infrared image information to visible light image information in the visible light-infrared fusion image, and the sum of the fusion weight of the visible light brightness background image and the fusion weight of the infrared brightness background image is 1.
11. The method for fusing a visible light image and an infrared image according to claim 10, wherein
The setting of the fusion weight of the visible light brightness background image and the fusion weight of the infrared brightness background image based on the fusion rate comprises the following steps:
The larger the fusion rate is, the larger the fusion weight of the infrared brightness background image is.
12. The method for fusing a visible light image and an infrared image according to claim 9, wherein
The fusing the visible brightness detail image and the infrared brightness detail image of each scale to obtain a plurality of brightness detail fused images comprises the following steps:
respectively determining visible brightness detail image fusion weights and infrared brightness detail image fusion weights of all pixel points in all scales;
Respectively carrying out weighted summation on the brightness value of the visible light brightness detail image and the brightness value of the infrared brightness detail image of each pixel point at each scale to obtain a brightness detail fusion image of each scale;
the visible light brightness detail image fusion weight and the infrared brightness detail image fusion weight of each pixel point in each scale are set based on a weight map, and the sum of the visible light brightness detail image fusion weight and the infrared brightness detail image fusion weight is 1.
13. The method for fusing a visible light image and an infrared image according to claim 1, wherein
The setting the amplification factor based on the fusion rate includes:
the larger the fusion rate is, the larger the amplification factor is.
14. A visible light image and infrared image fusion apparatus, characterized in that the visible light and infrared image fusion apparatus comprises a processor for executing instructions to implement the visible light image and infrared image fusion method of any one of claims 1-13.
15. A computer readable storage medium storing instructions/program data executable to implement the visible light image and infrared image fusion method of any one of claims 1-13.
CN202110003609.4A 2021-01-04 2021-01-04 Visible light image and infrared image fusion method, device and readable storage medium Active CN112767291B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110003609.4A CN112767291B (en) 2021-01-04 2021-01-04 Visible light image and infrared image fusion method, device and readable storage medium

Publications (2)

Publication Number Publication Date
CN112767291A CN112767291A (en) 2021-05-07
CN112767291B true CN112767291B (en) 2024-05-28

Family

ID=75699710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110003609.4A Active CN112767291B (en) 2021-01-04 2021-01-04 Visible light image and infrared image fusion method, device and readable storage medium

Country Status (1)

Country Link
CN (1) CN112767291B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792592B (en) * 2021-08-09 2024-05-07 深圳光启空间技术有限公司 Image acquisition processing method and image acquisition processing device
CN113784026B (en) * 2021-08-30 2023-04-18 鹏城实验室 Method, apparatus, device and storage medium for calculating position information based on image
CN114255302B (en) * 2022-03-01 2022-05-13 北京瞭望神州科技有限公司 Wisdom country soil data processing all-in-one
CN115082434B (en) * 2022-07-21 2022-12-09 浙江华是科技股份有限公司 Multi-source feature-based magnetic core defect detection model training method and system
CN115239610B (en) * 2022-07-28 2024-01-26 爱芯元智半导体(上海)有限公司 Image fusion method, device, system and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069768B (en) * 2015-08-05 2017-12-29 武汉高德红外股份有限公司 A kind of visible images and infrared image fusion processing system and fusion method

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366353A (en) * 2013-05-08 2013-10-23 北京大学深圳研究生院 Infrared image and visible-light image fusion method based on saliency region segmentation
JP2017011633A (en) * 2015-06-26 2017-01-12 キヤノン株式会社 Imaging device
CN106600572A (en) * 2016-12-12 2017-04-26 长春理工大学 Adaptive low-illumination visible image and infrared image fusion method
CN106780392A (en) * 2016-12-27 2017-05-31 浙江大华技术股份有限公司 A kind of image interfusion method and device
CN110136183A (en) * 2018-02-09 2019-08-16 华为技术有限公司 A kind of method and relevant device of image procossing
CN109255774A (en) * 2018-09-28 2019-01-22 中国科学院长春光学精密机械与物理研究所 A kind of image interfusion method, device and its equipment
CN110796628A (en) * 2019-10-17 2020-02-14 浙江大华技术股份有限公司 Image fusion method and device, shooting device and storage medium
CN111539902A (en) * 2020-04-16 2020-08-14 烟台艾睿光电科技有限公司 Image processing method, system, equipment and computer readable storage medium
CN111489319A (en) * 2020-04-17 2020-08-04 电子科技大学 Infrared image enhancement method based on multi-scale bilateral filtering and visual saliency

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Xiaogong Lin; Ronghao Yang. Image Fusion Processing Method Based on Infrared and Visible Light. 2019 IEEE International Conference on Mechatronics and Automation (ICMA), 2019. Full text. *
Kong Weiwei; Lei Yingjie; Lei Yang; Ni Xueliang. Grayscale visible and infrared image fusion method based on NSCT and IHS transform domains. Systems Engineering and Electronics, 2010-07-31, 32(07). Full text. *
Yin Yunfei; Luo Xiaoqing; Zhang Zhancheng. Infrared and visible image fusion method based on gradient transfer and saliency preservation. Command Information System and Technology, 2020-04-30, Vol. 11, No. 2. Full text. *

Also Published As

Publication number Publication date
CN112767291A (en) 2021-05-07

Legal Events

Date Code Title Description

PB01 Publication

SE01 Entry into force of request for substantive examination

TA01 Transfer of patent application right

Effective date of registration: 20230828

Address after: Room 201, Building A, Integrated Circuit Design Industrial Park, No. 858, Jianshe 2nd Road, Economic and Technological Development Zone, Xiaoshan District, Hangzhou City, Zhejiang Province, 311225

Applicant after: Zhejiang Huagan Technology Co.,Ltd.

Address before: No.1187 Bin'an Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: ZHEJIANG DAHUA TECHNOLOGY Co.,Ltd.

GR01 Patent grant