CN114677316A - Real-time visible light image and infrared image multi-channel fusion method and device

Real-time visible light image and infrared image multi-channel fusion method and device

Info

Publication number: CN114677316A (application CN202210585859.8A)
Granted version: CN114677316B
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Granted; active
Inventors: 周振彬, 娄珂, 宾朝林, 秦文礼
Applicant and current assignee: Shenzhen Dingjiang Technology Co ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/20: Image enhancement or restoration by the use of local operators
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10024: Color image
    • G06T2207/10048: Infrared image
    • G06T2207/20: Special algorithmic details
    • G06T2207/20024: Filtering details
    • G06T2207/20032: Median filtering
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Abstract

The invention discloses a real-time visible light image and infrared image multi-channel fusion method and a device, which comprises the following steps: decomposing a visible light matrix corresponding to the visible light image to obtain a visible light low-frequency matrix and a visible light high-frequency matrix, and decomposing an infrared matrix corresponding to the infrared image to obtain an infrared low-frequency matrix and an infrared high-frequency matrix; fusing a visible light low-frequency matrix and an infrared low-frequency matrix according to the visible light low-frequency weight and the infrared low-frequency weight to obtain a low-frequency fusion matrix, and fusing a visible light high-frequency matrix and an infrared high-frequency matrix according to the visible light high-frequency weight and the infrared high-frequency weight to obtain a high-frequency fusion matrix; and fusing the low-frequency fusion matrix and the high-frequency fusion matrix based on a preset fusion mode to generate a fusion matrix corresponding to the visible light image and the infrared image. Therefore, the method and the device can fuse the visible light image and the infrared image, and are favorable for improving the discrimination of the main content and the background area in the image and improving the fineness of the image texture.

Description

Real-time visible light image and infrared image multi-channel fusion method and device
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for fusing a visible light image and an infrared image in a multi-channel mode in real time.
Background
In the field of image analysis, a visible light image carries a large amount of information and high pixel intensity, and can provide intuitive details for computer vision tasks; however, its image quality is easily affected by the data collection environment (such as illumination and weather conditions), so that the main content that needs to be highlighted (such as a person) is not distinguished clearly enough from the background area. An infrared image generated by infrared thermal imaging technology, by contrast, can markedly distinguish the main content from the background area according to differences in heat radiation from object surfaces; however, an infrared image is usually a gray-scale image that is difficult for human eyes to interpret, and its low contrast and blur make its image texture insufficiently fine. Therefore, it is important to improve the discrimination between the main content and the background area in an image while also improving the fineness of the image texture.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a method and an apparatus for real-time multi-channel fusion of a visible light image and an infrared image, which can fuse the visible light image and the infrared image, thereby helping to improve both the discrimination between the main content and the background area in the image and the fineness of the image texture.
In order to solve the technical problem, a first aspect of the present invention discloses a method for real-time multi-channel fusion of a visible light image and an infrared image, wherein the method comprises:
decomposing a visible light matrix corresponding to a visible light image for image fusion to obtain a visible light low-frequency matrix corresponding to the visible light image and a visible light high-frequency matrix corresponding to the visible light image, and decomposing an infrared matrix corresponding to an infrared image for image fusion to obtain an infrared low-frequency matrix corresponding to the infrared image and an infrared high-frequency matrix corresponding to the infrared image, wherein each element value in the visible light matrix corresponds to the color quantized value of one of the pixel points in the visible light image, and each element value in the infrared matrix corresponds to the temperature quantized value of one of the pixel points in the infrared image;
fusing the visible light low-frequency matrix and the infrared low-frequency matrix according to the determined visible light low-frequency weight corresponding to the visible light image and the determined infrared low-frequency weight corresponding to the infrared image to obtain a low-frequency fusion matrix, and fusing the visible light high-frequency matrix and the infrared high-frequency matrix according to the determined visible light high-frequency weight corresponding to the visible light image and the determined infrared high-frequency weight corresponding to the infrared image to obtain a high-frequency fusion matrix;
and fusing the low-frequency fusion matrix and the high-frequency fusion matrix based on a preset fusion mode to generate a fusion matrix corresponding to the visible light image and the infrared image, wherein the fusion matrix is used for generating a fusion image of the visible light image and the infrared image.
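The weighted fusion and recombination steps above can be sketched in Python with NumPy. The weight values, the random placeholder matrices, and the additive interpretation of the "preset fusion mode" are illustrative assumptions, not the patented choices:

```python
import numpy as np

# Assumed inputs: the four decomposition matrices produced by the first step.
rng = np.random.default_rng(0)
vis_low, vis_high = rng.random((48, 48)), rng.random((48, 48))
ir_low, ir_high = rng.random((48, 48)), rng.random((48, 48))

# Weight values are assumptions for illustration; the patent determines them
# separately for the low-frequency and high-frequency components.
w_vis_low, w_ir_low = 0.6, 0.4
w_vis_high, w_ir_high = 0.5, 0.5

low_fused = w_vis_low * vis_low + w_ir_low * ir_low        # low-frequency fusion
high_fused = w_vis_high * vis_high + w_ir_high * ir_high   # high-frequency fusion

# The "preset fusion mode" is not specified; simple addition is assumed here.
fused = low_fused + high_fused
```

With unit weights on the visible light side, the fused result reduces to the visible light matrix itself, which is a quick sanity check on the arithmetic.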
As an optional implementation manner, in the first aspect of the present invention, decomposing the visible light matrix corresponding to the visible light image for image fusion to obtain the visible light low-frequency matrix corresponding to the visible light image and the visible light high-frequency matrix corresponding to the visible light image includes:
performing convolution operation processing on a visible light matrix corresponding to a visible light image for image fusion based on a predetermined mean filtering operator for mean convolution operation and a neighborhood window size corresponding to the mean convolution operation to obtain a visible light low-frequency matrix corresponding to the visible light image:
imgC_B=conv(avg,imgC),
wherein imgC_B is used to represent the visible light low-frequency matrix, avg is used to represent the mean filtering operator, imgC is used to represent the visible light matrix, and conv(avg, imgC) is used to represent the convolution of avg and imgC;
determining a visible light high-frequency matrix corresponding to the visible light image based on the visible light matrix and the visible light low-frequency matrix:
imgC_F=imgC-imgC_B,
wherein imgC_F is used to represent the visible light high-frequency matrix.
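A minimal NumPy/SciPy sketch of this mean-filter decomposition, assuming a 5×5 neighborhood window (the patent leaves the window size as a predetermined parameter):

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(1)
imgC = rng.random((32, 32))            # visible light matrix (normalized values assumed)
k = 5                                  # neighborhood window size (an assumed value)
avg = np.full((k, k), 1.0 / (k * k))   # mean filtering operator

imgC_B = convolve(imgC, avg, mode='nearest')  # imgC_B = conv(avg, imgC): low frequency
imgC_F = imgC - imgC_B                        # imgC_F = imgC - imgC_B: high frequency
```

By construction the two parts recombine exactly into the original matrix, which is what makes the later low/high-frequency fusion lossless at the decomposition stage.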
As an optional implementation manner, in the first aspect of the present invention, before the fusing the visible light high-frequency matrix and the infrared high-frequency matrix according to the determined visible light high-frequency weight corresponding to the visible light image and the determined infrared high-frequency weight corresponding to the infrared image to obtain a high-frequency fused matrix, the method further includes:
determining a visible light high-frequency weight corresponding to the visible light image and an infrared high-frequency weight corresponding to the infrared image;
wherein the determining the visible light high-frequency weight corresponding to the visible light image and the infrared high-frequency weight corresponding to the infrared image comprises:
calculating a visible light norm matrix corresponding to the visible light image according to a visible light Gaussian filter matrix obtained after the visible light matrix is subjected to Gaussian filtering and a visible light median filter matrix obtained after the visible light matrix is subjected to median filtering, and calculating an infrared norm matrix corresponding to the infrared image according to an infrared Gaussian filter matrix obtained after the infrared matrix is subjected to Gaussian filtering and an infrared median filter matrix obtained after the infrared matrix is subjected to median filtering:
imgC_D(i, j) = ||imgC_G(i, j) - imgC_M(i, j)||_p,
imgT_D(i, j) = ||imgT_G(i, j) - imgT_M(i, j)||_p,
wherein (i, j) is used for representing the coordinate values corresponding to each pixel point in the visible light image and the infrared image, ||·||_p is used for representing the p-norm, imgC_D(i, j) is used for representing the visible light norm matrix, imgC_G(i, j) is used for representing the visible light Gaussian filter matrix, imgC_M(i, j) is used for representing the visible light median filter matrix, imgT_D(i, j) is used for representing the infrared norm matrix, imgT_G(i, j) is used for representing the infrared Gaussian filter matrix, and imgT_M(i, j) is used for representing the infrared median filter matrix;
according to the visible light norm matrix and the infrared norm matrix, calculating a visible light high-frequency weight corresponding to the visible light image and an infrared high-frequency weight corresponding to the infrared image:
[The two weight equations are reproduced only as images in the source; they compute imgC_W(i, j) and imgT_W(i, j) from the norm matrices imgC_D(i, j) and imgT_D(i, j) and the preset weights a and b.]
wherein imgC_W(i, j) is used for representing the visible light high-frequency weight, imgT_W(i, j) is used for representing the infrared high-frequency weight, a and b are preset weight values, and a + b = 1.
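Because the weight equations themselves survive only as image placeholders in the source, the sketch below assumes a normalized form consistent with the stated constraint a + b = 1; the filter parameters (Gaussian sigma, median window size) are also assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def high_frequency_weights(imgC, imgT, a=0.5, b=0.5):
    # Per-pixel difference between Gaussian- and median-filtered versions.
    # Sigma and window size are assumed values; for a per-pixel scalar
    # difference the p-norm reduces to an absolute value.
    imgC_D = np.abs(gaussian_filter(imgC, sigma=1.0) - median_filter(imgC, size=3))
    imgT_D = np.abs(gaussian_filter(imgT, sigma=1.0) - median_filter(imgT, size=3))
    # ASSUMPTION: the exact weight equations are only images in the source;
    # this normalized form (weights sum to 1 per pixel) is illustrative,
    # not the patented formula.
    denom = a * imgC_D + b * imgT_D + 1e-12
    imgC_W = a * imgC_D / denom
    imgT_W = 1.0 - imgC_W
    return imgC_W, imgT_W

rng = np.random.default_rng(2)
imgC_W, imgT_W = high_frequency_weights(rng.random((16, 16)), rng.random((16, 16)))
```

The Gaussian-minus-median difference acts as a simple per-pixel saliency measure: pixels where the two filters disagree strongly carry more high-frequency detail and receive more weight.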
As an optional implementation manner, in the first aspect of the present invention, before the fusing the visible light low-frequency matrix and the infrared low-frequency matrix according to the determined visible light low-frequency weight corresponding to the visible light image and the determined infrared low-frequency weight corresponding to the infrared image to obtain a low-frequency fused matrix, the method further includes:
determining a color matrix corresponding to the infrared image;
determining a color infrared low-frequency matrix imgT_B·imgTC corresponding to the infrared image according to the color matrix and the infrared low-frequency matrix, and determining a color infrared high-frequency matrix imgT_F·imgTC corresponding to the infrared image according to the color matrix and the infrared high-frequency matrix, wherein imgT_B is used for representing the infrared low-frequency matrix, imgT_F is used for representing the infrared high-frequency matrix, and imgTC is used for representing the color matrix;
and updating the color infrared low-frequency matrix into the infrared low-frequency matrix, and updating the color infrared high-frequency matrix into the infrared high-frequency matrix.
As an optional implementation manner, in the first aspect of the present invention, the determining a color matrix corresponding to the infrared image includes:
calculating a channel component of the infrared image in each color channel of the visible light image;
determining a color matrix corresponding to the infrared image according to the channel component of the infrared image in each color channel;
when the visible light image is a three-channel image based on an RGB space, the calculating a channel component of the infrared image in each color channel of the visible light image includes:
determining a first flag value l1, a second flag value l2 and a third flag value l3 corresponding to the infrared image according to the temperature distribution information represented by the infrared matrix, wherein l1 < l2 < l3;
calculating the channel component of the infrared image in each color channel of the visible light image according to the infrared matrix, the first flag value l1, the second flag value l2 and the third flag value l3:
[The three piecewise channel-component equations are reproduced only as images in the source; they map imgT(i, j) to imgTC_r(i, j), imgTC_g(i, j) and imgTC_b(i, j) using the flag values l1, l2 and l3.]
wherein imgTC_r(i, j), imgTC_g(i, j) and imgTC_b(i, j) are respectively used to represent the channel components of the infrared image in each of the color channels, and imgT(i, j) is used to represent the infrared matrix.
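The piecewise channel equations are likewise present only as images in the source; the sketch below substitutes simple linear ramps between the flag values purely for illustration, not as the patented mapping:

```python
import numpy as np

def color_matrix(imgT, l1, l2, l3):
    # ASSUMPTION: the actual piecewise equations are only images in the
    # source. These linear ramps between the flag values l1 < l2 < l3 are
    # an illustrative pseudo-color stand-in.
    ramp = lambda x, lo, hi: np.clip((x - lo) / (hi - lo), 0.0, 1.0)
    imgTC_r = ramp(imgT, l2, l3)         # red rises in the hottest band
    imgTC_g = ramp(imgT, l1, l2)         # green covers the middle band
    imgTC_b = 1.0 - ramp(imgT, l1, l3)   # blue dominates the coldest band
    return np.stack([imgTC_r, imgTC_g, imgTC_b], axis=-1)

rng = np.random.default_rng(3)
imgTC = color_matrix(rng.random((16, 16)), 0.25, 0.5, 0.75)
```

Any mapping of this shape turns the single-channel temperature matrix into a three-channel color matrix that can be combined with the RGB visible light matrices in the fusion steps.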
As an optional implementation manner, in the first aspect of the present invention, before decomposing the visible light matrix corresponding to the visible light image for image fusion to obtain the visible light decomposition matrix corresponding to the visible light image, the method further includes:
normalizing each element value in a visible light matrix corresponding to a visible light image for image fusion to obtain a normalized visible light matrix corresponding to the visible light image, and updating the normalized visible light matrix into the visible light matrix;
before decomposing the infrared matrix corresponding to the infrared image for image fusion to obtain the infrared decomposition matrix corresponding to the infrared image, the method further comprises:
and carrying out normalization processing on each element value in an infrared matrix corresponding to the infrared image for image fusion to obtain a normalized infrared matrix corresponding to the infrared image, and updating the normalized infrared matrix into the infrared matrix.
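A sketch of this normalization step, assuming min-max normalization into [0, 1] (the patent does not name the normalization scheme, so this common choice is an assumption):

```python
import numpy as np

def normalize(mat):
    # ASSUMPTION: min-max normalization; the source only says the element
    # values are normalized, not how.
    lo, hi = float(mat.min()), float(mat.max())
    return (mat - lo) / (hi - lo) if hi > lo else np.zeros_like(mat)

imgT = np.array([[20.0, 30.0], [25.0, 40.0]])  # raw temperature quantized values
imgT_norm = normalize(imgT)
```

Normalizing both matrices to a common range before decomposition keeps the visible light color quantized values and the infrared temperature quantized values on comparable scales for the weighted fusion.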
As an optional implementation manner, in the first aspect of the present invention, before performing normalization processing on each element value in the visible light matrix corresponding to the visible light image for image fusion to obtain a normalized visible light matrix corresponding to the visible light image, the method further includes:
judging whether the visible light image is a three-channel image based on an RGB space or not according to a visible light matrix corresponding to the visible light image for image fusion;
and when the judgment result is negative, converting the visible light matrix into a three-channel visible light matrix corresponding to the RGB space based on a preset image conversion method, and updating the three-channel visible light matrix into the visible light matrix.
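A sketch of the channel check and conversion, assuming the simplest stand-in for the unspecified "preset image conversion method" (replicating a single channel across R, G and B):

```python
import numpy as np

def ensure_rgb(imgC):
    # ASSUMPTION: the patent's "preset image conversion method" is not
    # specified; channel replication is an illustrative stand-in.
    if imgC.ndim == 3 and imgC.shape[-1] == 3:
        return imgC  # already a three-channel RGB-space matrix
    return np.stack([imgC] * 3, axis=-1)

gray = np.random.default_rng(4).random((8, 8))
rgb = ensure_rgb(gray)
```

Running the check up front means every later step can assume a three-channel visible light matrix, regardless of the original capture format.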
The invention discloses a real-time visible light image and infrared image multi-channel fusion device in a second aspect, which comprises:
the decomposition module is used for decomposing a visible light matrix corresponding to a visible light image for image fusion to obtain a visible light low-frequency matrix corresponding to the visible light image and a visible light high-frequency matrix corresponding to the visible light image, and decomposing an infrared matrix corresponding to an infrared image for image fusion to obtain an infrared low-frequency matrix corresponding to the infrared image and an infrared high-frequency matrix corresponding to the infrared image, wherein each element value in the visible light matrix corresponds to a color quantized value of one pixel point in the visible light image, and each element value in the infrared matrix corresponds to a temperature quantized value of one pixel point in the infrared image;
the fusion module is used for fusing the visible light low-frequency matrix and the infrared low-frequency matrix according to the determined visible light low-frequency weight corresponding to the visible light image and the determined infrared low-frequency weight corresponding to the infrared image to obtain a low-frequency fusion matrix, and fusing the visible light high-frequency matrix and the infrared high-frequency matrix according to the determined visible light high-frequency weight corresponding to the visible light image and the determined infrared high-frequency weight corresponding to the infrared image to obtain a high-frequency fusion matrix;
the generating module is used for fusing the low-frequency fusion matrix and the high-frequency fusion matrix based on a preset fusion mode to generate a fusion matrix corresponding to the visible light image and the infrared image, and the fusion matrix is used for generating a fusion image of the visible light image and the infrared image.
As an optional implementation manner, in the second aspect of the present invention, a specific manner of decomposing the visible light matrix corresponding to the visible light image for image fusion by the decomposition module to obtain the visible light low-frequency matrix corresponding to the visible light image and the visible light high-frequency matrix corresponding to the visible light image includes:
performing convolution operation processing on a visible light matrix corresponding to a visible light image for image fusion based on a predetermined mean filtering operator for mean convolution operation and a neighborhood window size corresponding to the mean convolution operation to obtain a visible light low-frequency matrix corresponding to the visible light image:
imgC_B=conv(avg,imgC),
wherein imgC_B is used to represent the visible light low-frequency matrix, avg is used to represent the mean filtering operator, imgC is used to represent the visible light matrix, and conv(avg, imgC) is used to represent the convolution of avg and imgC;
determining a visible light high-frequency matrix corresponding to the visible light image based on the visible light matrix and the visible light low-frequency matrix:
imgC_F=imgC-imgC_B,
wherein imgC_F is used to represent the visible light high-frequency matrix.
As an alternative embodiment, in the second aspect of the present invention, the apparatus further comprises:
the first determining module is used for determining the visible light high-frequency weight corresponding to the visible light image and the infrared high-frequency weight corresponding to the infrared image before the fusing module fuses the visible light high-frequency matrix and the infrared high-frequency matrix according to the determined visible light high-frequency weight corresponding to the visible light image and the determined infrared high-frequency weight corresponding to the infrared image to obtain a high-frequency fused matrix;
the specific way of determining the visible light high-frequency weight corresponding to the visible light image and the infrared high-frequency weight corresponding to the infrared image by the first determining module includes:
calculating a visible light norm matrix corresponding to the visible light image according to a visible light Gaussian filter matrix obtained after the visible light matrix is subjected to Gaussian filtering and a visible light median filter matrix obtained after the visible light matrix is subjected to median filtering, and calculating an infrared norm matrix corresponding to the infrared image according to an infrared Gaussian filter matrix obtained after the infrared matrix is subjected to Gaussian filtering and an infrared median filter matrix obtained after the infrared matrix is subjected to median filtering:
imgC_D(i, j) = ||imgC_G(i, j) - imgC_M(i, j)||_p,
imgT_D(i, j) = ||imgT_G(i, j) - imgT_M(i, j)||_p,
wherein (i, j) is used for representing the coordinate values corresponding to each pixel point in the visible light image and the infrared image, ||·||_p is used for representing the p-norm, imgC_D(i, j) is used for representing the visible light norm matrix, imgC_G(i, j) is used for representing the visible light Gaussian filter matrix, imgC_M(i, j) is used for representing the visible light median filter matrix, imgT_D(i, j) is used for representing the infrared norm matrix, imgT_G(i, j) is used for representing the infrared Gaussian filter matrix, and imgT_M(i, j) is used for representing the infrared median filter matrix;
according to the visible light norm matrix and the infrared norm matrix, calculating a visible light high-frequency weight corresponding to the visible light image and an infrared high-frequency weight corresponding to the infrared image:
[The two weight equations are reproduced only as images in the source; they compute imgC_W(i, j) and imgT_W(i, j) from the norm matrices imgC_D(i, j) and imgT_D(i, j) and the preset weights a and b.]
wherein imgC_W(i, j) is used for representing the visible light high-frequency weight, imgT_W(i, j) is used for representing the infrared high-frequency weight, a and b are preset weight values, and a + b = 1.
As an alternative embodiment, in the second aspect of the present invention, the apparatus further comprises:
the second determining module is used for determining a color matrix corresponding to the infrared image before the fusion module fuses the visible light low-frequency matrix and the infrared low-frequency matrix according to the determined visible light low-frequency weight corresponding to the visible light image and the determined infrared low-frequency weight corresponding to the infrared image to obtain a low-frequency fusion matrix; and for determining a color infrared low-frequency matrix imgT_B·imgTC corresponding to the infrared image according to the color matrix and the infrared low-frequency matrix, and determining a color infrared high-frequency matrix imgT_F·imgTC corresponding to the infrared image according to the color matrix and the infrared high-frequency matrix, wherein imgT_B is used for representing the infrared low-frequency matrix, imgT_F is used for representing the infrared high-frequency matrix, and imgTC is used for representing the color matrix;
and the updating module is used for updating the color infrared low-frequency matrix into the infrared low-frequency matrix and updating the color infrared high-frequency matrix into the infrared high-frequency matrix.
As an optional implementation manner, in the second aspect of the present invention, a specific manner of determining the color matrix corresponding to the infrared image by the second determining module includes:
calculating a channel component of the infrared image in each color channel of the visible light image;
determining a color matrix corresponding to the infrared image according to the channel component of the infrared image in each color channel;
when the visible light image is a three-channel image based on an RGB space, the specific way of calculating the channel component of the infrared image in each color channel of the visible light image by the second determining module includes:
determining a first flag value l1, a second flag value l2 and a third flag value l3 corresponding to the infrared image according to the temperature distribution information represented by the infrared matrix, wherein l1 < l2 < l3;
calculating the channel component of the infrared image in each color channel of the visible light image according to the infrared matrix, the first flag value l1, the second flag value l2 and the third flag value l3:
[The three piecewise channel-component equations are reproduced only as images in the source; they map imgT(i, j) to imgTC_r(i, j), imgTC_g(i, j) and imgTC_b(i, j) using the flag values l1, l2 and l3.]
wherein imgTC_r(i, j), imgTC_g(i, j) and imgTC_b(i, j) are respectively used to represent the channel components of the infrared image in each of the color channels, and imgT(i, j) is used to represent the infrared matrix.
As an alternative embodiment, in the second aspect of the present invention, the apparatus further comprises:
the normalization module is used for normalizing each element value in the visible light matrix corresponding to the visible light image for image fusion to obtain a normalized visible light matrix corresponding to the visible light image before the decomposition module decomposes the visible light matrix to obtain the visible light decomposition matrix corresponding to the visible light image, and is further used for normalizing each element value in the infrared matrix corresponding to the infrared image for image fusion to obtain a normalized infrared matrix corresponding to the infrared image before the decomposition module decomposes the infrared matrix to obtain the infrared decomposition matrix corresponding to the infrared image;
the updating module is further configured to update the normalized visible light matrix to the visible light matrix and update the normalized infrared matrix to the infrared matrix.
As an optional embodiment, in the second aspect of the present invention, the apparatus further comprises:
the judging module is used for judging whether the visible light image is a three-channel image based on an RGB space or not according to the visible light matrix corresponding to the visible light image for image fusion before the normalization module performs normalization processing on each element value in the visible light matrix corresponding to the visible light image for image fusion to obtain the normalized visible light matrix corresponding to the visible light image;
the conversion module is used for converting the visible light matrix into a three-channel visible light matrix corresponding to the RGB space based on a preset image conversion method when the judgment module judges that the visible light image is not the three-channel image;
the updating module is further configured to update the three-channel visible light matrix to the visible light matrix.
The third aspect of the invention discloses another multi-channel fusion device for visible light images and infrared images, which comprises:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the real-time visible light image and infrared image multi-channel fusion method disclosed by the first aspect of the invention.
The fourth aspect of the present invention discloses a computer storage medium storing computer instructions which, when invoked, are used to execute the real-time visible light image and infrared image multi-channel fusion method disclosed in the first aspect of the present invention.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, a visible light matrix corresponding to a visible light image for image fusion is decomposed to obtain a visible light low-frequency matrix corresponding to the visible light image and a visible light high-frequency matrix corresponding to the visible light image, and an infrared matrix corresponding to an infrared image for image fusion is decomposed to obtain an infrared low-frequency matrix corresponding to the infrared image and an infrared high-frequency matrix corresponding to the infrared image, wherein each element value in the visible light matrix corresponds to a color quantization value of one pixel point in the visible light image, and each element value in the infrared matrix corresponds to a temperature quantization value of one pixel point in the infrared image; fusing a visible light low-frequency matrix and an infrared low-frequency matrix according to the visible light low-frequency weight corresponding to the determined visible light image and the infrared low-frequency weight corresponding to the infrared image to obtain a low-frequency fusion matrix, and fusing a visible light high-frequency matrix and an infrared high-frequency matrix according to the visible light high-frequency weight corresponding to the determined visible light image and the infrared high-frequency weight corresponding to the infrared image to obtain a high-frequency fusion matrix; and fusing the low-frequency fusion matrix and the high-frequency fusion matrix based on a preset fusion mode to generate a fusion matrix corresponding to the visible light image and the infrared image, wherein the fusion matrix is used for generating a fusion image of the visible light image and the infrared image. 
Therefore, by implementing the invention, the visible light image and the infrared image can be fused, so that the generated fused image inherits the advantages of the visible light image (large information content, high pixel intensity and intuitive image details) as well as those of the infrared image (stable image quality and high discrimination between the main content and the background area), while reducing the influence of the data-acquisition environment on image quality. This helps to improve the discrimination between the main content and the background area in the image and, at the same time, the fineness of the image texture, thereby improving the layering of the image and reducing the difficulty of identifying the targets that need to be highlighted. In addition, the low-frequency components of the visible light image and the infrared image are fused with each other, and the high-frequency components are fused with each other, so that the frequency bands used for fusion are closer, the transition between the visible light image and the infrared image is smoother, and the image fusion quality is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic flowchart of a method for real-time multi-channel fusion of a visible light image and an infrared image according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another real-time visible light image and infrared image multi-channel fusion method disclosed in the embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a real-time visible light image and infrared image multi-channel fusion device according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of another real-time visible light image and infrared image multi-channel fusion device disclosed in the embodiment of the present invention;
fig. 5 is a schematic structural diagram of another real-time visible light image and infrared image multi-channel fusion device disclosed in the embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, apparatus, or article that comprises a list of steps or elements is not limited to only those steps or elements listed, but may include other steps or elements not listed or inherent to such process, method, apparatus, or article.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The invention discloses a real-time visible light image and infrared image multi-channel fusion method and device. The generated fused image simultaneously has the advantages of the visible light image (large information amount, high pixel intensity and intuitive image details) and the advantages of the infrared image (stable image quality and a higher degree of distinction between the image's main content and background area), and the influence of the environmental conditions of image data acquisition on image quality is reduced. This helps to improve the distinction between the main content and the background area in the image while improving the fineness of image textures, thereby improving the sense of layering in the image and reducing the difficulty of identifying targets that need to be highlighted. In addition, the low-frequency components of the visible light image and the infrared image are respectively fused, and the high-frequency components of the visible light image and the infrared image are fused, so that the image frequency bands of the two images used for fusion are closer; the transition between the visible light image and the infrared image is therefore smoother, and the image fusion quality is improved. The following are detailed descriptions.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of a real-time visible light image and infrared image multi-channel fusion method according to an embodiment of the present invention. The method for fusing a visible light image and an infrared image in real time described in fig. 1 may be applied to a fusion process of a visible light image and an infrared image, including a fusion process of a visible light image and an infrared image that are acquired at the same time and in the same field of view in real time, for example, a real-time fusion process of an infrared detection video and a visible light detection video, which is not limited in the embodiment of the present invention. As shown in fig. 1, the method for multi-channel fusion of a visible light image and an infrared image in real time may include the following operations:
101. and decomposing the visible light matrix corresponding to the visible light image for image fusion to obtain a visible light low-frequency matrix corresponding to the visible light image and a visible light high-frequency matrix corresponding to the visible light image.
In the embodiment of the invention, each element value in the visible light matrix corresponds to the color quantization value of one pixel point in the visible light image. Specifically, when the image size of the visible light image is i × j and the number of the corresponding color channels is k, the matrix form of the visible light matrix is i × j × k, and each element value in the visible light matrix corresponds to a color quantization value of one pixel point in the visible light image in one color channel. For example, when the visible light image is a three-channel image based on RGB (Red-Green-Blue) space, that is, k = 3, each element value in the visible light matrix corresponds to a color quantization value of one pixel point in the visible light image in the R channel, G channel, or B channel of the RGB space. Optionally, each element value in the visible light matrix is a normalized color quantization value obtained after the color quantization value corresponding to one color channel of the pixel point corresponding to the element value is normalized.
In the embodiment of the present invention, optionally, the visible light matrix may be decomposed based on a low-pass filter or a high-pass filter, which is not limited in the embodiment of the present invention.
As an optional implementation, decomposing a visible light matrix corresponding to a visible light image for image fusion to obtain a visible light low-frequency matrix corresponding to the visible light image and a visible light high-frequency matrix corresponding to the visible light image may include:
performing convolution operation processing on a visible light matrix corresponding to the visible light image for image fusion based on a predetermined mean filtering operator for mean convolution operation and a neighborhood window size corresponding to the mean convolution operation to obtain a visible light low-frequency matrix corresponding to the visible light image:
imgC_B=conv(avg,imgC),
where imgC_B is used to represent the visible light low-frequency matrix, avg is used to represent the mean filter operator, imgC is used to represent the visible light matrix, and conv(avg, imgC) is used to represent the convolution of avg and imgC;
determining a visible light high-frequency matrix corresponding to the visible light image based on the visible light matrix and the visible light low-frequency matrix:
imgC_F=imgC-imgC_B,
wherein imgC _ F is used to represent the visible light high frequency matrix.
Therefore, the optional implementation method can decompose the visible light matrix into the visible light low-frequency matrix and the visible light high-frequency matrix based on the mean value filtering algorithm, so that the accuracy and the reliability of decomposition of the visible light matrix are improved, and the fusion quality of the visible light image and the infrared image is favorably improved.
In this optional embodiment, the size of the neighborhood window represents the number of pixels in the neighborhood window selected during the convolution operation. Optionally, the size of the neighborhood window is within a first neighborhood window size range (e.g., [ma, mb]) predetermined according to white noise information of the visible light image, and preferably, the size of the neighborhood window is the window size within the first neighborhood window size range that yields the optimal denoising result for the visible light image. This improves the noise reduction effect in the visible light image decomposition, thereby improving the image fusion quality.
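The mean-filter decomposition described above can be sketched in a few lines of numpy/scipy. The function name `decompose` and the window size of 5 are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def decompose(img_c, win=5):
    # imgC_B = conv(avg, imgC): mean filtering over a win x win neighborhood window
    img_c_b = uniform_filter(img_c.astype(np.float64), size=win)
    # imgC_F = imgC - imgC_B: the high-frequency residual
    img_c_f = img_c - img_c_b
    return img_c_b, img_c_f

img = np.random.rand(16, 16)  # stand-in for a normalized visible light matrix
low, high = decompose(img)
```

Because the high-frequency part is defined as the residual, `low + high` reconstructs the input, which is what keeps the later recombination step in the method lossless in structure.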
102. And decomposing the infrared matrix corresponding to the infrared image for image fusion to obtain an infrared low-frequency matrix corresponding to the infrared image and an infrared high-frequency matrix corresponding to the infrared image.
In the embodiment of the invention, each element value in the infrared matrix corresponds to the temperature quantization value of one pixel point in the infrared image. Specifically, when the image size of the infrared image is i × j, the matrix form of the infrared matrix is i × j. Optionally, each element value in the infrared matrix is a normalized temperature quantization value obtained after the temperature quantization value of the pixel point corresponding to the element value is normalized.
As an optional implementation manner, decomposing the infrared matrix corresponding to the infrared image used for image fusion to obtain an infrared low-frequency matrix corresponding to the infrared image and an infrared high-frequency matrix corresponding to the infrared image may include:
performing convolution operation processing on an infrared matrix corresponding to the infrared image for image fusion based on a predetermined target mean filtering operator for mean convolution operation and a target neighborhood window size corresponding to the mean convolution operation to obtain an infrared low-frequency matrix corresponding to the infrared image:
imgT_B=conv(avg′,imgT),
wherein imgT_B is used to represent the infrared low-frequency matrix, avg′ is used to represent the target mean filter operator, imgT is used to represent the infrared matrix, and conv(avg′, imgT) is used to represent the convolution of avg′ and imgT;
determining an infrared high-frequency matrix corresponding to the infrared image based on the infrared matrix and the infrared low-frequency matrix:
imgT_F=imgT-imgT_B,
wherein imgT _ F is used to represent the infrared high frequency matrix.
Therefore, the optional implementation method can decompose the infrared matrix into the infrared low-frequency matrix and the infrared high-frequency matrix based on the mean value filtering algorithm, and improves the accuracy and reliability of infrared matrix decomposition, thereby being beneficial to improving the fusion quality of the visible light image and the infrared image.
In this optional embodiment, optionally, the size of the target neighborhood window is within a second neighborhood window size range (e.g., [ma′, mb′]) predetermined according to white noise information of the infrared image, and preferably, the size of the target neighborhood window is the window size within the second neighborhood window size range that yields the optimal denoising result for the infrared image. Further optionally, the size of the neighborhood window used when performing the convolution operation on the visible light matrix is equal to the size of the target neighborhood window used when performing the convolution operation on the infrared matrix. It should be noted that "size of the neighborhood window" and "size of the target neighborhood window" are only used to distinguish the parameters used when performing the convolution operation on the visible light matrix and on the infrared matrix. In this way, the noise reduction effect in the infrared image decomposition can be improved, thereby improving the image fusion quality.
103. And fusing the visible light low-frequency matrix and the infrared low-frequency matrix according to the determined visible light low-frequency weight corresponding to the visible light image and the determined infrared low-frequency weight corresponding to the infrared image to obtain a low-frequency fusion matrix.
In this embodiment of the present invention, optionally, the visible light low-frequency weight and the infrared low-frequency weight may be preset (for example, both the visible light low-frequency weight and the infrared low-frequency weight are 0.5), and may also be calculated based on the visible light matrix and the infrared matrix, which is not limited in this embodiment of the present invention.
As an optional implementation manner, fusing the visible light low-frequency matrix and the infrared low-frequency matrix according to the determined visible light low-frequency weight corresponding to the visible light image and the determined infrared low-frequency weight corresponding to the infrared image to obtain a low-frequency fusion matrix, which may include:
imgO_B=m1*imgC_B+m2*imgT_B,
wherein imgO_B is used to represent the low-frequency fusion matrix, m1 is used to represent the visible light low-frequency weight, m2 is used to represent the infrared low-frequency weight, imgC_B is used to represent the visible light low-frequency matrix, and imgT_B is used to represent the infrared low-frequency matrix.
In this alternative embodiment, preferably, m1=m2=0.5。
Therefore, the implementation of the optional implementation mode can improve the fusion accuracy and reliability of the low-frequency fusion matrix and improve the image fusion quality.
104. And fusing the visible light high-frequency matrix and the infrared high-frequency matrix according to the determined visible light high-frequency weight corresponding to the visible light image and the determined infrared high-frequency weight corresponding to the infrared image to obtain a high-frequency fusion matrix.
In the embodiment of the present invention, optionally, the visible light high-frequency weight and the infrared high-frequency weight may be preset, or may be calculated based on the visible light matrix and the infrared matrix, which is not limited in the embodiment of the present invention.
As an optional implementation manner, fusing the visible light high-frequency matrix and the infrared high-frequency matrix according to the determined visible light high-frequency weight corresponding to the visible light image and the determined infrared high-frequency weight corresponding to the infrared image to obtain a high-frequency fusion matrix, which may include:
imgO_F=n1*imgC_F+n2*imgT_F,
where imgO_F is used to represent the high-frequency fusion matrix, n1 is used to represent the visible light high-frequency weight, n2 is used to represent the infrared high-frequency weight, imgC_F is used to represent the visible light high-frequency matrix, and imgT_F is used to represent the infrared high-frequency matrix.
Therefore, the implementation of the optional implementation mode can improve the fusion accuracy and reliability of the high-frequency fusion matrix and improve the image fusion quality.
105. And fusing the low-frequency fusion matrix and the high-frequency fusion matrix based on a preset fusion mode to generate a fusion matrix corresponding to the visible light image and the infrared image, wherein the fusion matrix is used for generating a fusion image of the visible light image and the infrared image.
In the embodiment of the present invention, optionally, the fused image may be applied to a computer vision task, for example: the images correspond to target detection, recognition and tracking in the environment.
As an optional implementation manner, fusing the low-frequency fusion matrix and the high-frequency fusion matrix based on a preset fusion manner to generate a fusion matrix corresponding to the visible light image and the infrared image, which may include:
imgO=imgO_B+imgO_F,
wherein imgO is used to represent the fusion matrix.
Therefore, the implementation of the optional implementation mode can improve the accuracy and reliability of image fusion, and further improve the image fusion quality.
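Steps 103 to 105 reduce to element-wise weighted sums. A minimal numpy sketch follows; the function name `fuse` is an assumption, and the default weights follow the preferred embodiment m1 = m2 = 0.5 (and, by analogy, equal high-frequency weights):

```python
import numpy as np

def fuse(imgC_B, imgT_B, imgC_F, imgT_F, m1=0.5, m2=0.5, n1=0.5, n2=0.5):
    imgO_B = m1 * imgC_B + m2 * imgT_B  # step 103: low-frequency fusion matrix
    imgO_F = n1 * imgC_F + n2 * imgT_F  # step 104: high-frequency fusion matrix
    return imgO_B + imgO_F              # step 105: imgO = imgO_B + imgO_F

# toy component matrices with constant values, so the result is easy to check
c_b, t_b = np.full((4, 4), 0.2), np.full((4, 4), 0.6)
c_f, t_f = np.full((4, 4), 0.1), np.full((4, 4), 0.3)
imgO = fuse(c_b, t_b, c_f, t_f)
# → every element of imgO is 0.5*0.2 + 0.5*0.6 + 0.5*0.1 + 0.5*0.3 = 0.6
```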
Therefore, by fusing the visible light image and the infrared image, the embodiment of the invention gives the generated fused image the advantages of the visible light image (large information amount, high pixel intensity and intuitive image details) together with the advantages of the infrared image (stable image quality and a higher degree of distinction between the image's main content and background area), and reduces the influence of the environmental conditions of image data collection on image quality. This helps to improve the distinction between the main content and the background area in the image while improving the fineness of image textures, thereby improving the sense of layering in the image and reducing the difficulty of identifying targets that need to be highlighted. In addition, by respectively fusing the low-frequency components of the visible light image and the infrared image and fusing the high-frequency components of the visible light image and the infrared image, the frequency bands of the two images used for fusion are closer; the transition between the visible light image and the infrared image is therefore smoother, and the image fusion quality is improved.
It should be noted that, in other embodiments, the execution order of step 101 and step 102 has no precedence relationship, and the execution order of step 103 and step 104 also has no precedence relationship.
In an optional embodiment, before fusing the visible light low-frequency matrix and the infrared low-frequency matrix according to the determined visible light low-frequency weight corresponding to the visible light image and the determined infrared low-frequency weight corresponding to the infrared image to obtain the low-frequency fusion matrix, the method may further include:
determining a color matrix corresponding to the infrared image;
determining a color infrared low-frequency matrix imgT_B*imgTC corresponding to the infrared image according to the color matrix and the infrared low-frequency matrix, and determining a color infrared high-frequency matrix imgT_F*imgTC corresponding to the infrared image according to the color matrix and the infrared high-frequency matrix, wherein imgT_B is used to represent the infrared low-frequency matrix, imgT_F is used to represent the infrared high-frequency matrix, and imgTC is used to represent the color matrix;
and updating the color infrared low-frequency matrix into an infrared low-frequency matrix, and updating the color infrared high-frequency matrix into an infrared high-frequency matrix.
In this alternative embodiment, the color matrix is used to match the infrared component to the number and type of channels of the visible light component during the acquisition of the low-frequency fusion matrix and the high-frequency fusion matrix.
Therefore, the optional embodiment can be implemented to convert the infrared low-frequency matrix and the infrared high-frequency matrix corresponding to the infrared image of the single channel into the color infrared low-frequency matrix and the color infrared high-frequency matrix of the multiple channels, so that the matching degree of the visible light image and the infrared image for image fusion on the image type is improved, and further, the success rate of image fusion and the fusion quality of the image are improved.
In this optional embodiment, as an optional implementation manner, fusing the visible light low-frequency matrix and the infrared low-frequency matrix according to the determined visible light low-frequency weight corresponding to the visible light image and the determined infrared low-frequency weight corresponding to the infrared image to obtain a low-frequency fusion matrix, which may include:
imgO_B=m1*imgC_B+m2*(imgT_B*imgTC);
and fusing the visible light high-frequency matrix and the infrared high-frequency matrix according to the determined visible light high-frequency weight corresponding to the visible light image and the determined infrared high-frequency weight corresponding to the infrared image to obtain a high-frequency fusion matrix, which may include:
imgO_F=n1*imgC_F+n2*(imgT_F*imgTC)。
therefore, the fusion accuracy and reliability of the low-frequency fusion matrix and the high-frequency fusion matrix can be improved, and the image fusion quality is further improved.
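The products imgT_B*imgTC and imgT_F*imgTC expand the single-channel infrared components to three channels before the weighted sums. In numpy this is a broadcast multiply; the shapes below are illustrative assumptions:

```python
import numpy as np

h, w = 4, 4
imgT_B = np.random.rand(h, w)     # single-channel infrared low-frequency matrix
imgTC = np.random.rand(h, w, 3)   # per-pixel color matrix [r, g, b]
# add a trailing channel axis so each infrared value scales all three color channels
color_low = imgT_B[:, :, None] * imgTC
```

The same broadcast applies to the high-frequency pair imgT_F and imgTC.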
In this alternative embodiment, as another alternative implementation, determining a color matrix corresponding to the infrared image may include:
calculating the channel component of the infrared image in each color channel of the visible light image;
determining a color matrix corresponding to the infrared image according to the channel component of the infrared image in each color channel;
when the visible light image is a three-channel image based on an RGB space, calculating a channel component of the infrared image in each color channel of the visible light image, wherein the method comprises the following steps:
according to the temperature distribution information represented by the infrared matrix, determining a first mark value l corresponding to the infrared image1A second flag value l2And a third flag value l3Wherein l is1<l2<l3
According to the infrared matrix, the first mark value l1A second flag value l2And a third flag value l3And calculating the channel component of the infrared image in each color channel of the visible light image:
[The three piecewise channel-component formulas for imgTC_r(i, j), imgTC_g(i, j) and imgTC_b(i, j), defined in terms of imgT(i, j) and the flag values l1, l2 and l3, are given as formula images in the original publication.]
wherein, imgTC _ r (i, j), imgTC _ g (i, j) and imgTC _ b (i, j) are respectively used for representing the channel component of the infrared image in each color channel, and imgT (i, j) is used for representing the infrared matrix.
Taking human body temperature as an example: 37.3–38 ℃ corresponds to a low-fever state, 38–40 ℃ corresponds to a high-fever state, and above 40 ℃ corresponds to a life-threatening state. Therefore, the normalized temperature values corresponding to 37.3 ℃, 38 ℃ and 40 ℃ can be respectively determined as the first flag value l1, the second flag value l2 and the third flag value l3.
In this alternative embodiment, in particular, imgTC _ R (i, j), imgTC _ G (i, j), and imgTC _ B (i, j) are used to represent the channel components of the R channel, G channel, and B channel of the infrared image in RGB space, respectively.
Therefore, the optional implementation mode can determine the color infrared matrix by calculating the channel component of the infrared image in each color channel of the visible light image, so that the matching degree of the visible light image and the infrared image for image fusion on the number and the type of the channels is improved, the success rate of image fusion and the fusion quality of the images are further improved, in addition, the channel component of the infrared image in each color channel of the visible light image is calculated according to a plurality of mark values determined by the temperature representation information of the infrared image, the matching degree of the color matrix and the temperature information is favorably improved, and the discrimination of the main content and the background area in the fused image is favorably improved through the temperature information.
In this optional embodiment, optionally, determining a color matrix corresponding to the infrared image according to the channel component of the infrared image in each color channel may include:
imgTC(i,j)=[imgTC_r(i,j),imgTC_g(i,j),imgTC_b(i,j)]。
this can improve the accuracy and reliability of determining the color matrix.
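Assembling imgTC(i, j) = [imgTC_r(i, j), imgTC_g(i, j), imgTC_b(i, j)] for every pixel is a channel stack. A minimal sketch, with random stand-ins for the three channel-component matrices (their actual piecewise definitions are given as formula images in the original):

```python
import numpy as np

h, w = 4, 4
# stand-ins for the per-channel component matrices imgTC_r, imgTC_g, imgTC_b
imgTC_r, imgTC_g, imgTC_b = (np.random.rand(h, w) for _ in range(3))
# stack along a new last axis: imgTC(i, j) = [r, g, b] for every pixel (i, j)
imgTC = np.stack([imgTC_r, imgTC_g, imgTC_b], axis=-1)  # shape (h, w, 3)
```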
In another optional embodiment, before decomposing the visible light matrix corresponding to the visible light image for image fusion to obtain the visible light decomposition matrix corresponding to the visible light image, the method may further include:
normalizing each element value in a visible light matrix corresponding to the visible light image for image fusion to obtain a normalized visible light matrix corresponding to the visible light image, and updating the normalized visible light matrix into a visible light matrix;
before decomposing the infrared matrix corresponding to the infrared image for image fusion to obtain the infrared decomposition matrix corresponding to the infrared image, the method may further include:
and carrying out normalization processing on each element value in the infrared matrix corresponding to the infrared image for image fusion to obtain a normalized infrared matrix corresponding to the infrared image, and updating the normalized infrared matrix into an infrared matrix.
Therefore, by implementing the optional embodiment, each element value in the visible light matrix and the infrared matrix can be subjected to normalization processing, so that the matching degree of the measurement units of the visible light component and the infrared component in the image fusion process is improved, the situation that the characteristics of one image are lost due to the fact that the visible light component and the infrared component are unbalanced because the statistical distribution of the visible light matrix and the infrared matrix is not uniform is reduced, and the image fusion quality is improved.
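The patent does not fix a normalization formula; min–max normalization to [0, 1] is one common choice and is shown here as an assumption:

```python
import numpy as np

def normalize(mat):
    # min-max normalization to [0, 1]; a constant matrix maps to all zeros
    mn, mx = float(mat.min()), float(mat.max())
    if mx == mn:
        return np.zeros_like(mat, dtype=np.float64)
    return (mat - mn) / (mx - mn)

raw = np.array([[0.0, 128.0], [64.0, 255.0]])  # e.g. raw 8-bit quantization values
norm = normalize(raw)
```

Applying the same normalization to both the visible light matrix and the infrared matrix puts the two components on a common measurement scale before fusion, which is the balance property the paragraph above describes.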
In this optional embodiment, as an optional implementation manner, before performing normalization processing on each element value in the visible light matrix corresponding to the visible light image for image fusion to obtain a normalized visible light matrix corresponding to the visible light image, the method may further include:
judging whether the visible light image is a three-channel image based on an RGB space or not according to a visible light matrix corresponding to the visible light image for image fusion;
and if not, converting the visible light matrix into a three-channel visible light matrix corresponding to the RGB space based on a preset image conversion method, and updating the three-channel visible light matrix into the visible light matrix.
In this alternative embodiment, the image conversion method may optionally include a software conversion method and/or an interface conversion method for converting the image channel format.
Therefore, the optional implementation method can convert the three-channel image not based on the RGB space into the three-channel image based on the RGB space before the normalization processing is carried out on the visible light matrix, so that the matching degree of the image type of the visible light image used for image fusion and the actual requirement is favorably improved, and the accuracy and the reliability of the image fusion are improved.
In this optional embodiment, as another optional implementation, before performing normalization processing on each element value in the infrared matrix corresponding to the infrared image used for image fusion to obtain a normalized infrared matrix corresponding to the infrared image, the method may further include:
determining a temperature value corresponding to the pixel point according to a pixel value corresponding to each pixel point in the infrared image for image fusion, wherein the temperature value is used as an element value corresponding to the pixel point;
and determining an infrared matrix corresponding to the infrared image according to the element value corresponding to each pixel point in the infrared image.
In this optional embodiment, optionally, a pixel value corresponding to each pixel point in the infrared image may include any one of a gray value, a brightness value, and an infrared thermal radiation value corresponding to the pixel point, which is not limited in the embodiment of the present invention.
Therefore, the implementation of the optional implementation mode can convert the pixel value of each pixel point in the infrared image into a temperature value, so that the matching degree of the infrared component for image fusion and the temperature information is favorably improved, and the discrimination of the main content and the background area in the fusion image is favorably improved through the temperature information.
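The mapping from pixel value to temperature value depends on the camera's radiometric calibration, which the patent leaves open. A hypothetical linear mapping for an 8-bit gray value, with assumed scene temperature bounds `t_min`/`t_max`:

```python
import numpy as np

def gray_to_temp(gray, t_min=20.0, t_max=45.0):
    # hypothetical linear radiometric mapping; t_min/t_max are assumed scene bounds
    return t_min + (np.asarray(gray, dtype=np.float64) / 255.0) * (t_max - t_min)

temps = gray_to_temp(np.array([0, 255]))
# → [20.0, 45.0]: the darkest pixel maps to t_min, the brightest to t_max
```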
In yet another optional embodiment, the method may further comprise:
and performing data conversion and packaging processing on the fusion matrix based on a preset data conversion mode to obtain a fusion image with a data format of a preset format.
Therefore, the optional embodiment can also convert the data format of the fused fusion matrix into a preset format, and the matching degree of the image format and the actual requirement is improved.
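One plausible "preset format" is an 8-bit image suitable for display or encoding; the patent does not name the format, so the conversion below is an assumption:

```python
import numpy as np

def pack_uint8(imgO):
    # clip to the normalized range, rescale to [0, 255] and round to 8-bit values
    return (np.clip(imgO, 0.0, 1.0) * 255.0).round().astype(np.uint8)

# out-of-range fusion values are clipped before packing
packed = pack_uint8(np.array([[-0.1, 0.5], [1.2, 1.0]]))
```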
Example two
Referring to fig. 2, fig. 2 is a schematic flow chart of another real-time visible light image and infrared image multi-channel fusion method disclosed in the embodiment of the present invention. The method for fusing a visible light image and an infrared image in real time described in fig. 2 may be applied to a fusion process of a visible light image and an infrared image, including a fusion process of a visible light image and an infrared image that are acquired in real time at the same time and in the same field of view, for example, a real-time fusion process of an infrared detection video and a visible light detection video, which is not limited in the embodiment of the present invention. As shown in fig. 2, the method for multi-channel fusion of a visible light image and an infrared image in real time may include the following operations:
201. and decomposing the visible light matrix corresponding to the visible light image for image fusion to obtain a visible light low-frequency matrix corresponding to the visible light image and a visible light high-frequency matrix corresponding to the visible light image.
202. And decomposing the infrared matrix corresponding to the infrared image for image fusion to obtain an infrared low-frequency matrix corresponding to the infrared image and an infrared high-frequency matrix corresponding to the infrared image.
203. And fusing the visible light low-frequency matrix and the infrared low-frequency matrix according to the determined visible light low-frequency weight corresponding to the visible light image and the determined infrared low-frequency weight corresponding to the infrared image to obtain a low-frequency fusion matrix.
204. And determining the visible light high-frequency weight corresponding to the visible light image and the infrared high-frequency weight corresponding to the infrared image.
As an alternative embodiment, determining the visible light high-frequency weight corresponding to the visible light image and the infrared high-frequency weight corresponding to the infrared image may include:
calculating a visible light norm matrix corresponding to the visible light image according to a visible light Gaussian filter matrix obtained after the visible light matrix is subjected to Gaussian filtering and a visible light median filter matrix obtained after the visible light matrix is subjected to median filtering, and calculating an infrared norm matrix corresponding to the infrared image according to an infrared Gaussian filter matrix obtained after the infrared matrix is subjected to Gaussian filtering and an infrared median filter matrix obtained after the infrared matrix is subjected to median filtering:
imgC_D(i, j) = ||imgC_G(i, j) - imgC_M(i, j)||_p
imgT_D(i, j) = ||imgT_G(i, j) - imgT_M(i, j)||_p
where (i, j) denotes the coordinate corresponding to each pixel point in the visible light image and the infrared image, imgC_D(i, j) is used for representing the visible light norm matrix, imgC_G(i, j) is used for representing the visible light Gaussian filter matrix, imgC_M(i, j) is used for representing the visible light median filter matrix, imgT_D(i, j) is used for representing the infrared norm matrix, imgT_G(i, j) is used for representing the infrared Gaussian filter matrix, and imgT_M(i, j) is used for representing the infrared median filter matrix;
according to the visible light norm matrix and the infrared norm matrix, calculating visible light high-frequency weight corresponding to the visible light image and infrared high-frequency weight corresponding to the infrared image:
(The formulas for imgC_W(i, j) and imgT_W(i, j) are published as images in the original filing and are not reproduced here; they combine the norm matrices imgC_D(i, j) and imgT_D(i, j) with the weight values a and b.)
wherein imgC_W(i, j) is used for representing the visible light high-frequency weight, imgT_W(i, j) is used for representing the infrared high-frequency weight, a and b are preset weight values, and a + b = 1.
In this alternative embodiment, preferably, a = b = 0.5.
In this optional embodiment, optionally, the p value in the norm matrix calculation process may be preset, or may be determined by image information of the visible light image and the infrared image, and further, the p value in the visible light norm matrix calculation process and the p value in the infrared norm matrix calculation process may be equal or unequal, which is not limited in this optional embodiment.
Therefore, by implementing this optional implementation, the visible light high-frequency weight and the infrared high-frequency weight can be calculated from the subtraction result of the Gaussian filter matrix and the median filter matrix corresponding to the visible light image and the corresponding subtraction result for the infrared image. This makes the image frequency bands used for calculating the two high-frequency weights closer, which improves the accuracy of the weight calculation and the degree to which the calculated weights fit the fusion requirement, so that the transition between the visible light image and the infrared image is smoother and the image fusion quality is improved. In addition, because the high-frequency weights are calculated from the norm matrix of the subtraction result, the weight corresponding to each pixel point in the multi-channel image can be simplified into a scalar, which reduces the computing power required in the image fusion process and improves the image fusion efficiency.
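As an illustrative sketch only: the weight formulas themselves appear only as images in the published text, so the normalized form below (the two weights sum to 1 at every pixel, defaulting to a = b = 0.5), the function name, and the eps guard are all assumptions.

```python
import numpy as np

def highfreq_weights(imgC_D, imgT_D, a=0.5, b=0.5, eps=1e-12):
    # Assumed normalized weighting: the patent publishes its exact weight
    # formulas only as images. With a + b = 1, the returned weights sum
    # to 1 at every pixel; eps guards against a zero denominator.
    denom = a * imgC_D + b * imgT_D + eps
    return a * imgC_D / denom, b * imgT_D / denom
```

When the two norm matrices agree at a pixel, the weights reduce to a and b, consistent with the preferred a = b = 0.5.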
205. Fuse the visible light high-frequency matrix and the infrared high-frequency matrix according to the determined visible light high-frequency weight corresponding to the visible light image and the determined infrared high-frequency weight corresponding to the infrared image to obtain a high-frequency fusion matrix.
In the embodiment of the present invention, as an optional implementation manner, fusing the visible light high-frequency matrix and the infrared high-frequency matrix according to the determined visible light high-frequency weight corresponding to the visible light image and the determined infrared high-frequency weight corresponding to the infrared image to obtain a high-frequency fusion matrix, which may include:
imgO_F=imgC_W*imgC_F+imgT_W*imgT_F*imgTC;
wherein imgO_F is used for representing the high-frequency fusion matrix, imgC_F is used for representing the visible light high-frequency matrix, imgT_F is used for representing the infrared high-frequency matrix, imgTC is used for representing the color infrared matrix, and the products are taken element by element.
206. Fuse the low-frequency fusion matrix and the high-frequency fusion matrix based on a preset fusion mode to generate a fusion matrix corresponding to the visible light image and the infrared image, where the fusion matrix is used for generating a fusion image of the visible light image and the infrared image.
Therefore, by fusing the visible light image and the infrared image, the embodiment of the present invention gives the generated fusion image the advantages of the visible light image (large information amount, high pixel intensity, and intuitive image details) together with the advantages of the infrared image (stable image quality and high discrimination between the main content and the background area). This reduces the degree to which the environmental conditions of image data acquisition affect image quality, helps improve the discrimination between the main content and the background area in the image while improving the fineness of the image texture, further improves the layering of the image, and reduces the difficulty of identifying a target that needs to be highlighted. In addition, by separately fusing the low-frequency components and the high-frequency components of the visible light image and the infrared image, the frequency bands used for fusion can be made closer, so that the transition between the two images is smoother and the fusion quality is improved. Finally, determining the weights corresponding to the visible light components and the infrared components improves the rationality and accuracy of image fusion, further improving the fusion quality.
It should be noted that, in other embodiments, the execution order of step 201 and step 202 has no precedence relationship, the execution order of step 203 and step 205 has no precedence relationship, and the execution order of any step in step 201 to step 203 and step 204 has no precedence relationship.
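A minimal single-channel sketch of the flow of steps 201 to 206, assuming a 3x3 mean-filter decomposition, constant 0.5/0.5 weights in place of the per-pixel weights of steps 203 to 205, and simple addition as the "preset fusion mode" of step 206 (all of these choices, and the function name, are illustrative):

```python
import numpy as np

def fuse_simplified(imgC, imgT):
    # Illustrative end-to-end flow; constant example weights replace the
    # per-pixel weights of steps 203-205 for brevity.
    def decompose(img):                       # steps 201 / 202
        h, w = img.shape
        p = np.pad(img.astype(float), 1, mode='edge')
        low = sum(p[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
        return low, img - low                 # low- and high-frequency parts
    cB, cF = decompose(imgC)
    tB, tF = decompose(imgT)
    low = 0.5 * cB + 0.5 * tB                 # step 203
    high = 0.5 * cF + 0.5 * tF                # steps 204-205 (constant weights)
    return low + high                         # step 206: additive recombination
```

With identical inputs the decomposition and recombination cancel exactly, so the sketch returns the input image unchanged.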
In an optional embodiment, before calculating the visible light norm matrix corresponding to the visible light image according to the visible light gaussian filter matrix obtained after the visible light matrix is subjected to the gaussian filter processing and the visible light median filter matrix obtained after the visible light matrix is subjected to the median filter processing, the method further includes:
performing convolution operation processing on the visible light matrix based on a predetermined first Gaussian filter operator for Gaussian convolution operation and a first Gaussian neighborhood window size corresponding to the Gaussian convolution operation to obtain a visible light Gaussian filter matrix corresponding to the visible light image:
imgC_G=conv(gas1,imgC),
wherein imgC_G is used to represent the visible light Gaussian filter matrix, gas1 is used to represent the first Gaussian filter operator, imgC is used to represent the visible light matrix, and conv(gas1, imgC) is used to represent the convolution algorithm applied to gas1 and imgC;
and performing median filtering processing on the visible light matrix based on a predetermined first median neighborhood window size used for median filtering operation to obtain a visible light median filtering matrix corresponding to the visible light image:
imgC_M=medf(imgC,u1),
wherein imgC_M is used to represent the visible light median filter matrix, u1 is used to represent the first median neighborhood window size, and medf(imgC, u1) is used to represent the median filtering algorithm applied to imgC;
before calculating the infrared norm matrix corresponding to the infrared image according to the infrared gaussian filter matrix obtained after the infrared matrix is subjected to gaussian filter processing and the infrared median filter matrix obtained after the infrared matrix is subjected to median filter processing, the method may further include:
performing convolution operation processing on the infrared matrix based on a predetermined second Gaussian filter operator for Gaussian convolution operation and a second Gaussian neighborhood window size corresponding to the Gaussian convolution operation to obtain an infrared Gaussian filter matrix corresponding to the infrared image:
imgT_G=conv(gas2,imgT),
wherein imgT_G is used to represent the infrared Gaussian filter matrix, gas2 is used to represent the second Gaussian filter operator, imgT is used to represent the infrared matrix, and conv(gas2, imgT) is used to represent the convolution algorithm applied to gas2 and imgT;
and performing median filtering processing on the infrared matrix based on a predetermined second median neighborhood window size used for median filtering operation to obtain an infrared median filtering matrix corresponding to the infrared image:
imgT_M=medf(imgT,u2),
wherein imgT_M is used to represent the infrared median filter matrix, u2 is used to represent the second median neighborhood window size, and medf(imgT, u2) is used to represent the median filtering algorithm applied to imgT.
In this optional embodiment, optionally, the first Gaussian neighborhood window size, the second Gaussian neighborhood window size, the first median neighborhood window size, and the second median neighborhood window size are odd integers. Further optionally, the first Gaussian neighborhood window size matches the second Gaussian neighborhood window size, the first median neighborhood window size matches the second median neighborhood window size, and the first Gaussian filter operator matches the second Gaussian filter operator. The terms "first" and "second" above are used only to distinguish the parameters of the operations corresponding to the visible light matrix and the infrared matrix, and do not describe a specific order.
Therefore, the implementation of the optional embodiment can perform Gaussian filtering processing and median filtering processing on the visible light matrix and the infrared matrix based on the preset algorithm, and the accuracy and reliability of the Gaussian filtering processing and the median filtering processing of the visible light matrix and the infrared matrix are improved.
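The Gaussian and median filtering steps above can be sketched with plain NumPy as follows; the window sizes, sigma, and edge padding are illustrative defaults rather than values fixed by the patent:

```python
import numpy as np

def gaussian_kernel(n=3, sigma=1.0):
    # n-by-n normalized Gaussian operator (n odd).
    ax = np.arange(n) - n // 2
    g = np.exp(-ax**2 / (2.0 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def conv2d(img, k):
    # 'same'-sized correlation with edge padding; for a symmetric
    # Gaussian operator this equals true convolution.
    h, w = img.shape
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    p = np.pad(img.astype(float), ((ph, ph), (pw, pw)), mode='edge')
    out = np.zeros((h, w))
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def medf(img, u=3):
    # u-by-u median filter with edge padding (u odd).
    h, w = img.shape
    ph = u // 2
    p = np.pad(img.astype(float), ph, mode='edge')
    windows = np.stack([p[i:i + h, j:j + w]
                        for i in range(u) for j in range(u)])
    return np.median(windows, axis=0)
```

With these helpers, conv2d(imgC, gaussian_kernel()) and medf(imgC, 3) stand in for conv(gas1, imgC) and medf(imgC, u1) above; for a single channel the per-pixel p-norm of their difference reduces to an absolute value, np.abs(imgC_G - imgC_M).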
EXAMPLE III
Referring to fig. 3, fig. 3 is a schematic structural diagram of a real-time visible light image and infrared image multi-channel fusion device according to an embodiment of the present invention. The real-time visible light image and infrared image multi-channel fusion device described in fig. 3 may be applied to a fusion process of a visible light image and an infrared image, including a fusion process of a visible light image and an infrared image that are acquired in real time at the same time and in the same field of view, for example, a real-time fusion process of an infrared detection video and a visible light detection video, which is not limited in the embodiment of the present invention. As shown in fig. 3, the device for fusing a visible light image and an infrared image in real time by multiple channels may include:
the decomposition module 301 is configured to decompose a visible light matrix corresponding to a visible light image for image fusion to obtain a visible light low-frequency matrix corresponding to the visible light image and a visible light high-frequency matrix corresponding to the visible light image, and decompose an infrared matrix corresponding to an infrared image for image fusion to obtain an infrared low-frequency matrix corresponding to the infrared image and an infrared high-frequency matrix corresponding to the infrared image, where each element value in the visible light matrix corresponds to a color quantization value of one of pixel points in the visible light image, and each element value in the infrared matrix corresponds to a temperature quantization value of one of pixel points in the infrared image;
the fusion module 302 is configured to fuse the visible light low-frequency matrix and the infrared low-frequency matrix according to the determined visible light low-frequency weight corresponding to the visible light image and the determined infrared low-frequency weight corresponding to the infrared image to obtain a low-frequency fusion matrix, and fuse the visible light high-frequency matrix and the infrared high-frequency matrix according to the determined visible light high-frequency weight corresponding to the visible light image and the determined infrared high-frequency weight corresponding to the infrared image to obtain a high-frequency fusion matrix;
the generating module 303 is configured to fuse the low-frequency fusion matrix and the high-frequency fusion matrix based on a preset fusion mode to generate a fusion matrix corresponding to the visible light image and the infrared image, where the fusion matrix is used to generate a fusion image of the visible light image and the infrared image.
It can be seen that implementing the apparatus described in fig. 3 gives the generated fusion image the advantages of the visible light image (large information amount, high pixel intensity, and intuitive image details) together with the advantages of the infrared image (stable image quality and high discrimination between the main content and the background area), and reduces the degree to which the environmental conditions of image data acquisition affect image quality. This helps improve the discrimination between the main content and the background area in the image, improves the fineness of the image texture, further improves the image hierarchy, and reduces the difficulty of identifying the target to be highlighted. In addition, by fusing the low-frequency components of the visible light image and the infrared image, and separately fusing their high-frequency components, the frequency bands used for fusion can be made closer, so that the transition between the visible light image and the infrared image is smoother and the image fusion quality is improved.
In an alternative embodiment, as shown in fig. 3, a specific manner of decomposing the visible light matrix corresponding to the visible light image for image fusion by the decomposition module 301 to obtain the visible light low-frequency matrix corresponding to the visible light image and the visible light high-frequency matrix corresponding to the visible light image may include:
performing convolution operation processing on a visible light matrix corresponding to the visible light image for image fusion based on a predetermined mean filtering operator for mean convolution operation and a neighborhood window size corresponding to the mean convolution operation to obtain a visible light low-frequency matrix corresponding to the visible light image:
imgC_B=conv(avg,imgC),
where imgC_B is used to represent the visible light low-frequency matrix, avg is used to represent the mean filter operator, imgC is used to represent the visible light matrix, and conv(avg, imgC) is used to represent the convolution algorithm applied to avg and imgC;
determining a visible light high-frequency matrix corresponding to the visible light image based on the visible light matrix and the visible light low-frequency matrix:
imgC_F=imgC-imgC_B,
wherein imgC _ F is used to represent the visible light high frequency matrix.
Therefore, the device described in the embodiment of fig. 3 can decompose the visible light matrix into the visible light low-frequency matrix and the visible light high-frequency matrix based on the mean value filtering algorithm, so that the accuracy and the reliability of decomposition of the visible light matrix are improved, and the fusion quality of the visible light image and the infrared image is favorably improved.
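The mean-filter decomposition performed by the decomposition module can be sketched as follows (NumPy only; the neighborhood window size n is a free parameter and the edge-padding mode is an assumption):

```python
import numpy as np

def decompose_visible(imgC, n=3):
    # imgC_B = conv(avg, imgC): n-by-n mean filtering (n odd) with edge
    # padding, then imgC_F = imgC - imgC_B as in the embodiment above.
    h, w = imgC.shape
    ph = n // 2
    p = np.pad(imgC.astype(float), ph, mode='edge')
    imgC_B = sum(p[i:i + h, j:j + w]
                 for i in range(n) for j in range(n)) / (n * n)
    imgC_F = imgC - imgC_B
    return imgC_B, imgC_F
```

By construction the two parts recombine exactly: imgC_B + imgC_F == imgC, so no information is lost in the decomposition.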
In another alternative embodiment, as shown in fig. 4, the apparatus may further include:
a first determining module 304, configured to determine a visible light high-frequency weight corresponding to the visible light image and an infrared high-frequency weight corresponding to the infrared image before the fusing module 302 fuses the visible light high-frequency matrix and the infrared high-frequency matrix according to the determined visible light high-frequency weight corresponding to the visible light image and the determined infrared high-frequency weight corresponding to the infrared image to obtain a high-frequency fusion matrix;
the specific manner of determining the visible light high-frequency weight corresponding to the visible light image and the infrared high-frequency weight corresponding to the infrared image by the first determining module 304 may include:
calculating a visible light norm matrix corresponding to the visible light image according to a visible light Gaussian filter matrix obtained after the visible light matrix is subjected to Gaussian filtering and a visible light median filter matrix obtained after the visible light matrix is subjected to median filtering, and calculating an infrared norm matrix corresponding to the infrared image according to an infrared Gaussian filter matrix obtained after the infrared matrix is subjected to Gaussian filtering and an infrared median filter matrix obtained after the infrared matrix is subjected to median filtering:
imgC_D(i, j) = ||imgC_G(i, j) - imgC_M(i, j)||_p
imgT_D(i, j) = ||imgT_G(i, j) - imgT_M(i, j)||_p
where (i, j) denotes the coordinate corresponding to each pixel point in the visible light image and the infrared image, imgC_D(i, j) is used for representing the visible light norm matrix, imgC_G(i, j) is used for representing the visible light Gaussian filter matrix, imgC_M(i, j) is used for representing the visible light median filter matrix, imgT_D(i, j) is used for representing the infrared norm matrix, imgT_G(i, j) is used for representing the infrared Gaussian filter matrix, and imgT_M(i, j) is used for representing the infrared median filter matrix;
according to the visible light norm matrix and the infrared norm matrix, calculating visible light high-frequency weight corresponding to the visible light image and infrared high-frequency weight corresponding to the infrared image:
(The formulas for imgC_W(i, j) and imgT_W(i, j) are published as images in the original filing and are not reproduced here; they combine the norm matrices imgC_D(i, j) and imgT_D(i, j) with the weight values a and b.)
wherein imgC_W(i, j) is used for representing the visible light high-frequency weight, imgT_W(i, j) is used for representing the infrared high-frequency weight, a and b are preset weight values, and a + b = 1.
Therefore, implementing the apparatus described in fig. 4 allows the visible light high-frequency weight and the infrared high-frequency weight to be calculated from the subtraction result of the Gaussian filter matrix and the median filter matrix corresponding to the visible light image and the corresponding subtraction result for the infrared image. This makes the image frequency bands used for calculating the two high-frequency weights closer, improving the accuracy of the weight calculation and the degree to which the calculated weights fit the fusion requirement, so that the transition between the visible light image and the infrared image is smoother and the image fusion quality is improved. In addition, because the high-frequency weights are calculated from the norm matrix of the subtraction result, the weight corresponding to each pixel point in the multi-channel image can be simplified into a scalar, reducing the computing power required in the image fusion process and improving the image fusion efficiency.
In yet another alternative embodiment, as shown in fig. 4, the apparatus may further include:
a second determining module 305, configured to determine a color matrix corresponding to the infrared image before the fusing module 302 fuses the visible light low-frequency matrix and the infrared low-frequency matrix according to the determined visible light low-frequency weight corresponding to the visible light image and the determined infrared low-frequency weight corresponding to the infrared image to obtain a low-frequency fusion matrix; and to determine a color infrared low-frequency matrix imgT_B * imgTC corresponding to the infrared image according to the color matrix and the infrared low-frequency matrix, and determine a color infrared high-frequency matrix imgT_F * imgTC corresponding to the infrared image according to the color matrix and the infrared high-frequency matrix, where imgT_B is used for representing the infrared low-frequency matrix, imgT_F is used for representing the infrared high-frequency matrix, and imgTC is used for representing the color matrix;
and the updating module 306 is configured to update the color infrared low-frequency matrix into an infrared low-frequency matrix and update the color infrared high-frequency matrix into an infrared high-frequency matrix.
Therefore, the device described in fig. 4 can also convert the infrared low-frequency matrix and the infrared high-frequency matrix corresponding to the infrared image of a single channel into a multi-channel color infrared low-frequency matrix and a multi-channel color infrared high-frequency matrix, so as to improve the matching degree of the visible light image and the infrared image for image fusion on the image type, and further improve the success rate of image fusion and the fusion quality of the image.
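The single-channel-to-three-channel products imgT_B * imgTC and imgT_F * imgTC can be read as element-wise products with channel broadcasting, sketched below (the element-wise interpretation and the function name are assumptions):

```python
import numpy as np

def colorize(imgT_component, imgTC):
    # Multiply a single-channel infrared component (H, W) into the
    # three-channel color matrix imgTC (H, W, 3); the trailing-axis
    # broadcast replicates each scalar across the color channels.
    return imgT_component[..., None] * imgTC
```

The result has the same (H, W, 3) shape as the visible light matrix, which is what lets the subsequent fusion steps combine the two sources channel by channel.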
In yet another alternative embodiment, as shown in fig. 4, a specific manner of determining the color matrix corresponding to the infrared image by the second determining module 305 may include:
calculating the channel component of the infrared image in each color channel of the visible light image;
determining a color matrix corresponding to the infrared image according to the channel component of the infrared image in each color channel;
when the visible light image is a three-channel image based on an RGB space, the specific manner of calculating the channel component of the infrared image in each color channel of the visible light image by the second determining module may include:
according to the temperature distribution information represented by the infrared matrix, determining a first flag value l1, a second flag value l2, and a third flag value l3 corresponding to the infrared image, where l1 < l2 < l3;
according to the infrared matrix, the first flag value l1, the second flag value l2, and the third flag value l3, calculating the channel component of the infrared image in each color channel of the visible light image:
(The channel-component formulas for imgTC_r(i, j), imgTC_g(i, j), and imgTC_b(i, j) are published as images in the original filing and are not reproduced here; they are piecewise functions of imgT(i, j) and the flag values l1, l2, and l3.)
wherein imgTC_r(i, j), imgTC_g(i, j), and imgTC_b(i, j) are respectively used for representing the channel components of the infrared image in each color channel, and imgT(i, j) is used for representing the infrared matrix.
It can be seen that implementing the apparatus described in fig. 4 can also determine a color infrared matrix by calculating the channel component of the infrared image in each color channel of the visible light image, thereby improving the matching degree, in the number and types of channels, between the visible light image and the infrared image used for image fusion, and further improving the success rate and quality of image fusion. In addition, calculating the channel components from several flag values determined by the temperature characterization information of the infrared image helps improve the matching degree between the color matrix and the temperature information, which in turn helps improve the discrimination between the main content and the background area in the fusion image through the temperature information.
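Since the channel-component formulas are published only as images, the sketch below substitutes a hypothetical piecewise-linear mapping driven by the three flag values (cooler pixels toward blue, mid-range toward green, hotter toward red, a common convention); the exact shape of the mapping is entirely an assumption:

```python
import numpy as np

def channel_components(imgT, l1, l2, l3):
    # Hypothetical piecewise-linear colormap: the patent's formulas for
    # imgTC_r, imgTC_g, imgTC_b are not reproduced in the published text.
    # Requires l1 < l2 < l3.
    t = imgT.astype(float)
    r = np.clip((t - l2) / (l3 - l2), 0.0, 1.0)              # ramps up above l2
    g = np.clip(1.0 - np.abs(t - l2) / (l2 - l1), 0.0, 1.0)  # peaks at l2
    b = np.clip((l2 - t) / (l2 - l1), 0.0, 1.0)              # ramps up below l2
    return r, g, b
```

Stacking the three components along a trailing axis then yields the (H, W, 3) color matrix imgTC used by the second determining module.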
In yet another alternative embodiment, as shown in fig. 4, the apparatus may further include:
a normalization module 307, configured to perform normalization processing on each element value in the visible light matrix corresponding to the visible light image for image fusion, before the decomposition module 301 decomposes that visible light matrix, to obtain a normalized visible light matrix corresponding to the visible light image; and further configured to perform normalization processing on each element value in the infrared matrix corresponding to the infrared image for image fusion, before the decomposition module 301 decomposes that infrared matrix, to obtain a normalized infrared matrix corresponding to the infrared image;
the updating module 306 is further configured to update the normalized visible light matrix into a visible light matrix, and update the normalized infrared matrix into an infrared matrix.
Therefore, the device described in fig. 4 can also perform normalization processing on each element value in the visible light matrix and the infrared matrix to improve the matching degree of the measurement units of the visible light component and the infrared component in the image fusion process, reduce the occurrence of the situation that the characteristics of one image are lost due to imbalance of the visible light component and the infrared component caused by non-uniform statistical distribution of the visible light matrix and the infrared matrix, and improve the image fusion quality.
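A minimal normalization sketch; the patent does not fix the exact scheme, so min-max scaling to [0, 1] here is an assumption:

```python
import numpy as np

def normalize(m, eps=1e-12):
    # Min-max normalize a matrix to [0, 1]; eps avoids division by
    # zero for a constant-valued matrix.
    m = m.astype(float)
    return (m - m.min()) / (m.max() - m.min() + eps)
```

Applying the same mapping to both the visible light matrix and the infrared matrix puts color quantization values and temperature quantization values on a common scale before decomposition.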
In yet another alternative embodiment, as shown in fig. 4, the apparatus may further include:
a judging module 308, configured to, before the normalizing module 307 performs normalization processing on each element value in the visible light matrix corresponding to the visible light image for image fusion to obtain a normalized visible light matrix corresponding to the visible light image, judge whether the visible light image is a three-channel image based on an RGB space according to the visible light matrix corresponding to the visible light image for image fusion;
a conversion module 309, configured to, when the determination module 308 determines that the visible light image is not a three-channel image, convert the visible light matrix into a three-channel visible light matrix corresponding to the RGB space based on a preset image conversion method;
the updating module 306 is further configured to update the three-channel visible light matrix to a visible light matrix.
Therefore, the device described in fig. 4 can also convert the three-channel image not based on the RGB space into the three-channel image based on the RGB space before the normalization processing is performed on the visible light matrix, which is beneficial to improving the matching degree of the image type of the visible light image used for image fusion and the actual requirement, and improving the accuracy and reliability of image fusion.
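The judgment and conversion performed by modules 308 and 309 can be sketched as follows; promoting a grayscale matrix to three channels by replication is just one possible "preset image conversion method", so treat it as an assumption:

```python
import numpy as np

def ensure_rgb(img):
    # Pass a three-channel RGB-space matrix through unchanged; promote a
    # single-channel matrix by replicating it into each color channel.
    if img.ndim == 3 and img.shape[2] == 3:
        return img
    return np.repeat(img[..., None], 3, axis=2)
```

A fuller implementation would also handle other color spaces (e.g. converting a YUV or BGR matrix into RGB) rather than only the grayscale case shown here.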
EXAMPLE IV
Referring to fig. 5, fig. 5 is a schematic structural diagram of another real-time visible light image and infrared image multi-channel fusion device disclosed in the embodiment of the present invention. As shown in fig. 5, the device for fusing a visible light image and an infrared image in real time in multiple channels may include:
a memory 401 storing executable program code;
a processor 402 coupled with the memory 401;
the processor 402 calls the executable program code stored in the memory 401 to execute the steps of the method for fusing the visible light image and the infrared image in multiple channels in real time according to the first embodiment or the second embodiment of the present invention.
EXAMPLE V
The embodiment of the invention discloses a computer storage medium, which stores computer instructions, and the computer instructions are used for executing the steps of the real-time visible light image and infrared image multi-channel fusion method described in the first embodiment or the second embodiment of the invention when being called.
EXAMPLE VI
The embodiment of the invention discloses a computer program product, which comprises a non-transitory computer readable storage medium storing a computer program, wherein the computer program is operable to make a computer execute the steps of the real-time visible light image and infrared image multi-channel fusion method described in the first embodiment or the second embodiment.
The above-described embodiments of the apparatus are merely illustrative, and the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above detailed description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by software plus a necessary general hardware platform, and may also be implemented by hardware. Based on such understanding, the above technical solutions may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, where the storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disk memory, a magnetic disk memory, a tape memory, or any other computer-readable medium that can be used to carry or store data.
Finally, it should be noted that the method and the device for real-time multi-channel fusion of visible light images and infrared images disclosed in the embodiments of the present invention are only preferred embodiments of the present invention, and are only used to illustrate the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A real-time visible light image and infrared image multi-channel fusion method is characterized by comprising the following steps:
decomposing a visible light matrix corresponding to a visible light image for image fusion to obtain a visible light low-frequency matrix corresponding to the visible light image and a visible light high-frequency matrix corresponding to the visible light image, and decomposing an infrared matrix corresponding to an infrared image for image fusion to obtain an infrared low-frequency matrix corresponding to the infrared image and an infrared high-frequency matrix corresponding to the infrared image, wherein each element value in the visible light matrix corresponds to a color quantized value of one of pixel points in the visible light image, and each element value in the infrared matrix corresponds to a temperature quantized value of one of pixel points in the infrared image;
fusing the visible light low-frequency matrix and the infrared low-frequency matrix according to the determined visible light low-frequency weight corresponding to the visible light image and the determined infrared low-frequency weight corresponding to the infrared image to obtain a low-frequency fusion matrix, and fusing the visible light high-frequency matrix and the infrared high-frequency matrix according to the determined visible light high-frequency weight corresponding to the visible light image and the determined infrared high-frequency weight corresponding to the infrared image to obtain a high-frequency fusion matrix;
and fusing the low-frequency fusion matrix and the high-frequency fusion matrix based on a preset fusion mode to generate a fusion matrix corresponding to the visible light image and the infrared image, wherein the fusion matrix is used for generating a fusion image of the visible light image and the infrared image.
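Once the four sub-band matrices of claim 1 are available, the second and third steps reduce to weighted pixelwise sums. A minimal Python/NumPy sketch, assuming scalar weights and assuming that the "preset fusion mode" is a pixelwise addition of the two fused bands (the claim leaves both open):

```python
import numpy as np

def fuse_bands(C_B, C_F, T_B, T_F, wC_low, wT_low, wC_high, wT_high):
    """Weighted per-band fusion (claim 1, step 2) followed by recombination
    (step 3). Additive recombination is an assumption; the claim only
    requires some preset fusion mode."""
    low = wC_low * C_B + wT_low * T_B      # low-frequency fusion matrix
    high = wC_high * C_F + wT_high * T_F   # high-frequency fusion matrix
    return low + high                      # assumed recombination
```

With weights (1, 0, 1, 0) the output reproduces the visible light image exactly, which is a convenient sanity check of the decomposition and recombination.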
2. The method for multi-channel fusion of a visible light image and an infrared image in real time according to claim 1, wherein decomposing a visible light matrix corresponding to the visible light image for image fusion to obtain a visible light low-frequency matrix corresponding to the visible light image and a visible light high-frequency matrix corresponding to the visible light image comprises:
performing convolution operation processing on a visible light matrix corresponding to a visible light image for image fusion based on a predetermined mean filtering operator for mean convolution operation and a neighborhood window size corresponding to the mean convolution operation to obtain a visible light low-frequency matrix corresponding to the visible light image:
imgC_B=conv(avg,imgC),
wherein imgC_B is used to represent the visible light low-frequency matrix, avg is used to represent the mean filtering operator, imgC is used to represent the visible light matrix, and conv(avg, imgC) is used to represent the convolution operation of avg and imgC;
determining a visible light high-frequency matrix corresponding to the visible light image based on the visible light matrix and the visible light low-frequency matrix:
imgC_F=imgC-imgC_B,
wherein imgC_F is used to represent the visible light high-frequency matrix.
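Claim 2's decomposition follows directly from the two formulas: a mean convolution produces imgC_B, and the high-frequency part is the residual. A Python/NumPy illustration (the kernel size and edge padding are assumptions, since the claim leaves the neighborhood window size as a parameter):

```python
import numpy as np

def decompose(imgC, k=3):
    """imgC_B = conv(avg, imgC); imgC_F = imgC - imgC_B.
    k is the neighborhood window size; edge padding is an assumption."""
    avg = np.full((k, k), 1.0 / (k * k))   # mean filtering operator
    pad = k // 2
    p = np.pad(imgC, pad, mode='edge')
    imgC_B = np.empty(imgC.shape, dtype=float)
    for i in range(imgC.shape[0]):
        for j in range(imgC.shape[1]):
            imgC_B[i, j] = (avg * p[i:i + k, j:j + k]).sum()
    imgC_F = imgC - imgC_B
    return imgC_B, imgC_F
```

By construction imgC_B + imgC_F reconstructs imgC exactly, so the decomposition is lossless; a constant image yields a zero high-frequency matrix.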
3. The method for multi-channel fusion of the visible light image and the infrared image in real time according to claim 1 or 2, wherein before the step of fusing the visible light high-frequency matrix and the infrared high-frequency matrix according to the determined visible light high-frequency weight corresponding to the visible light image and the determined infrared high-frequency weight corresponding to the infrared image to obtain the high-frequency fusion matrix, the method further comprises:
determining a visible light high-frequency weight corresponding to the visible light image and an infrared high-frequency weight corresponding to the infrared image;
wherein the determining the visible light high-frequency weight corresponding to the visible light image and the infrared high-frequency weight corresponding to the infrared image comprises:
calculating a visible light norm matrix corresponding to the visible light image according to a visible light Gaussian filter matrix obtained after the visible light matrix is subjected to Gaussian filtering and a visible light median filter matrix obtained after the visible light matrix is subjected to median filtering, and calculating an infrared norm matrix corresponding to the infrared image according to an infrared Gaussian filter matrix obtained after the infrared matrix is subjected to Gaussian filtering and an infrared median filter matrix obtained after the infrared matrix is subjected to median filtering:
imgC_D(i,j) = ||imgC_G(i,j) - imgC_M(i,j)||_p,
imgT_D(i,j) = ||imgT_G(i,j) - imgT_M(i,j)||_p,
wherein (i, j) is used to represent the coordinate values corresponding to each pixel point in the visible light image and the infrared image, imgC_D(i, j) is used to represent the visible light norm matrix, imgC_G(i, j) is used to represent the visible light Gaussian filter matrix, imgC_M(i, j) is used to represent the visible light median filter matrix, imgT_D(i, j) is used to represent the infrared norm matrix, imgT_G(i, j) is used to represent the infrared Gaussian filter matrix, and imgT_M(i, j) is used to represent the infrared median filter matrix;
according to the visible light norm matrix and the infrared norm matrix, calculating a visible light high-frequency weight corresponding to the visible light image and an infrared high-frequency weight corresponding to the infrared image:
(the visible light high-frequency weight formula and the infrared high-frequency weight formula are rendered as equation images in the original publication)
wherein imgC_W(i, j) is used to represent the visible light high-frequency weight, imgT_W(i, j) is used to represent the infrared high-frequency weight, a and b are preset weight values, and a + b = 1.
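The weight formulas themselves appear only as equation images in the source, so the sketch below assumes a common saliency-normalized form consistent with the stated constraint a + b = 1: each pixel's weight is proportional to the norm of its Gaussian-minus-median difference. The function name and the choice p = 1 are illustrative assumptions.

```python
import numpy as np

def high_freq_weights(imgC_G, imgC_M, imgT_G, imgT_M, a=0.5, b=0.5, eps=1e-12):
    """Per-pixel high-frequency weights from the norm matrices.
    A saliency-normalized form is ASSUMED; the claim's exact weight
    expressions are not reproduced in the source text."""
    imgC_D = np.abs(imgC_G - imgC_M)   # visible light norm matrix (p = 1)
    imgT_D = np.abs(imgT_G - imgT_M)   # infrared norm matrix (p = 1)
    s = a * imgC_D + b * imgT_D + eps  # eps guards against all-zero saliency
    imgC_W = a * imgC_D / s
    imgT_W = b * imgT_D / s
    return imgC_W, imgT_W
```

Wherever either saliency is nonzero, the two weights sum to (almost exactly) one, so the fused high-frequency band stays in the range of its inputs.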
4. The method for multi-channel fusion of the visible light image and the infrared image in real time according to claim 1 or 2, wherein before the step of fusing the visible light low-frequency matrix and the infrared low-frequency matrix according to the determined visible light low-frequency weight corresponding to the visible light image and the determined infrared low-frequency weight corresponding to the infrared image to obtain the low-frequency fusion matrix, the method further comprises:
determining a color matrix corresponding to the infrared image;
determining a color infrared low-frequency matrix imgT_B·imgTC corresponding to the infrared image according to the color matrix and the infrared low-frequency matrix, and determining a color infrared high-frequency matrix imgT_F·imgTC corresponding to the infrared image according to the color matrix and the infrared high-frequency matrix, wherein imgT_B is used to represent the infrared low-frequency matrix, imgT_F is used to represent the infrared high-frequency matrix, and imgTC is used to represent the color matrix;
and updating the color infrared low-frequency matrix into the infrared low-frequency matrix, and updating the color infrared high-frequency matrix into the infrared high-frequency matrix.
5. The method for multi-channel fusion of a visible light image and an infrared image in real time according to claim 4, wherein the determining a color matrix corresponding to the infrared image comprises:
calculating a channel component of the infrared image in each color channel of the visible light image;
determining a color matrix corresponding to the infrared image according to the channel component of the infrared image in each color channel;
when the visible light image is a three-channel image based on an RGB space, the calculating a channel component of the infrared image in each color channel of the visible light image includes:
according to the temperature distribution information represented by the infrared matrix, determining a first flag value l1, a second flag value l2 and a third flag value l3 corresponding to the infrared image, wherein l1 < l2 < l3;
according to the infrared matrix, the first flag value l1, the second flag value l2 and the third flag value l3, calculating the channel component of the infrared image in each color channel of the visible light image:
(the three channel-component formulas, one per RGB channel, are rendered as equation images in the original publication)
wherein imgTC_r(i, j), imgTC_g(i, j), and imgTC_b(i, j) are respectively used to represent the channel components of the infrared image in each of the color channels, and imgT(i, j) is used to represent the infrared matrix.
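The three channel-component formulas are equation images in the source, so their exact piecewise expressions are unknown; the sketch below assumes a simple piecewise-linear pseudo-color mapping driven by the flag values l1 < l2 < l3 (hot pixels toward red/yellow, cold pixels toward blue), purely for illustration.

```python
import numpy as np

def color_channels(imgT, l1, l2, l3):
    """Map a normalized temperature matrix imgT to R/G/B components using
    flag values l1 < l2 < l3. The piecewise-linear ramps are an ASSUMED
    stand-in for the claim's unreproduced formulas."""
    r = np.clip((imgT - l2) / (l3 - l2), 0.0, 1.0)  # red ramps up above l2
    g = np.clip((imgT - l1) / (l2 - l1), 0.0, 1.0)  # green ramps up above l1
    b = np.clip((l2 - imgT) / (l2 - l1), 0.0, 1.0)  # blue dominates below l2
    return r, g, b
```

Stacking the three components then gives the color matrix imgTC used by claim 4 to colorize the infrared low- and high-frequency matrices.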
6. The method for multi-channel fusion of the visible light image and the infrared image in real time according to claim 4, wherein before decomposing the visible light matrix corresponding to the visible light image for image fusion to obtain the visible light decomposition matrix corresponding to the visible light image, the method further comprises:
normalizing each element value in a visible light matrix corresponding to a visible light image for image fusion to obtain a normalized visible light matrix corresponding to the visible light image, and updating the normalized visible light matrix into the visible light matrix;
and before decomposing the infrared matrix corresponding to the infrared image for image fusion to obtain the infrared decomposition matrix corresponding to the infrared image, the method further comprises the following steps:
and carrying out normalization processing on each element value in an infrared matrix corresponding to the infrared image for image fusion to obtain a normalized infrared matrix corresponding to the infrared image, and updating the normalized infrared matrix into the infrared matrix.
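Claim 6 does not specify the normalization scheme; min-max scaling of the element values to [0, 1] is a natural reading and is sketched below (the epsilon guard against a constant matrix is an added assumption):

```python
import numpy as np

def normalize(mat, eps=1e-12):
    """Min-max normalization of a matrix to [0, 1]. The scheme is ASSUMED;
    the claim only requires 'normalization processing' of element values."""
    m, M = float(mat.min()), float(mat.max())
    return (mat - m) / (M - m + eps)  # eps avoids division by zero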
7. The method of claim 6, wherein before normalizing each element value in the visible light matrix corresponding to the visible light image for image fusion to obtain the normalized visible light matrix corresponding to the visible light image, the method further comprises:
judging whether the visible light image is a three-channel image based on an RGB space or not according to a visible light matrix corresponding to the visible light image for image fusion;
and when the judgment result is negative, converting the visible light matrix into a three-channel visible light matrix corresponding to the RGB space based on a preset image conversion method, and updating the three-channel visible light matrix into the visible light matrix.
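The conversion in claim 7 is likewise unspecified; when the source is a single-channel (grayscale) matrix, the simplest "preset image conversion method" is to replicate it across the three RGB channels, as sketched below (channel stacking is an assumption; a color-space conversion would also satisfy the claim):

```python
import numpy as np

def to_three_channel(imgC):
    """If the visible light matrix is 2-D (not a three-channel RGB image),
    replicate it into three identical channels. Stacking is an ASSUMED
    choice of conversion method."""
    if imgC.ndim == 2:                       # judgment result is negative
        return np.stack([imgC] * 3, axis=-1)
    return imgC                              # already three-channel
```

After this step every downstream operation can assume a (height, width, 3) visible light matrix.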
8. A device for fusing a visible light image and an infrared image in real time in a multi-channel mode is characterized by comprising:
the decomposition module is used for decomposing a visible light matrix corresponding to a visible light image for image fusion to obtain a visible light low-frequency matrix corresponding to the visible light image and a visible light high-frequency matrix corresponding to the visible light image, and decomposing an infrared matrix corresponding to an infrared image for image fusion to obtain an infrared low-frequency matrix corresponding to the infrared image and an infrared high-frequency matrix corresponding to the infrared image, wherein each element value in the visible light matrix corresponds to a color quantized value of one pixel point in the visible light image, and each element value in the infrared matrix corresponds to a temperature quantized value of one pixel point in the infrared image;
the fusion module is used for fusing the visible light low-frequency matrix and the infrared low-frequency matrix according to the determined visible light low-frequency weight corresponding to the visible light image and the determined infrared low-frequency weight corresponding to the infrared image to obtain a low-frequency fusion matrix, and fusing the visible light high-frequency matrix and the infrared high-frequency matrix according to the determined visible light high-frequency weight corresponding to the visible light image and the determined infrared high-frequency weight corresponding to the infrared image to obtain a high-frequency fusion matrix;
the generating module is used for fusing the low-frequency fusion matrix and the high-frequency fusion matrix based on a preset fusion mode to generate a fusion matrix corresponding to the visible light image and the infrared image, and the fusion matrix is used for generating a fusion image of the visible light image and the infrared image.
9. A device for fusing a visible light image and an infrared image in real time in a multi-channel mode is characterized by comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the real-time visible light image and infrared image multi-channel fusion method according to any one of claims 1 to 7.
10. A computer storage medium, characterized in that the computer storage medium stores computer instructions which, when invoked, are used to perform the real-time visible light image and infrared image multi-channel fusion method according to any one of claims 1 to 7.
CN202210585859.8A 2022-05-27 2022-05-27 Real-time visible light image and infrared image multi-channel fusion method and device Active CN114677316B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210585859.8A CN114677316B (en) 2022-05-27 2022-05-27 Real-time visible light image and infrared image multi-channel fusion method and device


Publications (2)

Publication Number Publication Date
CN114677316A true CN114677316A (en) 2022-06-28
CN114677316B CN114677316B (en) 2022-11-25

Family

ID=82079227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210585859.8A Active CN114677316B (en) 2022-05-27 2022-05-27 Real-time visible light image and infrared image multi-channel fusion method and device

Country Status (1)

Country Link
CN (1) CN114677316B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115147325A (en) * 2022-09-05 2022-10-04 深圳清瑞博源智能科技有限公司 Image fusion method, device, equipment and storage medium
CN116503454A (en) * 2023-06-27 2023-07-28 季华实验室 Infrared and visible light image fusion method and device, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780392A (en) * 2016-12-27 2017-05-31 浙江大华技术股份有限公司 A kind of image interfusion method and device
CN107657217A (en) * 2017-09-12 2018-02-02 电子科技大学 The fusion method of infrared and visible light video based on moving object detection
CN109801250A (en) * 2019-01-10 2019-05-24 云南大学 Infrared and visible light image fusion method based on ADC-SCM and low-rank matrix expression
CN111462028A (en) * 2020-03-16 2020-07-28 中国地质大学(武汉) Infrared and visible light image fusion method based on phase consistency and target enhancement
CN112233053A (en) * 2020-09-23 2021-01-15 浙江大华技术股份有限公司 Image fusion method, device, equipment and computer readable storage medium
WO2021077706A1 (en) * 2019-10-21 2021-04-29 浙江宇视科技有限公司 Image fusion method and apparatus, storage medium, and electronic device
CN114511484A (en) * 2021-12-29 2022-05-17 浙江大学 Infrared and color visible light image rapid fusion method based on multi-level LatLRR


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BOYANG CHENG ET AL.: "A novel fusion framework of visible light and infrared images based on singular value decomposition and adaptive DUAL-PCNN in NSST domain", Infrared Physics & Technology *
CAI Kaili et al.: "Research on Fusion Algorithm of Infrared Image and Visible Light Image", Journal of Shenyang Ligong University *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115147325A (en) * 2022-09-05 2022-10-04 深圳清瑞博源智能科技有限公司 Image fusion method, device, equipment and storage medium
CN116503454A (en) * 2023-06-27 2023-07-28 季华实验室 Infrared and visible light image fusion method and device, electronic equipment and storage medium
CN116503454B (en) * 2023-06-27 2023-10-20 季华实验室 Infrared and visible light image fusion method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114677316B (en) 2022-11-25

Similar Documents

Publication Publication Date Title
CN114677316B (en) Real-time visible light image and infrared image multi-channel fusion method and device
Emberton et al. Underwater image and video dehazing with pure haze region segmentation
Negru et al. Exponential contrast restoration in fog conditions for driving assistance
KR20210149848A (en) Skin quality detection method, skin quality classification method, skin quality detection device, electronic device and storage medium
Li et al. A multi-scale fusion scheme based on haze-relevant features for single image dehazing
CN110458787B (en) Image fusion method and device and computer storage medium
Pei et al. Effective image haze removal using dark channel prior and post-processing
CN110047059B (en) Image processing method and device, electronic equipment and readable storage medium
CN107465911A (en) A kind of extraction of depth information method and device
CN111179202A (en) Single image defogging enhancement method and system based on generation countermeasure network
Hussein et al. Retinex theory for color image enhancement: A systematic review
CN114022397B (en) Endoscope image defogging method and device, electronic equipment and storage medium
CN111415304A (en) Underwater vision enhancement method and device based on cascade deep network
Mei et al. Single image dehazing using dark channel fusion and haze density weight
CN107909542A (en) Image processing method, device, computer-readable recording medium and electronic equipment
CN113344796A (en) Image processing method, device, equipment and storage medium
CN114140481A (en) Edge detection method and device based on infrared image
CN116823674B (en) Cross-modal fusion underwater image enhancement method
CN109754372A (en) A kind of image defogging processing method and processing device
CN107424134B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
JP5203159B2 (en) Image processing method, image processing system, and image processing program
JP4742068B2 (en) Image processing method, image processing system, and image processing program
CN112700396A (en) Illumination evaluation method and device for face picture, computing equipment and storage medium
CN116109511A (en) Method and system for infrared image edge enhancement
CN110610468B (en) Method, device, equipment and storage medium for identifying hairs based on skin mirror image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 518066 Qianhai Shimao financial center phase II, No. 3040, Xinghai Avenue, Nanshan street, Shenzhen Hong Kong cooperation zone, Nanshan District, Shenzhen, Guangdong Province 2005

Patentee after: Shenzhen Dingjiang Technology Co.,Ltd.

Address before: 518066 Qianhai Shimao financial center phase II, No. 3040, Xinghai Avenue, Nanshan street, Shenzhen Hong Kong cooperation zone, Nanshan District, Shenzhen, Guangdong Province 2005

Patentee before: SHENZHEN DINGJIANG TECHNOLOGY CO.,LTD.