CN114519808A - Image fusion method, device and equipment and storage medium

Image fusion method, device and equipment and storage medium

Info

Publication number
CN114519808A
CN114519808A (application CN202210157058.1A)
Authority
CN
China
Prior art keywords
image
fusion
determining
infrared
visible light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210157058.1A
Other languages
Chinese (zh)
Inventor
赵尧
于洪英
闫奇
顾建超
Current Assignee
Iray Technology Co Ltd
Original Assignee
Iray Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Iray Technology Co Ltd filed Critical Iray Technology Co Ltd
Priority to CN202210157058.1A
Publication of CN114519808A
Related PCT application PCT/CN2022/094865 (published as WO2023155324A1)
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques

Abstract

The embodiment of the application provides an image fusion method, an image fusion apparatus, an image processing device and a computer-readable storage medium. The image fusion method comprises: acquiring a visible light image and an infrared image synchronously acquired for a target field of view; binarizing the infrared image to obtain a mask image, and determining a target fusion area according to the mask image; and fusing the infrared image with the visible light image based on the target fusion area to obtain a fused image. By determining the target fusion area, the part of the infrared image containing effective information can be extracted and fused with the visible light image. This effectively prevents useless information in the fused image from degrading image quality, reduces invalid information in the image, reduces the amount of computation and complexity, and improves the real-time performance of the system.

Description

Image fusion method, device and equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image fusion method and apparatus, an image processing device, and a computer-readable storage medium.
Background
Images are mainly divided into visible light images and infrared images. Visible light images are rich in high-frequency detail and reflect the overall detail characteristics of a shooting scene well, but when illumination conditions are poor, image quality degrades and the target to be detected and the environmental background become blurred. Infrared imaging mainly displays the shape and outline of an object through the intensity of its thermal radiation; it adapts well to weather and illumination and is particularly good at detecting hidden heat-source targets, such as camouflaged enemies, weapons and other military targets, but suffers from blurred image detail, unclear texture, little high-frequency scene information, poor contrast and low definition.
To combine the advantages of visible light and infrared images, the two are fused in application scenarios to obtain a comprehensive and accurate image description of the shooting scene; the information is thus fully utilized, and the accuracy and reliability of system analysis and decision-making are improved.
Image fusion methods for fusing a visible light image and an infrared image are mainly divided into three types according to the complexity of information processing during fusion: pixel-level fusion, feature-level fusion and decision-level fusion. Pixel-level fusion obtains the fused image by operating on image pixels; its advantage is that more information of the original images is retained, while its disadvantages are that the pixel information of the images must be traversed and analyzed, the amount and complexity of data computation are large, and system real-time performance is low. Feature-level fusion extracts feature information such as edges, shapes, textures and pixel densities from the images to be fused, forms a multi-dimensional vector space from the extracted features, analyzes and processes the feature vectors to form a feature set of the images, trains on the feature set, and fuses the images to be fused according to the training result. At present, feature-level fusion mostly adopts artificial neural network algorithms; its advantages are high processing speed and small computation, while its disadvantages are more information loss and higher requirements on the operating system. Decision-level fusion first performs feature extraction, target feature recognition and decision classification on the images to be fused and establishes a preliminary decision on the same target; the decision information of the visible light image and the infrared image is then fused by credibility according to a fusion rule, and a joint decision result is finally obtained. Current decision-level fusion methods mainly include fusion algorithms based on support vector machines, neural networks, evidence reasoning, Bayesian reasoning and fuzzy integrals; they have high complexity and higher requirements on the operating system.
Disclosure of Invention
In order to solve the existing technical problems, the application provides an image fusion method, an image fusion device, an image processing device and a computer-readable storage medium, which can reduce invalid information in an image, reduce the amount of computation and complexity, and improve the real-time performance of a system.
In order to achieve the above purpose, the technical solution of the embodiment of the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image fusion method, which is applied to an image processing device, and includes:
acquiring a visible light image and an infrared image synchronously acquired for a target field of view;
performing binarization on the infrared image to obtain a mask image, and determining a target fusion area according to the mask image;
and fusing the infrared image with the visible light image based on the target fusion area to obtain a fused image.
In a second aspect, an embodiment of the present application provides an image fusion apparatus, including:
the acquisition module is used for acquiring a visible light image and an infrared image synchronously acquired for a target field of view;
the fusion region determining module is used for binarizing the infrared image to obtain a mask image and determining a target fusion region according to the mask image;
And the fusion module is used for fusing the infrared image with the visible light image based on the target fusion area to obtain a fused image.
In a third aspect, an embodiment of the present application provides an image processing device, which includes a processor, a memory connected to the processor, and a computer program stored on the memory and executable by the processor; when executed by the processor, the computer program implements the image fusion method according to any embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the image fusion method according to any embodiment of the present application.
In the above embodiment, the mask image is obtained by binarizing the infrared image, the target fusion area is determined according to the mask image, and the infrared image is fused with the visible light image based on the target fusion area to obtain the fused image. By determining the target fusion area, the part of the infrared image containing effective information can be extracted and fused with the visible light image. This effectively prevents useless information in the fused image from degrading image quality, reduces invalid information in the image, reduces the amount of computation and complexity, and improves the real-time performance of the system.
In the above embodiments, the image fusion apparatus, the image processing device and the computer-readable storage medium belong to the same concept as the corresponding image fusion method embodiments and therefore have the same technical effects, which are not repeated here.
Drawings
Fig. 1 is a schematic view of an application scenario of an image fusion method in an embodiment;
FIG. 2 is a flow diagram of a method for image fusion in an embodiment;
FIG. 3 is a flow chart of a method of image fusion in another embodiment;
FIG. 4 is a flowchart of an image fusion method in yet another embodiment;
FIG. 5 is a schematic diagram of a gray level histogram of an infrared image in an example;
FIG. 6 is a diagram illustrating an example of a unimodal distribution of gray level histogram data;
FIG. 7 is a comparison of fusion results obtained by the triangle method, the Gaussian method and the Otsu method when the gray level histogram of the infrared image is in unimodal distribution;
FIG. 8 is a comparison of fusion results obtained by the triangle method, the Gaussian method and the Otsu method when the gray level histogram of the infrared image is approximately uniformly distributed;
FIG. 9 is a comparison of fusion results obtained by the triangle method, the Gaussian method and the Otsu method when the gray level histogram of the infrared image is in bimodal distribution;
FIG. 10 is a flow diagram of an alternative embodiment of an image fusion method;
FIG. 11 is a schematic illustration of an infrared image employed in the embodiment shown in FIG. 10;
FIG. 12 is a schematic illustration of a visible light image employed in the embodiment shown in FIG. 10;
FIG. 13 is a schematic diagram of a fused image obtained by fusing an infrared image and a visible light image with the image fusion method described in the present application;
FIG. 14 is a schematic diagram of a fused image obtained with the known low-rank representation principle for fusing infrared and visible light images;
FIG. 15 is a schematic diagram of a fused image obtained with the known non-subsampled shearlet transform principle for fusing infrared and visible light images;
FIG. 16 is a schematic diagram of a fused image obtained with the known non-subsampled contourlet transform principle for fusing infrared and visible light images;
FIG. 17 is a schematic diagram of a fused image obtained with the known Poisson image editing principle for fusing infrared and visible light images;
FIG. 18 is a diagram illustrating an image fusion apparatus according to an embodiment;
fig. 19 is a schematic structural diagram of an image processing apparatus in an embodiment.
Detailed Description
The technical solution of the present application is further described in detail with reference to the drawings and specific embodiments of the specification.
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application is described in further detail with reference to the attached drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the following description, the expression "some embodiments" describes a subset of all possible embodiments; note that "some embodiments" may refer to the same subset or to different subsets of all possible embodiments, and these may be combined with each other where there is no conflict.
In the following description, the terms "first", "second" and "third" are only used to distinguish similar items and do not denote a particular order; it is understood that the specific order or sequence may be interchanged where appropriate, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
Referring to fig. 1, a schematic view of an optional application scenario of the image fusion method according to the embodiment of the present application is shown, where an image processing device 11 includes a processor 12, a memory 13 connected to the processor 12, a visible light shooting module 14 and an infrared shooting module 15. The image processing device 11 synchronously acquires a visible light image and an infrared image in real time through the visible light shooting module 14 and the infrared shooting module 15 and sends them to the processor 12. A computer program implementing the image fusion method provided by the embodiment of the application is stored in the memory 13; through this computer program, the processor 12 obtains a mask image by binarizing the infrared image, determines a target fusion area from the mask image, and fuses the infrared image with the visible light image based on the target fusion area to obtain a fused image. The image processing device 11 may be any of various intelligent terminals integrating the visible light shooting module 14 and the infrared shooting module 15 and having storage and processing functions, such as a security monitoring device or a vehicle-mounted device; it may also be a computing device connected with the visible light shooting module 14 and the infrared shooting module 15; it may also be a dual-spectrum fusion sighting device combining white light and infrared.
Referring to fig. 2, an image fusion method provided in an embodiment of the present application may be applied to the image processing device shown in fig. 1. The image fusion method comprises the following steps:
s101, acquiring a visible light image and an infrared image which are synchronously acquired aiming at a target field of view.
The visible light image and the infrared image are acquired synchronously for the target field of view, so that both contain imaging of objects in the same target field of view. Optionally, the image processing device includes a visible light shooting module and an infrared shooting module, and acquiring the visible light image and the infrared image comprises: the image processing device simultaneously collects the visible light image and the infrared image through the visible light shooting module and the infrared shooting module, and sends them to the processor. In other optional embodiments, the image processing device does not include an image capturing module, and acquiring the images comprises: the image processing device receives the visible light image and the infrared image, synchronously collected for the target field of view, from another intelligent device with visible light and infrared shooting functions, such as an infrared detector, a mobile phone terminal or a cloud.
S103, binarizing the infrared image to obtain a mask image, and determining a target fusion area according to the mask image.
Binarizing the infrared image means assigning a new gray value to each pixel point of the infrared image so as to obtain a binarized image that can reflect the overall and local characteristics of the image.
In an optional embodiment, in step S103, binarizing the infrared image to obtain a mask image, and determining a target fusion region according to the mask image, includes:
comparing the gray value of each pixel point in the infrared image with a binarization threshold value, setting the gray value of the pixel point with the gray value smaller than the binarization threshold value as a first set value, and setting the gray value of the pixel point with the gray value larger than or equal to the binarization threshold value as a second set value to obtain a mask image;
and selecting at least one part of the pixel point distribution region of the second set value in the mask image as a target fusion region.
The binarization threshold may be preset, or may be calculated from the distribution characteristics of the gray values of the pixel points in the infrared image. The first set value and the second set value may be the minimum and maximum of the gray value interval, respectively, or two gray values in the interval close to the minimum and maximum. In a specific example, the first set value is 0 and the second set value is 255, so that the whole image exhibits a black-and-white effect. Binarizing the infrared image to obtain the mask image then comprises: performing binarization on the 256-level grayscale infrared image with the binarization threshold, comparing the gray value of each pixel point with the threshold, setting the gray value of pixel points below the threshold to 0 and the gray value of pixel points at or above the threshold to 255, thereby obtaining a binarized image, namely the mask image, that reflects the overall and local characteristics of the image. Correspondingly, the target fusion area may be determined from the distribution region of second-set-value pixel points in the mask image, i.e. the white part; for example, all of the white part of the mask image may be selected as the target fusion area, or only a certain portion of it.
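As a concrete illustration of the mask generation described above, the following is a minimal sketch in Python with OpenCV and NumPy; the function names and the use of cv2.threshold are illustrative assumptions rather than the patent's mandated implementation (note that OpenCV's THRESH_BINARY assigns the second value to pixels strictly greater than the threshold, which differs negligibly from the "greater than or equal" rule above).

```python
import cv2
import numpy as np

def make_mask(ir_gray: np.ndarray, binarization_threshold: int) -> np.ndarray:
    """Binarize an 8-bit infrared image: gray values below the threshold
    become 0 (first set value), the rest become 255 (second set value)."""
    _, mask = cv2.threshold(ir_gray, binarization_threshold, 255,
                            cv2.THRESH_BINARY)
    return mask

def target_fusion_region(mask: np.ndarray) -> np.ndarray:
    """Select the white part of the mask (pixels at the second set value)
    as the target fusion region, here as a boolean index map."""
    return mask == 255
```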
And S105, fusing the infrared image with the visible light image based on the target fusion area to obtain a fused image.
Fusing the infrared image with the visible light image based on the target fusion area to obtain a fused image may mean fusing the image parts of the infrared image and the visible light image corresponding to the target fusion area while retaining the visible light image elsewhere; or it may mean extracting the target fusion area of the infrared image to form an image to be fused and fusing that image with the visible light image, and so on.
In the above embodiment, the mask image is obtained by binarizing the infrared image, the target fusion area is determined according to the mask image, and the infrared image is fused with the visible light image based on the target fusion area to obtain the fused image. By determining the target fusion area, the part of the infrared image containing effective information can be extracted and fused with the visible light image. This effectively prevents useless information in the fused image from degrading image quality, reduces invalid information in the image, reduces the amount of computation and complexity, and improves the real-time performance of the system.
Optionally, referring to fig. 3, fusing the infrared image with the visible light image based on the target fusion area in S105 to obtain a fused image comprises:
s1051, channel separation is carried out on the infrared image and the visible light image respectively, and two separated brightness channel components representing the brightness of the image are fused according to the target fusion area, so that a brightness channel fusion image is obtained.
For a digital image, what the human eye observes is a picture, but to a computer a digital image is an array of points with different brightness. For example, a digital image of size M × N can be represented by an M × N matrix, where the value of each element represents the brightness of the corresponding pixel point; a larger pixel value means a brighter point. In general, a grayscale image can be represented by a two-dimensional matrix, and a color image by a three-dimensional matrix (M × N × 3), i.e. a multi-channel image.
The hue and color of an image can be changed through its channels; for example, if only the red channel is kept, the image retains only the elements and information of red. Each single channel can be displayed as a grayscale image (note that a grayscale image is not a black-and-white image): the light and dark areas in the single-channel grayscale image correspond to the light and dark of that channel's color, representing the distribution of that color/light over the image.
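For instance, the channel separation described above can be observed directly; a small sketch (the file name and OpenCV's BGR channel order are assumptions):

```python
import cv2

img = cv2.imread("scene.png")      # color image, shape (M, N, 3), BGR order
b, g, r = cv2.split(img)           # three single-channel grayscale maps
cv2.imwrite("red_channel.png", r)  # bright where the red component is strong
```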
Performing channel separation on the infrared image and the visible light image and fusing the separated luminance channel components representing image brightness according to the target fusion area to obtain a luminance channel fused image may mean: separating the channels of both images, and fusing the part of the infrared image's luminance channel component corresponding to the target fusion area with the part of the visible light image's luminance channel component corresponding to the target fusion area. Fusing the luminance channel components separated from the infrared and visible light images according to the target fusion area reduces the computation required for fusion while retaining the effective information within the target fusion area.
And S1052, fusing the brightness channel fusion image and the visible light image to obtain a fusion image.
Image fusion refers to processing image data of the same target collected from multi-source channels, extracting the beneficial information of each channel to the greatest extent, and finally synthesizing a high-quality image, so as to improve the utilization of image information, the accuracy and reliability of computer interpretation, and the spatial and spectral resolution of the original images, which facilitates monitoring. The luminance channel fused image contains the effective information of the luminance channel components of both the visible light image and the infrared image; fusing it with the visible light image combines its luminance channel component with the other channel components of the visible light image to obtain the fused image.
In the above embodiment, the infrared image is binarized to obtain the mask image, the target fusion area is determined from the mask image, the infrared image and the visible light image are channel-separated, the separated luminance channel components representing image brightness are fused according to the target fusion area to obtain a luminance channel fused image, and the luminance channel fused image is fused with the visible light image to obtain the fused image. Determining the target fusion area ensures that the part of the infrared image containing effective information is extracted; the luminance channel components separated from the infrared and visible light images are fused and then combined with the visible light image, which effectively prevents useless information in the fused image from degrading image quality, reduces invalid information, reduces the amount of computation and complexity, and improves the real-time performance of the system. The fused image retains the respective advantages of the visible light image and the infrared image; whether imaging is performed under sufficient or poor illumination, the fused image highlights the target better, shows the target of interest more clearly, and is more convenient for human observation and recognition.
In some embodiments, referring to fig. 4, before the step of comparing the gray value of each pixel point in the infrared image with the binarization threshold in S103 (binarizing the infrared image to obtain a mask image and determining the target fusion area according to the mask image), the method includes:
s1031, determining a matched binarization strategy according to the distribution characteristics of the gray level histogram and the average gradient of the infrared image;
s1032, determining the binarization threshold according to the binarization strategy.
And binarizing the infrared image according to a binarization threshold value determined by the binarization strategy to obtain a mask image, and determining a target fusion area according to the mask image.
Different binarization strategies use binarization methods with different principles and suit different target images. From the distribution characteristics of the gray level histogram and the average gradient of the infrared image, it can be judged whether the gray level histogram of the infrared image is in unimodal distribution, bimodal distribution or approximately uniform distribution, so as to determine the matched binarization strategy. For example, the binarization strategies include the triangle method, the Gaussian method and the Otsu method: if the gray level histogram of the infrared image is unimodal, the triangle method applies; if it is approximately uniform, the Gaussian method applies; if it is bimodal, the Otsu method applies. After the binarization strategy suitable for the infrared image is determined, the infrared image is binarized by the corresponding strategy to obtain the mask image, and the region of the white part of the mask image is determined as the target fusion area.
In the above embodiment, the distribution characteristics of the gray level histogram and the average gradient of the infrared image are analyzed to determine the binarization strategy adapted to the infrared image. This ensures that after binarization the image region where the target is located is more accurately binarized into the white part, so that once the target fusion area is determined from the mask image, the effective information of the images is retained more completely and comprehensively when the luminance channel components separated from the visible light and infrared images are fused according to the target fusion area.
And S1031, determining a matched binarization strategy according to the distribution characteristics of the gray level histogram and the average gradient of the infrared image, wherein the step comprises the following steps:
judging whether the gray level histogram of the infrared image is in unimodal distribution or not according to the distribution characteristics of the gray level histogram of the infrared image;
if yes, determining that the matched binarization strategy is a triangle method;
if not, determining that the matched binarization strategy is a Gaussian method or an Otsu method according to the comparison result of the average gradient of the infrared image and the average gradient of the visible light image.
In determining the binarization strategy matched to the infrared image, it is first judged from the distribution characteristics of the infrared image's histogram whether the triangle method is suitable; if not, the Gaussian method or the Otsu method is selected according to the average gradient. Judging whether the triangle method suits the current infrared image means using the gray level histogram data to judge whether the histogram is unimodal; the triangle method assumes that the maximum peak of the gray level histogram is near the brightest side and on this basis finds the optimal threshold for binarizing the infrared image. If the histogram data show that the unimodal condition is not satisfied, the comparison between the average gradient of the infrared image and that of the visible light image is used to judge whether the histogram is approximately uniform or bimodal, and thus whether the Gaussian method or the Otsu method is suitable.
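A sketch of this selection logic follows, under stated assumptions: the preset value 10 is taken from the example given later in this description, the mode/mean test and the gradient comparison follow the text above, and avg_gradient implements the average-gradient formula (formula two) given below; all names are illustrative.

```python
import numpy as np

PRESET = 10  # example preset value used later in this description

def gray_mode(img: np.ndarray) -> int:
    """Gray value with the largest count in the 256-level histogram."""
    return int(np.bincount(img.ravel(), minlength=256).argmax())

def avg_gradient(img: np.ndarray) -> float:
    """Average gradient of an 8-bit image (formula two below)."""
    h = img.astype(np.float64)
    dx = h[:-1, 1:] - h[:-1, :-1]   # horizontal first difference
    dy = h[1:, :-1] - h[:-1, :-1]   # vertical first difference
    return float(np.sqrt((dx ** 2 + dy ** 2) / 2).mean())

def choose_strategy(ir: np.ndarray, vis_gray: np.ndarray) -> str:
    if abs(ir.mean() - gray_mode(ir)) <= PRESET:
        return "triangle"   # unimodal histogram
    if avg_gradient(vis_gray) >= avg_gradient(ir):
        return "gaussian"   # roughly uniform histogram
    return "otsu"           # bimodal histogram
```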
In the above embodiment, the binarization strategies are set to include the triangle method, the Gaussian method and the Otsu method, and the distribution characteristics of the gray level histogram and the average gradient of the infrared image are analyzed to determine which of the three is adapted to the infrared image. This ensures that after binarization the image region where the target is located is more accurately binarized into the white part, so that after the target fusion area is determined from the mask image, the luminance component fused image obtained by fusing the luminance channel components separated from the visible light and infrared images according to the target fusion area retains the effective information of the images more completely and comprehensively.
In some embodiments, in S1031, determining a matching binarization strategy according to the distribution characteristics of the gray level histogram and the average gradient of the infrared image includes:
determining a difference value between a gray value mode and a gray value average value of the infrared image, and if the difference value is less than or equal to a preset value, determining that a gray level histogram of the infrared image is in unimodal distribution;
determining a triangle by taking the maximum peak in the gray level histogram as a vertex;
and determining the maximum linear distance through the triangle, and determining a binarization threshold value according to the histogram gray level corresponding to the maximum linear distance.
According to the gray level histogram data of the infrared image, if the gray values are concentrated around a certain level, the difference between the gray value mode and the gray value average of the infrared image is much smaller than for infrared images with other histogram shapes. The absolute difference between the average and the mode can be denoted A-M (average minus mode); whether the gray level histogram is unimodal is judged by whether A-M is smaller than a preset value. If A-M is smaller than the preset value, the gray level histogram of the infrared image is in unimodal distribution; otherwise it does not satisfy the unimodal characteristic. As shown in fig. 5, the gray histogram data of the infrared image are used to find the optimal binarization threshold by a purely geometric method: assuming the maximum peak of the histogram is near the brightest side, the maximum straight-line distance is obtained through a triangle, and the histogram gray level corresponding to this maximum distance is taken as the segmentation threshold.
As shown in fig. 6, the gray value mode and the gray value average in the gray histogram corresponding to the infrared image are 121 and 127.734 respectively, where the gray value mode is the gray value with the largest count, and the gray value average A is calculated by formula one:

$$A = \frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N}H(i,j) \qquad \text{(formula one)}$$

where H(i, j) represents the gray value of the pixel point with coordinates (i, j), M represents the maximum value of the abscissa, and N represents the maximum value of the ordinate. Assuming the preset value is 10, if the difference A-M between the gray value average and the gray value mode is less than 10, the infrared image is binarized by the triangle method: a triangle is determined with the maximum peak of the gray histogram as a vertex, the maximum straight-line distance is determined through the triangle, and the binarization threshold is determined from the histogram gray level corresponding to that maximum distance.
In the above embodiment, the difference between the gray value mode and the gray value average of the histogram is calculated, and its size relative to the preset value measures whether the gray values are concentrated around a certain level, so as to judge whether the gray histogram of the infrared image is unimodal and thus determine the distribution characteristics of the histogram quickly and accurately.
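OpenCV ships a triangle-method threshold, so a sketch of this step can lean on it; the flag combination below is standard OpenCV usage, and the variable names are assumptions.

```python
import cv2

# `ir` is the 8-bit single-channel infrared image.
thresh, mask = cv2.threshold(ir, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_TRIANGLE)
# `thresh` is the histogram gray level at the maximum straight-line
# distance between the histogram and the chord drawn to its highest peak.
```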
Referring to fig. 7, which compares fusion results when the gray level histogram of the infrared image is unimodal and the infrared image is binarized into a mask image by the triangle method, the Gaussian method and the Otsu method respectively before fusion with the visible light image: the mask image obtained by binarizing the infrared image with the threshold determined by the triangle method highlights the image target more comprehensively and completely, and the triangle-method fused image obtained by fusing the infrared image with the visible light image loses the least effective information.
In some embodiments, the S1031, determining the matched binarization strategy according to the distribution characteristics of the gray histogram and the average gradient of the infrared image, further includes:
if the difference value is larger than the preset value, determining a first average gradient of the infrared image and a second average gradient of the visible light image;
and if the second average gradient is larger than or equal to the first average gradient, calculating a Gaussian average value of the gray value of the infrared image in the target window function, and determining a binarization threshold value according to the Gaussian average value.
If the difference A-M between the gray value average and the gray value mode is greater than the preset value, the gray histogram of the infrared image does not satisfy the unimodal characteristic, and whether it is approximately uniform or bimodal is determined by comparing the average gradient of the infrared image with that of the visible light image. For distinction, the average gradient of the infrared image is called the first average gradient and that of the visible light image the second average gradient. The average gradient of an image can be calculated by formula two:

$$AG = \frac{1}{(M-1)(N-1)}\sum_{i=1}^{M-1}\sum_{j=1}^{N-1}\sqrt{\frac{\big(H(i+1,j)-H(i,j)\big)^{2}+\big(H(i,j+1)-H(i,j)\big)^{2}}{2}} \qquad \text{(formula two)}$$

where H(i, j) represents the gray value of the pixel point with coordinates (i, j), M represents the maximum value of the abscissa, and N represents the maximum value of the ordinate. If the second average gradient is greater than or equal to the first average gradient, the gray histogram is approximately uniformly distributed, and the binarization strategy applicable to the current infrared image is the Gaussian method. The Gaussian method determines the binarization threshold by calculating the Gaussian mean of the image gray values within a window function and using this mean as the binarization threshold for the corresponding part of the image; binarization is thus realized with local thresholds. By optimizing the scale of the window function, the parts to be fused and not to be fused within each window are determined, so that the image regions of the targets corresponding to multiple gray levels in the infrared image are each binarized into white parts of the mask image. In this way, for an infrared image whose gray values are concentrated around several levels rather than a single one, after the target fusion area is determined from the white parts of the mask image, the luminance component fused image obtained by fusing the luminance channel components separated from the visible light and infrared images according to the target fusion area retains the effective information of the images more completely and comprehensively.
In the above embodiment, the relative sizes of the average gradient of the visible light image and that of the infrared image are used to judge whether the gray values are approximately uniformly distributed, so that whether the Gaussian method suits the infrared image can be judged quickly and accurately. When the average gradient of the visible light image is greater than that of the infrared image, the visible light image is clear and contains most of the effective information; obtaining the binary image by the Gaussian method then ensures that the image regions where the individual targets are located are more accurately binarized into white parts, so that after the target fusion area is determined from the mask image, the luminance component fused image obtained by fusing the luminance channel components separated from the visible light and infrared images according to the target fusion area retains the effective information of the images more completely and comprehensively.
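A sketch of Gaussian local thresholding with OpenCV; the window size (blockSize) and offset C are tuning assumptions and correspond to the window-function scale optimization mentioned above.

```python
import cv2

# `ir` is the 8-bit single-channel infrared image.
mask = cv2.adaptiveThreshold(ir, 255,
                             cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY,
                             blockSize=31,  # odd window size: the "scale"
                             C=0)           # offset from the Gaussian mean
```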
Referring to fig. 8, which compares fusion results when the gray level histogram of the infrared image is approximately uniformly distributed and the infrared image is binarized into a mask image by the triangle method, the Gaussian method and the Otsu method respectively before fusion with the visible light image: the mask image obtained with the threshold determined by the Gaussian method is clearer and its target contours more prominent, and the Gaussian-method fused image obtained by fusing the infrared image with the visible light image loses the least effective information.
In some embodiments, the determining a matching binarization strategy according to distribution characteristics of a gray level histogram and an average gradient of the infrared image further includes:
if the second average gradient is smaller than the first average gradient, the infrared image is divided into a foreground image and a background image;
and determining a binarization threshold value according to the between-class variance values of the foreground image and the background image.
If the difference A-M between the gray value average and the gray value mode is greater than the preset value, the gray histogram of the infrared image does not satisfy the unimodal characteristic, and whether it is approximately uniform or bimodal is determined by comparing the average gradient of the infrared image with that of the visible light image. For distinction, the average gradient of the infrared image is called the first average gradient and that of the visible light image the second average gradient. If the second average gradient is smaller than the first average gradient, the gray histogram is in bimodal distribution, and the binarization strategy applicable to the current infrared image is the Otsu method. The Otsu method determines the binarization threshold by dividing the image into two parts, background and target, according to its gray characteristics. The larger the between-class variance of background and target, the larger the difference between the two parts of the image; when part of the target is mistaken for background or part of the background is mistaken for target, this difference decreases, so threshold segmentation that maximizes the between-class variance minimizes the probability of misclassification.
In the above embodiment, whether the gray level histogram is bimodal is judged from the relative sizes of the average gradients of the visible light and infrared images, so that whether the Otsu method suits the infrared image can be judged quickly and accurately. The Otsu method divides the infrared image into a foreground image and a background image, where the foreground contains the main information, such as the energy radiated outward by people and other heat-emitting targets, and the gray difference between a foreground pixel and its neighbors is relatively large. Therefore, when the average gradient of the infrared image is greater than that of the visible light image, it is more suitable to determine the area containing effective information from the foreground of the infrared image and to obtain the binary image by the Otsu method. This ensures that after binarization the image region where the target is located is more accurately binarized into the white part, so that after the target fusion area is determined from the mask image, the luminance component fused image obtained by fusing the luminance channel components separated from the visible light and infrared images according to the target fusion area retains the effective information of the images more completely and comprehensively.
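Otsu's method is likewise built into OpenCV; a minimal sketch (variable names assumed):

```python
import cv2

# `ir` is the 8-bit single-channel infrared image.
thresh, mask = cv2.threshold(ir, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# `thresh` maximizes the between-class variance of foreground/background.
```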
Please refer to fig. 9, which compares fusion results when the gray level histogram of the infrared image is bimodal and the infrared image is binarized into a mask image by the triangle method, the Gaussian method and the Otsu method respectively before fusion with the visible light image: the mask image obtained with the threshold determined by the Otsu method is clearer and the contours of the multiple targets more prominent, and the Otsu-method fused image obtained by fusing the infrared image with the visible light image loses the least effective information.
In some embodiments, performing channel separation on the infrared image and the visible light image and fusing the separated luminance channel components representing image brightness according to the target fusion area to obtain a luminance channel fused image includes:
performing HSI channel separation on the infrared image and the visible light image respectively, and fusing the two separated I channel components according to the target fusion area and the Poisson image editing principle to obtain an I channel fused image;
the fusing the luminance channel fusion image and the visible light image to obtain a fusion image, comprising:
Combining the I channel component of the I channel fusion image with the H channel component and the S channel component separated from the visible light image to obtain a fusion reference image;
and converting the fusion reference image into an RGB color space to obtain a fusion image.
Here, HSI (Hue-Saturation-Intensity) refers to a color model of digital images. The HSI color model describes the color characteristics of an image with the three parameters H, S and I: H defines the frequency of the color, called hue; S indicates the depth or shade of the color, called saturation; I denotes intensity or brightness.
Optionally, performing HSI channel separation on the infrared image and the visible light image and fusing the two I channel components corresponding to the target fusion area according to the Poisson image editing principle to obtain the I channel fused image may comprise: performing HSI channel separation on the visible light image to separate its H, S and I channel components; determining the matched binarization strategy from the distribution characteristics of the gray level histogram and the average gradient of the infrared image, binarizing the infrared image according to that strategy to obtain the mask image, and determining the target fusion area from the mask image; performing HSI channel separation on the infrared image to separate its H, S and I channel components; and fusing the I channel component of the infrared image with that of the visible light image, according to the target fusion area determined by the mask image and the Poisson image editing principle, to obtain the I channel fused image. If the infrared image and the visible light image are in a non-HSI format, the method further comprises converting them into HSI format before the HSI channel separation.
In the above embodiment, the I channel components of the visible light image and the infrared image to be fused are extracted and fused according to the target fusion area specified by the mask image, and the resulting I channel fused image is combined with the H and S channel components separated from the visible light image and then converted into the RGB color space to obtain the fused image.
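A sketch of the I-channel fusion under stated assumptions: OpenCV has no native HSI conversion, so intensity is approximated here as I = (R+G+B)/3, and cv2.seamlessClone (OpenCV's Poisson image editing) stands in for the Poisson fusion step; seamlessClone expects 3-channel 8-bit images, hence the channel replication, and the two inputs are assumed already registered.

```python
import cv2
import numpy as np

def intensity(img_bgr: np.ndarray) -> np.ndarray:
    """I channel of the HSI model, approximated as I = (R + G + B) / 3."""
    return img_bgr.mean(axis=2).astype(np.uint8)

def fuse_i_channels(ir_bgr: np.ndarray, vis_bgr: np.ndarray,
                    mask: np.ndarray) -> np.ndarray:
    i_ir = cv2.merge([intensity(ir_bgr)] * 3)   # seamlessClone needs 3 ch
    i_vis = cv2.merge([intensity(vis_bgr)] * 3)
    h, w = mask.shape[:2]
    center = (w // 2, h // 2)                   # aligned images: clone in place
    fused = cv2.seamlessClone(i_ir, i_vis, mask, center, cv2.NORMAL_CLONE)
    return fused[:, :, 0]                       # back to a single I channel
```

Recombining this fused I channel with the H and S channels of the visible light image and converting back to RGB completes the fusion, as the end-to-end sketch after step S18 below illustrates.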
In order to more fully understand the image fusion method provided in the embodiment of the present application, please refer to fig. 10 to fig. 12, which illustrate an alternative example.
S11, reading the infrared image and the visible light image; as shown in fig. 11 and fig. 12, an infrared image IR_1 and a visible light image VIS_2;
S12, selecting an adaptive threshold binarization strategy matched with the infrared image, and binarizing the infrared image; the adaptive threshold binarization strategies include the triangle method, the Gaussian method and the Otsu method, and the adaptive binarization strategy is selected as follows:
S121, calculating the difference A-M between the gray value average and the gray value mode from the gray value histogram data of the infrared image;
S122, judging whether the A-M value is larger than the preset value;
S123, if the A-M value is smaller than or equal to the preset value, the gray level histogram is in unimodal distribution, and the infrared image is binarized by the triangle method;
S124, if the A-M value is larger than the preset value, calculating the average gradient AG1 of the infrared image and the average gradient AG2 of the visible light image;
S125, judging whether the difference between AG1 and AG2 is larger than 0;
S126, if the difference between AG1 and AG2 is less than or equal to 0, the gray level histogram is approximately uniformly distributed, and the infrared image is binarized by the Gaussian method;
S127, if the difference between AG1 and AG2 is larger than 0, the gray level histogram is in bimodal distribution, and the infrared image is binarized by the Otsu method;
S13, generating a mask image from the binarized image, and defining the region to be fused according to the mask image;
S14, performing HSI channel separation on the infrared image to obtain an H channel component, an S channel component and an I channel component of the infrared image;
S15, performing HSI channel separation on the visible light image to obtain an H channel component, an S channel component and an I channel component of the visible light image;
S16, fusing the I channel components of the infrared image and the visible light image within the region to be fused specified by the mask image, according to the Poisson image editing principle;
S17, merging the I channel of the fused image with the H and S channels of the visible light image;
and S18, converting the combined image into RGB color space to obtain a fusion image.
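Putting the pieces together, here is an end-to-end sketch of S11-S18; choose_strategy and fuse_i_channels are the illustrative helpers sketched earlier in this description, and HSV's V channel is used as a stand-in for HSI's I so OpenCV's converters can be reused, an approximation rather than the patent's exact color model.

```python
import cv2
import numpy as np

def fuse(ir_bgr: np.ndarray, vis_bgr: np.ndarray) -> np.ndarray:
    ir = cv2.cvtColor(ir_bgr, cv2.COLOR_BGR2GRAY)           # S11
    vis_gray = cv2.cvtColor(vis_bgr, cv2.COLOR_BGR2GRAY)

    strategy = choose_strategy(ir, vis_gray)                # S121-S127
    if strategy == "triangle":                              # S123
        _, mask = cv2.threshold(ir, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_TRIANGLE)
    elif strategy == "gaussian":                            # S126
        mask = cv2.adaptiveThreshold(ir, 255,
                                     cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                     cv2.THRESH_BINARY, 31, 0)
    else:                                                   # S127, Otsu
        _, mask = cv2.threshold(ir, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    mask3 = cv2.merge([mask] * 3)                           # S13
    i_fused = fuse_i_channels(ir_bgr, vis_bgr, mask3)       # S14-S16

    hsv = cv2.cvtColor(vis_bgr, cv2.COLOR_BGR2HSV)          # S17
    hsv[:, :, 2] = i_fused                                  # replace V (≈ I)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)             # S18
```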
The image fusion method provided by the embodiment at least has the following characteristics:
firstly, the infrared image is binarized with a matched binarization strategy, and the region to be fused is determined from the binarized image; locking the region to be fused reduces the amount of fusion computation, improves fusion efficiency, and effectively retains the effective information in the images;
secondly, a method is provided for selecting the matched binarization strategy by determining, from the distribution characteristics of the gray level histogram and the average gradient of the infrared image, whether the histogram is in unimodal distribution, approximately uniform distribution or bimodal distribution, so that after the region to be fused is determined from the binarization result, the effective information and detail information contained in the infrared and visible light images are retained;
thirdly, the I channel components of the infrared image and the visible light image are fused, and the result is merged with the H and S channel components of the visible light image to obtain the fused image; this effectively prevents useless information in the fused image from degrading image quality, reduces invalid information in the image, reduces the amount of computation and complexity, and improves the real-time performance of the system. Fig. 13 shows the fused image obtained by fusing the infrared image and the visible light image with the image fusion method described in the present application; fig. 14 shows, for contrast, the result of fusing them with the known low-rank representation principle; fig. 15 the result with the known non-subsampled shearlet transform principle; fig. 16 the result with the known non-subsampled contourlet transform principle; and fig. 17 the result of directly fusing the whole infrared and visible light images with the known Poisson image editing principle.
The evaluation indices of the fused images corresponding to fig. 13 to fig. 17 are compared in Table One below:
[Table One: comparison of IE, SF, RMSE, SSIM and TIME for the fused images of fig. 13 to fig. 17]
Here, IE (Information Entropy) refers to information entropy; SF (Spatial Frequency) refers to spatial frequency; RMSE (Root Mean Squared Error) refers to the root mean square error; SSIM (Structural Similarity Index) refers to the structural similarity index; TIME refers to the duration of fusion processing. NSST (Non-Subsampled Shearlet Transform) refers to the non-subsampled shearlet transform; NSCT (Non-Subsampled Contourlet Transform) refers to the non-subsampled contourlet transform principle. Combining the figures with Table One, the fused image obtained by fusing the infrared image and the visible light image with the image fusion method of the present application has the smallest root mean square error, a markedly reduced fusion processing time, a structural similarity index close to 1, and relatively large information entropy and spatial frequency; its overall image performance is clearly better than that of the fused images obtained by the other fusion methods.
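For reference, two of the table's indices can be computed as follows; these are the standard definitions of information entropy and spatial frequency, assumed (not stated in the text) to be the ones used, and 8-bit images are assumed.

```python
import numpy as np

def information_entropy(img: np.ndarray) -> float:
    """IE: Shannon entropy of the 256-level gray histogram, in bits."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def spatial_frequency(img: np.ndarray) -> float:
    """SF: root of the summed mean squared row/column first differences."""
    h = img.astype(np.float64)
    rf = np.mean((h[:, 1:] - h[:, :-1]) ** 2)  # squared row frequency
    cf = np.mean((h[1:, :] - h[:-1, :]) ** 2)  # squared column frequency
    return float(np.sqrt(rf + cf))
```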
Referring to fig. 18, in another aspect of the present application an image fusion apparatus is provided, comprising: an acquisition module 131 configured to acquire a visible light image and an infrared image synchronously acquired for a target field of view; a fusion region determining module 132 configured to binarize the infrared image to obtain a mask image and determine a target fusion area according to the mask image; and a fusion module 134 configured to fuse the infrared image with the visible light image based on the target fusion area to obtain a fused image.
Optionally, the fusion module 134 is specifically configured to perform channel separation on the infrared image and the visible light image, and fuse the separated luminance channel components representing the luminance of the image according to the target fusion region to obtain a luminance channel fusion image; and fusing the brightness channel fusion image and the visible light image to obtain a fusion image.
Optionally, the fusion region determining module 132 is specifically configured to compare the gray value of each pixel in the infrared image with a binarization threshold, to set the gray values of pixels below the binarization threshold to a first set value and the gray values of pixels at or above the threshold to a second set value to obtain a mask image, and to select at least part of the region in the mask image where pixels take the second set value as the target fusion region.
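As a minimal sketch of this masking step, assuming 8-bit input and the common choice of 0 and 255 for the two set values (the patent leaves both values open):

```python
import numpy as np

def make_mask(ir_gray: np.ndarray, threshold: int,
              first: int = 0, second: int = 255) -> np.ndarray:
    # Pixels below the threshold take the first set value; the rest
    # take the second set value.
    return np.where(ir_gray < threshold, first, second).astype(np.uint8)

# The target fusion region is then (a subset of) the pixels at `second`:
# region = make_mask(ir_gray, t) == 255
```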
Optionally, the fusion region determining module 132 is further configured to determine a matched binarization strategy according to a distribution characteristic of a gray level histogram and an average gradient of the infrared image; and determining a binarization threshold value according to the binarization strategy.
Optionally, the fusion region determining module 132 is further configured to judge, according to the distribution characteristics of the gray level histogram of the infrared image, whether the histogram is in unimodal distribution; if so, to determine the matched binarization strategy to be a triangle method; if not, to determine the matched binarization strategy to be a Gaussian method or an Otsu method according to a comparison of the average gradient of the infrared image with that of the visible light image.
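The selection logic can be sketched as follows. The unimodality test uses the mode-versus-mean criterion detailed in the next paragraph; the tolerance value `unimodal_tol` (the patent's unspecified "preset value") and the average-gradient formula are our assumptions.

```python
import numpy as np

def average_gradient(img: np.ndarray) -> float:
    gy, gx = np.gradient(img.astype(np.float64))
    return float(np.sqrt((gx ** 2 + gy ** 2) / 2).mean())

def choose_strategy(ir_gray: np.ndarray, vis_gray: np.ndarray,
                    unimodal_tol: float = 10.0) -> str:
    hist, _ = np.histogram(ir_gray, bins=256, range=(0, 256))
    mode = int(hist.argmax())                  # most frequent gray level
    if abs(mode - float(ir_gray.mean())) <= unimodal_tol:
        return "triangle"                      # unimodal histogram
    if average_gradient(vis_gray) >= average_gradient(ir_gray):
        return "gaussian"                      # visible image has more detail
    return "otsu"                              # infrared image has more detail
```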
Optionally, the fusion region determining module 132 is further configured to determine the difference between the mode and the mean of the gray values of the infrared image and, if the difference is smaller than or equal to a preset value, to determine that the gray level histogram of the infrared image is in unimodal distribution; to construct a triangle taking the maximum peak of the gray level histogram as a vertex; and to determine the maximum straight-line distance from the triangle, the binarization threshold being determined from the histogram gray level corresponding to that maximum distance.
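This matches the classic triangle thresholding algorithm, which OpenCV implements directly; assuming the patent's variant behaves like the standard one, the threshold can be obtained as:

```python
import cv2

def triangle_threshold(ir_gray):
    thresh, _ = cv2.threshold(ir_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_TRIANGLE)
    return thresh  # histogram gray level at the maximum distance
```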
Optionally, the fusion region determining module 132 is further configured to determine a first average gradient of the infrared image and a second average gradient of the visible light image when the difference is greater than the preset value; and, if the second average gradient is greater than or equal to the first average gradient, to calculate a Gaussian mean of the infrared gray values within the target window function and to determine the binarization threshold according to that Gaussian mean.
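Reading the "Gaussian mean within the target window function" as a Gaussian-weighted local mean, this branch can be approximated with OpenCV's adaptive Gaussian thresholding, which yields the mask directly rather than a single global threshold; the window size and offset below are illustrative assumptions, not values from the patent.

```python
import cv2

def gaussian_mask(ir_gray, window=31, c=2):
    # Each pixel is compared with the Gaussian-weighted mean of its
    # window; the result is already a binary mask image.
    return cv2.adaptiveThreshold(ir_gray, 255,
                                 cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, window, c)
```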
Optionally, the fusion region determining module 132 is further configured to segment the infrared image into a foreground image and a background image if the second average gradient is smaller than the first average gradient, and to determine the binarization threshold according to the between-class variance of the foreground image and the background image.
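This is the standard Otsu criterion, which selects the threshold maximizing the between-class variance of the resulting foreground/background split; OpenCV's THRESH_OTSU performs exactly this search:

```python
import cv2

def otsu_threshold(ir_gray):
    thresh, _ = cv2.threshold(ir_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return thresh
```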
Optionally, the fusion module 134 is further configured to perform HSI channel separation on the infrared image and the visible light image respectively, and to fuse the two separated I channel components within the target fusion region according to the Poisson image editing principle to obtain an I channel fusion image; to merge the I channel component of the I channel fusion image with the H and S channel components separated from the visible light image to obtain a fusion reference image; and to convert the fusion reference image into the RGB color space to obtain the fusion image.
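An end-to-end sketch of this module follows, with two stand-ins that are our assumptions rather than the patent's method: the HSV V channel replaces the HSI I channel, and OpenCV's seamlessClone replaces the Poisson image editing step (it solves the same Poisson blending problem over the masked region).

```python
# A hedged sketch; ir_bgr and vis_bgr are registered 8-bit images of
# the same size, mask is the uint8 target-fusion-region mask (nonzero
# inside the region, and not empty).
import cv2
import numpy as np

def fuse(ir_bgr, vis_bgr, mask):
    vis_hsv = cv2.cvtColor(vis_bgr, cv2.COLOR_BGR2HSV)
    ir_i = cv2.cvtColor(ir_bgr, cv2.COLOR_BGR2GRAY)   # infrared intensity
    vis_i = vis_hsv[:, :, 2]                          # visible intensity (V ~ I)

    # seamlessClone places the bounding box of the mask region at the
    # given center point, so use the center of that box to keep the two
    # registered images pixel-aligned.
    ys, xs = np.nonzero(mask)
    center = (int((xs.min() + xs.max()) // 2), int((ys.min() + ys.max()) // 2))

    # Poisson-blend the infrared intensity into the visible intensity
    # inside the target fusion region (seamlessClone expects 3 channels).
    fused = cv2.seamlessClone(cv2.merge([ir_i] * 3), cv2.merge([vis_i] * 3),
                              mask, center, cv2.NORMAL_CLONE)

    # Recombine the fused intensity with the visible H and S channels,
    # then convert back to an RGB-family color space.
    vis_hsv[:, :, 2] = fused[:, :, 0]
    return cv2.cvtColor(vis_hsv, cv2.COLOR_HSV2BGR)
```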
It should be noted that the division into the above program modules is merely illustrative of how the image fusion apparatus of the above embodiment implements the fusion of the visible light image and the infrared image; in practical applications, the processing may be distributed across different program modules as needed, that is, the internal structure of the apparatus may be divided into different program modules to complete all or part of the method steps described above. In addition, the image fusion apparatus provided by the above embodiment and the image fusion method embodiments belong to the same concept; the specific implementation is described in detail in the method embodiments and is not repeated here.
Referring to fig. 19, which shows an optional hardware structure of the image processing device provided in the embodiments of the present application, the device includes a processor 111 and a memory 112 connected to the processor 111. The memory 112 stores various types of data supporting the operation of the image processing device, as well as a computer program implementing the image fusion method provided in any embodiment of the present application; when the computer program is executed by the processor, the steps of that method are carried out and the same technical effects are achieved, which are not repeated here to avoid repetition.
Optionally, the image processing device further includes an infrared shooting module and a visible light shooting module connected to the processor 111; the two modules synchronously capture an infrared image and a visible light image of the same target field of view as the images to be fused and send them to the processor 111.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements each process of the image fusion method embodiments described above and achieves the same technical effects, which are not repeated here to avoid repetition. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the description of the foregoing embodiments, it will be clear to those skilled in the art that the methods of the embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is the better implementation. On this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disk) and including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods of the embodiments of the present application.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any change or substitution that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall fall within this protection scope. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. An image fusion method applied to an image processing device is characterized by comprising the following steps:
acquiring a visible light image and an infrared image which are synchronously acquired aiming at a target field of view;
performing binarization on the infrared image to obtain a mask image, and determining a target fusion area according to the mask image;
and fusing the infrared image with the visible light image based on the target fusion area to obtain a fusion image.
2. The image fusion method of claim 1, wherein fusing the infrared image with the visible light image based on the target fusion region to obtain a fused image comprises:
performing channel separation on the infrared image and the visible light image respectively, and fusing two separated brightness channel components representing the brightness of the image according to the target fusion area to obtain a brightness channel fusion image;
and fusing the brightness channel fusion image with the visible light image to obtain a fusion image.
3. The image fusion method according to claim 1, wherein the binarizing the infrared image to obtain a mask image, and determining a target fusion area from the mask image comprises:
comparing the gray value of each pixel point in the infrared image with a binarization threshold value, setting the gray value of the pixel point with the gray value smaller than the binarization threshold value as a first set value, and setting the gray value of the pixel point with the gray value larger than or equal to the binarization threshold value as a second set value to obtain a mask image;
and selecting at least one part of the pixel point distribution region of the second set value in the mask image as a target fusion region.
4. The image fusion method of claim 3, wherein before comparing the gray value of each pixel point in the infrared image with a binarization threshold, the method further comprises:
determining a matched binarization strategy according to the distribution characteristics of the gray level histogram and the average gradient of the infrared image;
and determining the binarization threshold value according to the binarization strategy.
5. The image fusion method of claim 4, wherein the determining the matched binarization strategy according to the distribution characteristics of the gray level histogram and the average gradient of the infrared image comprises:
judging whether the gray level histogram of the infrared image is in unimodal distribution or not according to the distribution characteristics of the gray level histogram of the infrared image;
if yes, determining the matched binarization strategy to be a triangle method;
if not, determining that the matched binarization strategy is a Gaussian method or an Otsu method according to the comparison result of the average gradient of the infrared image and the average gradient of the visible light image.
6. The image fusion method of claim 4, wherein the determining the matched binarization strategy according to the distribution characteristics of the gray level histogram and the average gradient of the infrared image comprises:
determining a difference value between the mode of the gray values of the infrared image and the average gray value; if the difference value is smaller than or equal to a preset value, determining that the gray level histogram of the infrared image is in unimodal distribution and that the matched binarization strategy is a triangle method;
in this case, the determining the binarization threshold value according to the binarization strategy specifically comprises:
determining a triangle by taking the maximum peak in the gray level histogram as a vertex;
and determining the maximum straight line distance through the triangle, and determining a binarization threshold value according to the histogram gray level corresponding to the maximum straight line distance.
7. The image fusion method according to claim 6, wherein the determining the matched binarization strategy according to the distribution characteristics of the gray level histogram and the average gradient of the infrared image further comprises:
if the difference value is larger than the preset value, determining a first average gradient of the infrared image and a second average gradient of the visible light image;
if the second average gradient is larger than or equal to the first average gradient, determining that the matched binarization strategy is a Gaussian method;
in this case, the determining the binarization threshold value according to the binarization strategy specifically comprises: calculating a Gaussian mean value of the gray values of the infrared image within the target window function, and determining the binarization threshold value according to the Gaussian mean value.
8. The image fusion method according to claim 7, wherein the determining the matched binarization strategy according to the distribution characteristics of the gray level histogram and the average gradient of the infrared image further comprises:
if the second average gradient is smaller than the first average gradient, determining that the matched binarization strategy is Otsu method;
in this case, the determining the binarization threshold value according to the binarization strategy specifically comprises:
segmenting the infrared image into a foreground image and a background image;
and determining a binarization threshold value according to the between-class variance values of the foreground image and the background image.
9. The image fusion method according to claim 2, wherein the performing channel separation on the infrared image and the visible light image and fusing the separated brightness channel components representing the brightness of the image according to the target fusion area to obtain a brightness channel fusion image comprises:
performing HSI channel separation on the infrared image and the visible light image respectively, and fusing the two separated I channel components within the target fusion area according to the Poisson image editing principle to obtain an I channel fusion image;
and the fusing the brightness channel fusion image with the visible light image to obtain a fusion image comprises:
combining the I channel component of the I channel fusion image with the H channel component and the S channel component separated from the visible light image to obtain a fusion reference image;
and converting the fused reference image into an RGB color space to obtain a fused image.
10. An image fusion apparatus, comprising:
the acquisition module is used for acquiring a visible light image and an infrared image which are synchronously acquired aiming at a target field of view;
the fusion region determining module is used for binarizing the infrared image to obtain a mask image and determining a target fusion region according to the mask image;
and the fusion module is used for fusing the infrared image with the visible light image based on the target fusion area to obtain a fusion image.
11. An image processing apparatus comprising a processor, a memory connected to the processor, and a computer program stored on the memory and executable by the processor, the computer program, when executed by the processor, implementing the image fusion method according to any one of claims 1 to 9.
12. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, realizes the image fusion method according to any one of claims 1 to 9.
CN202210157058.1A 2022-02-21 2022-02-21 Image fusion method, device and equipment and storage medium Pending CN114519808A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210157058.1A CN114519808A (en) 2022-02-21 2022-02-21 Image fusion method, device and equipment and storage medium
PCT/CN2022/094865 WO2023155324A1 (en) 2022-02-21 2022-05-25 Image fusion method and apparatus, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210157058.1A CN114519808A (en) 2022-02-21 2022-02-21 Image fusion method, device and equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114519808A 2022-05-20

Family

ID=81598755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210157058.1A Pending CN114519808A (en) 2022-02-21 2022-02-21 Image fusion method, device and equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114519808A (en)
WO (1) WO2023155324A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117773405A (en) * 2024-02-28 2024-03-29 茌平鲁环汽车散热器有限公司 Method for detecting brazing quality of automobile radiator

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1273937C (en) * 2003-11-27 2006-09-06 上海交通大学 Infrared and visible light image merging method
CN103778618A (en) * 2013-11-04 2014-05-07 国家电网公司 Method for fusing visible image and infrared image
CN108665443B (en) * 2018-04-11 2021-02-05 中国石油大学(北京) Infrared image sensitive area extraction method and device for mechanical equipment fault
CN112767289A (en) * 2019-10-21 2021-05-07 浙江宇视科技有限公司 Image fusion method, device, medium and electronic equipment
KR20200102907A (en) * 2019-11-12 2020-09-01 써모아이 주식회사 Method and apparatus for object recognition based on visible light and infrared fusion image
CN112102340A (en) * 2020-09-25 2020-12-18 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN114519808A (en) * 2022-02-21 2022-05-20 烟台艾睿光电科技有限公司 Image fusion method, device and equipment and storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023155324A1 (en) * 2022-02-21 2023-08-24 烟台艾睿光电科技有限公司 Image fusion method and apparatus, device and storage medium
CN115278016A (en) * 2022-07-25 2022-11-01 烟台艾睿光电科技有限公司 Infrared intelligent shooting method and device, infrared thermal imaging equipment and medium
CN115170810A (en) * 2022-09-08 2022-10-11 南京理工大学 Visible light infrared image fusion target detection example segmentation method
CN116433695A (en) * 2023-06-13 2023-07-14 天津市第五中心医院 Mammary gland region extraction method and system of mammary gland molybdenum target image
CN116433695B (en) * 2023-06-13 2023-08-22 天津市第五中心医院 Mammary gland region extraction method and system of mammary gland molybdenum target image
CN116977154A (en) * 2023-09-22 2023-10-31 南方电网数字电网研究院有限公司 Visible light image and infrared image fusion storage method, device, equipment and medium
CN116977154B (en) * 2023-09-22 2024-03-19 南方电网数字电网研究院有限公司 Visible light image and infrared image fusion storage method, device, equipment and medium

Also Published As

Publication number Publication date
WO2023155324A1 (en) 2023-08-24

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination