CN114693581B - Image fusion processing method, device, equipment and storage medium - Google Patents
- Publication number
- CN114693581B (grant) · CN202210618014.4A (application)
- Authority
- CN
- China
- Prior art keywords
- infrared
- pixel
- image
- value
- thermal imaging
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T2207/10048—Infrared image (indexing scheme: image acquisition modality)
- G06T2207/20221—Image fusion; Image merging (indexing scheme: image combination)
Abstract
The application provides an image fusion processing method, device, equipment, and storage medium. The method comprises the following steps: acquiring a thermal imaging image, a visible light image, and a near-infrared image captured of the same target object, where the resolution of the visible light image is the same as that of the near-infrared image, and the resolution of the near-infrared image is greater than that of the thermal imaging image; each near-infrared pixel in the near-infrared image corresponds to a brightness value, and each thermal imaging pixel in the thermal imaging image corresponds to a first heat value; determining the M × N near-infrared pixels corresponding to each thermal imaging pixel according to the resolutions of the near-infrared image and the thermal imaging image, where M and N are integers greater than 1; for each of the M × N near-infrared pixels, determining a target heat value according to the pixel's brightness value and the first heat value of the corresponding thermal imaging pixel; and interpolating the target heat values of the near-infrared pixels into the visible light image to obtain a fused target image.
Description
Technical Field
The present application relates to image processing technologies, and in particular, to an image fusion processing method, apparatus, device, and storage medium.
Background
Thermography is a technique in which the infrared radiation emitted by a scene is received and converted into a thermographic image that indicates the scene's thermal pattern. Because it does not depend on ambient light, it is widely used in the monitoring field.
The resolution of current mainstream thermal imaging cameras only reaches 20,000 to 80,000 pixels, so the thermal image cannot show fine image details. Higher-resolution thermal imaging cameras exist on the market, but at a much higher cost. It has therefore been proposed to fuse the heat-value pixels provided by the thermal imaging image into the visible light image according to certain rules to enhance the thermal image; however, the fusion effect is poor, so the difference before and after enhancement is small.
Disclosure of Invention
The application provides an image fusion processing method, apparatus, device, and storage medium, which address the problem that enhancing a thermal imaging image by fusion yields a poor fusion effect, so that the difference before and after enhancement is small.
In a first aspect, the present application provides an image fusion processing method, including: acquiring a thermal imaging image, a visible light image and a near infrared image which are acquired by image acquisition of the same target object; the resolution of the visible light image is the same as that of the near-infrared image, and the resolution of the near-infrared image is greater than that of the thermal imaging image; each near-infrared pixel in the near-infrared image corresponds to a brightness value; each thermographic pixel in the thermographic image corresponds to a first thermal value; determining M x N near-infrared pixels corresponding to each thermal imaging pixel in the thermal imaging image according to the resolution of the near-infrared image and the resolution of the thermal imaging image; both M and N are integers greater than 1; determining a target heat value of each near-infrared pixel in the M x N near-infrared pixels according to the brightness value of the near-infrared pixel and a first heat value of a thermal imaging pixel corresponding to the near-infrared pixel; and interpolating the visible light image according to the target heat value of the near-infrared pixel to obtain a target image obtained by fusing the thermal imaging image, the visible light image and the near-infrared image.
Optionally, the determining, for each near-infrared pixel of the M × N near-infrared pixels, a target heat value of the near-infrared pixel according to the brightness value of the near-infrared pixel and the first heat value of the thermal imaging pixel corresponding to the near-infrared pixel includes: determining a first heat value sum of the M × N near-infrared pixels as the product of the first heat value of the corresponding thermal imaging pixel and the number M × N of near-infrared pixels; for each of the M × N near-infrared pixels, determining the brightness value ratio of the near-infrared pixel, namely the ratio of the pixel's brightness value to the sum of the brightness values of the M × N near-infrared pixels; and determining the target heat value of the near-infrared pixel as the product of the first heat value sum and the brightness value ratio.
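A minimal sketch of the luminance-weighted redistribution described above (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def redistribute_heat(first_heat, luminance_block):
    """Spread one thermal imaging pixel's first heat value over its M x N
    near-infrared pixels in proportion to each pixel's brightness value."""
    m, n = luminance_block.shape
    heat_sum = first_heat * m * n                    # first heat value sum of the block
    ratio = luminance_block / luminance_block.sum()  # brightness value ratio per pixel
    return heat_sum * ratio                          # target heat value per pixel

# The mean of the redistributed block equals the original first heat value,
# so the thermal measurement is preserved while near-infrared detail is added.
block = np.array([[1.0, 2.0],
                  [3.0, 4.0]])           # 2 x 2 near-infrared brightness values
target = redistribute_heat(40.0, block)  # e.g. a first heat value of 40
```

Because the ratios sum to 1, the block's total heat equals the first heat value times M × N, matching the first heat value sum defined above.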
Optionally, the determining, for each near-infrared pixel of the M × N near-infrared pixels, a target heat value of the near-infrared pixel according to the brightness value of the near-infrared pixel and the first heat value of the thermal imaging pixel corresponding to the near-infrared pixel includes: determining a first heat value sum of the M × N near-infrared pixels as the product of the first heat value of the corresponding thermal imaging pixel and the number M × N of near-infrared pixels; for each of the M × N near-infrared pixels, determining the brightness value ratio of the near-infrared pixel, namely the ratio of the pixel's brightness value to the sum of the brightness values of the M × N near-infrared pixels; determining a second heat value of the near-infrared pixel as the product of the first heat value sum and the brightness value ratio; determining a heat offset value of the near-infrared pixel according to the difference between the first heat value of the thermal imaging pixel corresponding to the M × N near-infrared pixels and the first heat values of adjacent thermal imaging pixels; and determining the target heat value of the near-infrared pixel according to the second heat value and the heat offset value.
Optionally, the determining a heat offset value of the near-infrared pixel according to the difference between first heat values of the thermal imaging pixel corresponding to the M × N near-infrared pixels and adjacent thermal imaging pixels includes: determining the distance between the center pixel and an edge pixel of the M × N near-infrared pixels; determining the difference between the first heat value of the thermal imaging pixel corresponding to the M × N near-infrared pixels and the first heat value of an adjacent thermal imaging pixel; and obtaining the heat offset value of the near-infrared pixel as the ratio of that first heat value difference to the distance.
Optionally, the determining a target heat value of the near-infrared pixel according to the second heat value of the near-infrared pixel and the heat offset value includes: obtaining the target heat value of the near-infrared pixel as the sum of the second heat value of the near-infrared pixel and the product of the heat offset value and an adjacent-pixel heat influence coefficient.
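The offset adjustment in the two preceding steps might be sketched as follows; the influence coefficient value and the neighbor layout are illustrative assumptions, not values from the patent:

```python
def offset_adjusted_heat(second_heat, center_first_heat, neighbor_first_heats,
                         center_to_edge_distance, influence_coeff=0.5):
    """Adjust a near-infrared pixel's second heat value by the heat offset
    derived from adjacent thermal imaging pixels.

    Each neighbor's offset is the first-heat-value difference divided by the
    distance from the block's center pixel to its edge pixel; the target
    heat value adds each offset scaled by the adjacent-pixel heat influence
    coefficient (0.5 here is an assumed value)."""
    offsets = [(nh - center_first_heat) / center_to_edge_distance
               for nh in neighbor_first_heats]
    return second_heat + influence_coeff * sum(offsets)

# A warmer neighbor pulls the target heat value up; equal-and-opposite
# neighbors cancel out.
warmer = offset_adjusted_heat(40.0, 40.0, [48.0], 2.0)
balanced = offset_adjusted_heat(40.0, 40.0, [44.0, 36.0], 2.0)
```

This smooths the block boundaries that would otherwise appear between adjacent thermal imaging pixels.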
Optionally, the interpolating the visible light image according to the target heat value of the near-infrared pixel to obtain a target image obtained by fusing the thermal imaging image, the visible light image, and the near-infrared image includes: and inserting the target heat value of the near-infrared pixel into the visible light pixel corresponding to the near-infrared pixel according to the corresponding relation between the near-infrared pixel and the visible light pixel in the visible light image to obtain the target heat value of the visible light pixel.
Optionally, the interpolating the visible light image according to the target heat value of the near-infrared pixel to obtain a target image obtained by fusing the thermal imaging image, the visible light image, and the near-infrared image includes: inserting the target heat value of the near-infrared pixel into the visible light pixel corresponding to the near-infrared pixel according to the corresponding relation between the near-infrared pixel and the visible light pixel in the visible light image to obtain an initial heat value of the visible light pixel; and adjusting the initial heat value of the visible light pixel according to the heat difference between the maximum target heat value and the minimum target heat value in the near-infrared image to obtain the target heat value of the visible light pixel.
Optionally, the adjusting the initial heat value of the visible light pixel according to the heat difference between the maximum target heat value and the minimum target heat value in the near-infrared image to obtain the target heat value of the visible light pixel includes: determining a target ratio between the heat difference between the maximum and minimum target heat values in the near-infrared image and a target color value; and determining the product of the target ratio and the initial heat value of the visible light pixel as the target heat value of the visible light pixel.
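A hedged sketch of this scaling step; the translation leaves the direction of the target ratio ambiguous, so this assumes the heat range is mapped onto a target color range of 255, and the minimum-shift is likewise an assumption:

```python
import numpy as np

def scale_heat_to_color(heat_map, target_color_value=255.0):
    """Rescale per-pixel heat values using the difference between the maximum
    and minimum target heat values so they fit a displayable color range."""
    h_min, h_max = heat_map.min(), heat_map.max()
    target_ratio = target_color_value / (h_max - h_min)  # assumed direction
    return (heat_map - h_min) * target_ratio

heat = np.array([[20.0, 30.0],
                 [40.0, 60.0]])
scaled = scale_heat_to_color(heat)  # spans the full 0..255 color range
```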
Optionally, the resolution of the near-infrared image is Wir × Hir, and the resolution of the thermal imaging image is Wt × Ht; the determining M × N near-infrared pixels corresponding to each thermal imaging pixel in the thermal imaging image according to the resolution of the near-infrared image and the resolution of the thermal imaging image includes: obtaining the M × N near-infrared pixels corresponding to each thermal imaging pixel according to the ratio of the resolution of the near-infrared image to the resolution of the thermal imaging image, where the value of M equals the ratio of Wir to Wt, and the value of N equals the ratio of Hir to Ht.
In a second aspect, the present application provides an image fusion processing apparatus, comprising: the acquisition module is used for acquiring a thermal imaging image, a visible light image and a near infrared image which are acquired by image acquisition of the same target object; the resolution of the visible light image is the same as that of the near-infrared image, and the resolution of the near-infrared image is greater than that of the thermal imaging image; each near-infrared pixel in the near-infrared image corresponds to a brightness value; each thermographic pixel in the thermographic image corresponds to a first thermal value; the determining module is used for determining M x N near-infrared pixels corresponding to each thermal imaging pixel in the thermal imaging image according to the resolution of the near-infrared image and the resolution of the thermal imaging image; both M and N are integers greater than 1; the determining module is further configured to determine, for each near-infrared pixel of the M × N near-infrared pixels, a target heat value of the near-infrared pixel according to the brightness value of the near-infrared pixel and a first heat value of a thermal imaging pixel corresponding to the near-infrared pixel; and the interpolation module is used for interpolating the visible light image according to the target heat value of the near-infrared pixel to obtain a target image obtained by fusing the thermal imaging image, the visible light image and the near-infrared image.
In a third aspect, the present application provides an electronic device, comprising: a processor, and a memory communicatively coupled to the processor; the memory stores computer-executable instructions; the processor executes computer-executable instructions stored by the memory to implement the method of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon computer-executable instructions for implementing the method according to the first aspect when executed by a processor.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the method according to the first aspect.
According to the image fusion processing method, device, equipment, and storage medium provided by the application, a thermal imaging image, a visible light image, and a near-infrared image captured of the same target object are acquired; the resolution of the visible light image is the same as that of the near-infrared image, and the resolution of the near-infrared image is greater than that of the thermal imaging image; each near-infrared pixel in the near-infrared image corresponds to a brightness value, and each thermal imaging pixel in the thermal imaging image corresponds to a first heat value. The M × N near-infrared pixels corresponding to each thermal imaging pixel are determined according to the resolutions of the near-infrared image and the thermal imaging image, M and N being integers greater than 1. For each of the M × N near-infrared pixels, a target heat value is determined according to the pixel's brightness value and the first heat value of the corresponding thermal imaging pixel, and the visible light image is interpolated according to the target heat values of the near-infrared pixels to obtain a target image fusing the thermal imaging image, the visible light image, and the near-infrared image.
The method determines the M × N near-infrared pixels corresponding to each thermal imaging pixel according to the resolution of the near-infrared image and the resolution of the thermal imaging image, thereby establishing the correspondence between pixels of the thermal imaging image and pixels of the near-infrared image. The target heat value of each near-infrared pixel is then re-determined according to the pixel's brightness value and the first heat value of its corresponding thermal imaging pixel; because the brightness value of each near-infrared pixel reflects detail differences on the surface of the target object, the redistributed heat values add surface detail to the thermal measurement and thereby improve the fusion effect.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is an application scenario diagram provided in an embodiment of the present application;
fig. 2 is a first flowchart of an image fusion processing method provided in the embodiment of the present application;
FIG. 3 is a graph illustrating the resolution relationship between a near infrared image and a thermal image provided by an embodiment of the present application;
fig. 4 is a second flowchart of an image fusion processing method provided in the embodiment of the present application;
fig. 5 is a flowchart of a third image fusion processing method provided in the embodiment of the present application;
FIG. 6 is an exemplary graph illustrating a determination of a heat offset value for a near-infrared pixel provided by an embodiment of the present application;
fig. 7 is a schematic structural diagram of an image fusion processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Thermal imaging technology uses an infrared detector and an optical imaging objective to receive the infrared radiation energy distribution pattern of a target and project it onto the photosensitive element of the infrared detector, thereby obtaining an infrared thermal image that corresponds to the thermal distribution field on the object's surface. In plain terms, a thermal infrared imager converts the invisible infrared energy emitted by an object into a visible thermal image, in which different colors represent different temperatures of the measured object.
All objects in nature, whether arctic glaciers, flames, human bodies, or even the extremely cold deep space of the universe, emit infrared radiation as long as their temperature is above absolute zero (−273 °C); the higher an object's temperature, the more violent the thermal motion of its molecules or atoms, and the stronger its infrared radiation. Thermal imaging cameras exploit this principle: by detecting the thermal radiation of the entire scene, they receive the infrared radiation the scene emits and convert it into an infrared image indicative of its thermal pattern.
The visible light camera captures the visible wave band part of the spectrum in the scene, and forms a visible image after converting the visible wave band part into a digital signal.
At present, visible light imaging technology is developing rapidly: the image sensor in a mobile phone can reach as many as one hundred million pixels, and 4-megapixel sensors are already the basic configuration in common surveillance cameras. In the thermal imaging field, however, limited by sensor size and cost, mainstream resolution remains at the level of 20,000 to 80,000 pixels. Clearly, such a resolution is insufficient to display details in some images; indicator lights at a distance, flexible connectors, and the like appear as blurred blocks in the thermal image.
Within a limited cost, to obtain an ideal thermal imaging image, a dual-light fusion approach has been proposed, in which the heat-value pixels provided by the thermal imaging image are fused into the visible light image according to certain rules. However, the resolution gap between the thermal imaging image and the visible light image is too large, so the final fusion effect is poor.
In addition, existing visible-light fusion performs interpolation and filling with reference only to the color component. In practice, color is strongly affected by complex scene illumination and varies widely; it cannot truly reflect an object's structure and texture features, reflects only the relationship between brightness and color, and cannot capture temperature differences caused by three-dimensional structure.
In order to solve the technical problems, the application provides the following technical concepts: respectively acquiring a near-infrared image, a thermal imaging image and a visible light image through image acquisition equipment integrated with a near-infrared lens, a thermal imaging lens and a visible light lens; the resolution of the near-infrared image is the same as that of the visible light image, and the resolution of the near-infrared image is greater than that of the thermal imaging image, which means that each thermal imaging pixel in the thermal imaging image corresponds to a plurality of near-infrared pixels in the near-infrared image, and the pixels are recorded as M × N near-infrared pixels. According to the principle, the target heat value of each near-infrared pixel in the M x N near-infrared pixels can be determined, and then the target heat value of each near-infrared pixel in the near-infrared image is determined. Since the near-infrared image and the visible light image have the same resolution, the thermal imaging image with enhanced image resolution can be obtained by inserting the target heat value of each near-infrared pixel in the near-infrared image into the corresponding visible light pixel in the visible light image.
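The technical concept above might be sketched end to end as follows; the per-pixel averaging used to blend the heat values into the visible channel is an illustrative choice rather than the patent's interpolation rule, and all names and the blend weight are assumptions:

```python
import numpy as np

def fuse(thermal, nir_luma, visible_luma, blend=0.5):
    """thermal: (Ht, Wt) array of first heat values; nir_luma and
    visible_luma: (Hir, Wir) arrays of equal resolution. Each thermal
    pixel's heat is spread over its M x N near-infrared block in proportion
    to brightness, then blended into the visible channel."""
    ht, wt = thermal.shape
    hir, wir = nir_luma.shape
    m, n = wir // wt, hir // ht  # M = Wir/Wt (width), N = Hir/Ht (height)
    target = np.empty_like(nir_luma, dtype=float)
    for ty in range(ht):
        for tx in range(wt):
            block = nir_luma[ty * n:(ty + 1) * n, tx * m:(tx + 1) * m]
            heat_sum = thermal[ty, tx] * m * n
            target[ty * n:(ty + 1) * n, tx * m:(tx + 1) * m] = \
                heat_sum * block / block.sum()
    return (1 - blend) * visible_luma + blend * target

thermal = np.array([[40.0]])     # a single thermal imaging pixel
nir = np.array([[1.0, 2.0],
                [3.0, 4.0]])     # its 2 x 2 near-infrared block
vis = np.zeros((2, 2))
fused = fuse(thermal, nir, vis)
```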
Fig. 1 is an application scenario diagram provided in an embodiment of the present application. As shown in fig. 1, the application scenario includes an image capture device 11 and a computing platform 12;
the image acquisition device 11 comprises a thermal imaging image acquisition unit 111, a visible light image acquisition unit 112, a near-infrared image acquisition unit 113 and a near-infrared emitter 114; the near-infrared image acquisition unit 113 is provided at its front end with a filter unit 115. Optionally, the thermal imaging image collecting unit 111 may be a thermal imaging camera, the visible light image collecting unit 112 may be a visible light camera, the near-infrared image collecting unit 113 may be a near-infrared camera, and the filtering unit 115 may be a narrow-band filter.
The computing platform 12 may be a desktop computer, a notebook computer, a mobile phone, a server, or other devices with image data processing functions.
The near-infrared image capturing unit 113 is a special visible light camera with a narrow-band filter arranged at its front end, so that it receives only light of a specific wavelength band. The near-infrared emitter 114 emits light, which reaches the target object and is reflected back to the near-infrared image capturing unit; this yields a single-illumination picture lit along the optical axis of the near-infrared image capturing unit, whose imaging characteristics are not easily disturbed by other light. The resulting near-infrared image can be represented as a matrix Y of size Wir × Hir.
Then, if the Field Of View (FOV) angle and the optical axis angle of the thermal imaging image collection unit 111 and the near-infrared image collection unit 113 are the same, with the near-infrared lens resolution Wir × Hir and the thermal imaging lens resolution Wt × Ht, the ratios M = Wir/Wt and N = Hir/Ht can be obtained from the resolutions. That is, one thermal imaging pixel represents the heat value of the area covered by M × N near-infrared pixels.
The image fusion processing method provided by the embodiment of the application can be applied to the following scenes:
when the conflagration is monitored, through setting up image acquisition equipment 11 on unmanned aerial vehicle to replace the fire fighter to get into the scene of danger and reconnoitre the conflagration condition of a fire.
In crop monitoring, for example, the image capturing device 11 is provided on an agricultural drone to monitor crop health.
In an environmental protection scenario, the thermal radiation emitted by contaminants such as oil, chemicals, etc. is different from the radiation emitted by the surrounding soil or water, and these contaminants can be tracked and the source sought by the image acquisition device 11.
In the petrochemical industry, since many important equipments in petrochemical production need to work under high temperature and high pressure conditions, there are some potential dangers, and thus, the online monitoring of the production process is very important. The image acquisition equipment 11 can detect relevant information such as corrosion, cracking, thinning, blocking, leakage and the like of products and pipelines, refractory and heat-insulating materials and various reaction furnaces, and can quickly and accurately obtain two-dimensional temperature distribution of the surfaces of the equipment and the materials.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a first flowchart of an image fusion processing method according to an embodiment of the present application. As shown in fig. 2, the image fusion processing method includes:
s201, acquiring a thermal imaging image, a visible light image and a near infrared image which are acquired by image acquisition of the same target object; the resolution ratio of the visible light image is the same as that of the near-infrared image, and the resolution ratio of the near-infrared image is greater than that of the thermal imaging image; each near-infrared pixel in the near-infrared image corresponds to a brightness value; each thermographic pixel in the thermographic image corresponds to a first thermal value.
The execution subject of the method of the present embodiment may be the computing platform shown in fig. 1.
The thermal imaging image, the visible light image and the near infrared image can be obtained by image acquisition of the same target object by the image acquisition device shown in fig. 1. The target object here can be any object of the forest, pollutants, crops, production equipment, etc. that needs to be monitored by means of thermal imaging techniques.
Specifically, light is emitted by the near-infrared emitter; after reaching the target object, it is reflected back to the near-infrared lens, yielding a single-illumination picture lit along the optical axis of the near-infrared lens, i.e., the near-infrared image, which can be represented as a matrix Y of size Wir × Hir. The FOV angle and optical axis angle of the thermal imaging lens are then adjusted to match those of the near-infrared lens; with the near-infrared lens resolution Wir × Hir and the thermal imaging lens resolution Wt × Ht, the resulting thermal imaging image is a matrix of size Wt × Ht. Likewise, the FOV angle and optical axis angle of the visible light lens are adjusted to match those of the near-infrared lens; since the visible light lens resolution is also Wir × Hir, the visible light image has the same resolution as the near-infrared image and can similarly be represented as a matrix of size Wir × Hir.
Wir × Hir can be used to represent the resolution of the visible light image and the near-infrared image, and Wt × Ht to represent the resolution of the thermal imaging image; Wir × Hir is greater than Wt × Ht.
The surface reflection intensity of the target object depends on factors such as its material, color, shape and distance; for example, when a single plane is illuminated, its brightness value should decrease regularly along the direction away from the optical center. Therefore, when the near-infrared light source irradiates the target object, the target object returns an intensity value for the light striking its surface, that is, the brightness value of the near-infrared pixel, which reflects differences in the material, color, shape, distance, etc. of the target object. The near-infrared image includes a plurality of near-infrared pixels, and each near-infrared pixel corresponds to a brightness value, i.e., the intensity value returned by the target object.
The thermal imaging image comprises a plurality of thermal imaging pixels, and each thermal imaging pixel corresponds to a heat value, which is used to reflect the temperature differences of the target object.
S202, determining M × N near-infrared pixels corresponding to each thermal imaging pixel in the thermal imaging image according to the resolution of the near-infrared image and the resolution of the thermal imaging image; m and N are both integers greater than 1.
Optionally, determining M × N near-infrared pixels corresponding to each thermal imaging pixel in the thermal imaging image according to the resolution of the near-infrared image and the resolution of the thermal imaging image, including: obtaining M x N near-infrared pixels corresponding to each thermal imaging pixel in the thermal imaging image according to the ratio of the resolution of the near-infrared image to the resolution of the thermal imaging image; wherein, the value of M is equal to the ratio of Wir to Wt, and the value of N is equal to the ratio of Hir to Ht. In the embodiment, M × N near-infrared pixels corresponding to each thermal imaging pixel in the thermal imaging image are determined according to the ratio of the resolution of the near-infrared image to the resolution of the thermal imaging image. Specifically, the value of M is obtained according to the ratio of the width of the near-infrared image to the width of the thermal imaging image; and obtaining the value of N according to the ratio of the height of the near-infrared image to the height of the thermal imaging image.
Fig. 3 is a resolution relationship diagram of a near-infrared image and a thermal imaging image provided in an embodiment of the present application. As shown in fig. 3, the resolution of the near-infrared image is greater than the resolution of the thermal imaging image, meaning that a plurality of near-infrared pixels in the near-infrared image correspond to one thermal imaging pixel in the thermal imaging image. The ratio Wir/Wt of the width Wir of the near-infrared image to the width Wt of the thermal imaging image is the value of M, i.e. M = Wir/Wt; the ratio Hir/Ht of the height Hir of the near-infrared image to the height Ht of the thermal imaging image is the value of N, i.e. N = Hir/Ht. That is, every M × N near-infrared pixels in the near-infrared image (the hatched rectangle to the left of the arrow in the figure) correspond to one thermal imaging pixel in the thermal imaging image (the hatched rectangle to the right of the arrow in the figure); equivalently, every M × N near-infrared pixels in the near-infrared image are represented by one thermal imaging pixel in the thermal imaging image. For example, assuming that the resolution of the near-infrared image is 4,000,000 pixels and the resolution of the thermal imaging image is 20,000 pixels, M × N = 200, i.e., each thermal imaging pixel in the thermal imaging image corresponds to 200 near-infrared pixels in the near-infrared image.
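As a minimal sketch of this correspondence, the resolutions below (400 × 300 near-infrared, 40 × 30 thermal imaging) are hypothetical and chosen only for illustration; the embodiment merely requires that Wir × Hir exceed Wt × Ht and divide evenly:

```python
# Hypothetical resolutions, purely illustrative.
Wir, Hir = 400, 300   # width, height of the near-infrared / visible light image
Wt, Ht = 40, 30       # width, height of the thermal imaging image

M = Wir // Wt         # M = Wir / Wt, here 10
N = Hir // Ht         # N = Hir / Ht, here 10

def nir_block(tx, ty):
    """Coordinates of the M x N near-infrared pixels represented by the
    thermal imaging pixel at column tx, row ty."""
    return [(x, y)
            for y in range(ty * N, (ty + 1) * N)
            for x in range(tx * M, (tx + 1) * M)]
```

For the example above, `nir_block(0, 0)` lists the M × N = 100 near-infrared pixels covered by the top-left thermal imaging pixel.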
S203, aiming at each near infrared pixel in the M x N near infrared pixels, determining a target heat value of the near infrared pixel according to the brightness value of the near infrared pixel and the first heat value of the thermal imaging pixel corresponding to the near infrared pixel.
The brightness value of a near-infrared pixel can reflect detail differences of the target object, such as material, color, shape, and distance to the near-infrared lens. The target heat value of each of the M × N near-infrared pixels can be determined on the principle that the ratio of each pixel's brightness value to the sum of the brightness values of the M × N near-infrared pixels equals the ratio of that pixel's target heat value to the sum of the target heat values of the M × N near-infrared pixels. In this way, a target heat value can be determined for every near-infrared pixel in the near-infrared image.
S204, interpolating the visible light image according to the target heat value of the near-infrared pixel to obtain a target image obtained by fusing the thermal imaging image, the visible light image and the near-infrared image.
Because the resolution of the near-infrared image is the same as that of the visible light image, the near-infrared pixels in the near-infrared image and the pixels in the visible light image are in one-to-one correspondence. According to this correspondence, the target heat value of each near-infrared pixel in the near-infrared image can be assigned to the corresponding visible light pixel in the visible light image, thereby realizing visible light image interpolation and obtaining the target image formed by fusing the thermal imaging image, the visible light image and the near-infrared image.
In this embodiment, a thermal imaging image, a visible light image and a near-infrared image obtained by image acquisition of the same target object are acquired; the resolution of the visible light image is the same as that of the near-infrared image, and the resolution of the near-infrared image is greater than that of the thermal imaging image; each near-infrared pixel in the near-infrared image corresponds to a brightness value; each thermal imaging pixel in the thermal imaging image corresponds to a first heat value. The M × N near-infrared pixels corresponding to each thermal imaging pixel in the thermal imaging image are determined according to the resolution of the near-infrared image and the resolution of the thermal imaging image; M and N are both integers greater than 1. For each near-infrared pixel of the M × N near-infrared pixels, a target heat value of the near-infrared pixel is determined according to the brightness value of the near-infrared pixel and the first heat value of the thermal imaging pixel corresponding to the near-infrared pixel. The visible light image is then interpolated according to the target heat values of the near-infrared pixels to obtain a target image formed by fusing the thermal imaging image, the visible light image and the near-infrared image.
In this method, the M × N near-infrared pixels corresponding to each thermal imaging pixel are determined from the resolution of the near-infrared image and the resolution of the thermal imaging image, which establishes the pixel correspondence between the thermal imaging image and the near-infrared image. The target heat value of each near-infrared pixel is then re-determined from its brightness value and the first heat value of the corresponding thermal imaging pixel. Because the brightness value of each near-infrared pixel reflects detail differences on the surface of the target object, using the re-determined target heat values for image fusion achieves the effect of enhancing the resolution of the thermal imaging image.
Fig. 4 is a second flowchart of an image fusion processing method according to an embodiment of the present application. As shown in fig. 4, in the image fusion processing method, for each near-infrared pixel of M × N near-infrared pixels, determining a target heat value of the near-infrared pixel according to a luminance value of the near-infrared pixel and a first heat value of a thermal imaging pixel corresponding to the near-infrared pixel, includes:
S401, determining the sum of the first heat values of the M × N near-infrared pixels according to the product of the first heat value of the thermal imaging pixel corresponding to the M × N near-infrared pixels and the number of near-infrared pixels, M × N.
Referring to fig. 3, if the first heat value of the thermal imaging pixel corresponding to the M × N near-infrared pixels is Tp, then, since in the physical world the M × N near-infrared pixels cover the same region as that thermal imaging pixel, the sum of the first heat values of the M × N near-infrared pixels equals the product of Tp and the number of near-infrared pixels, i.e., Tp × M × N.
S402, for each near-infrared pixel in the M × N near-infrared pixels, determining the brightness value ratio of the near-infrared pixel among the M × N near-infrared pixels according to the brightness value of the near-infrared pixel and the sum of the brightness values of the M × N near-infrared pixels.
Let the sum of the brightness values of the M × N near-infrared pixels be S, and the brightness value of a given near-infrared pixel be Yp; then the brightness value ratio of that near-infrared pixel among the M × N near-infrared pixels is Yp/S.
And S403, determining a target heat value of the near-infrared pixel according to the product of the sum of the first heat values of the M x N near-infrared pixels and the ratio of the brightness values of the near-infrared pixels in the M x N near-infrared pixels.
Optionally, determining a target heat value of the near-infrared pixel according to a product of a sum of the first heat values of the M × N near-infrared pixels and a ratio of luminance values of the near-infrared pixel in the M × N near-infrared pixels, includes: obtaining a second heat value of the near-infrared pixel according to the product of the sum of the first heat values of the M x N near-infrared pixels and the ratio of the brightness values of the near-infrared pixels in the M x N near-infrared pixels; and taking the second heat value of the near-infrared pixel as a target heat value of the near-infrared pixel. This implementation can be expressed as the following equation (1):
Ty=Yp/S*(Tp*M*N);(1)
In formula (1), Ty is the second heat value of the near-infrared pixel; Tp * M * N is the sum of the first heat values of the M × N near-infrared pixels; Yp/S is the ratio of the brightness value of the near-infrared pixel to the sum of the brightness values of the M × N near-infrared pixels.
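As a sketch, formula (1) can be applied to a hypothetical 3 × 3 block of near-infrared pixels; the brightness values Y and the first heat value Tp below are illustrative, not taken from the embodiment:

```python
import numpy as np

M, N = 3, 3
Tp = 38.0                          # first heat value of the corresponding thermal pixel
Y = np.array([[10., 20., 30.],
              [40., 50., 60.],
              [70., 80., 90.]])    # brightness values Yp of the M x N NIR pixels

S = Y.sum()                        # sum of brightness values (here 450)
heat_sum = Tp * M * N              # sum of first heat values, Tp * M * N (step S401)
Ty = (Y / S) * heat_sum            # second heat value per pixel, formula (1)
# The redistribution preserves the total heat: Ty.sum() == Tp * M * N.
```

Note that brighter pixels receive proportionally more of the block's total heat, which is exactly the ratio principle stated above.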
Fig. 5 is a third flowchart of an image fusion processing method provided in the embodiment of the present application. As shown in fig. 5, in the image fusion processing method, for each near-infrared pixel of M × N near-infrared pixels, determining a target heat value of the near-infrared pixel according to a luminance value of the near-infrared pixel and a first heat value of a thermal imaging pixel corresponding to the near-infrared pixel, includes:
S501, determining the sum of the first heat values of the M × N near-infrared pixels according to the product of the first heat value of the thermal imaging pixel corresponding to the M × N near-infrared pixels and the number of near-infrared pixels, M × N.
The specific implementation of step S501 is the same as the specific implementation of step S401, and reference may be specifically made to the description of the specific implementation of step S401, which is not described herein again.
And S502, determining the brightness value ratio of the near-infrared pixels in the M x N near-infrared pixels according to the brightness value of the near-infrared pixels and the sum of the brightness values of the M x N near-infrared pixels for each of the M x N near-infrared pixels.
The specific implementation of step S502 is the same as the specific implementation of step S402, and reference may be specifically made to the description of the specific implementation of step S402, which is not described herein again.
And S503, determining a second heat value of the near-infrared pixel according to the product of the sum of the first heat values of the M × N near-infrared pixels and the brightness value ratio of the near-infrared pixel among the M × N near-infrared pixels.
The specific implementation of step S503 is the same as the specific implementation of step S403, and reference may be specifically made to the description of the specific implementation of step S403, which is not described herein again.
And S504, determining a heat deviation value of the near-infrared pixel according to a difference value of the first heat values between the thermal imaging pixel corresponding to the M x N near-infrared pixels and the adjacent thermal imaging pixel.
Optionally, determining a heat deviation value of the near-infrared pixel according to a difference of the first heat values between the thermal imaging pixel corresponding to the M × N near-infrared pixels and the adjacent thermal imaging pixel, including:
and b1, determining the difference value of the first heat value between the thermal imaging pixel corresponding to the M x N near infrared pixels and the adjacent thermal imaging pixel.
Fig. 6 is a diagram illustrating an example of determining a heat offset value of a near-infrared pixel according to an embodiment of the present disclosure. As shown in fig. 6, for each thermal imaging pixel in the thermal imaging image, a 3 × 3 submatrix of the thermal imaging image is taken with that thermal imaging pixel as the center point, and the relationship between the heat values of the thermal imaging pixels in this surrounding 3 × 3 matrix is calculated.
With continued reference to fig. 6, assume that each thermal imaging pixel in the 3 × 3 thermal imaging pixel matrix is designated p_{i,j}, where i takes the values 1, 2, 3 and j takes the values 1, 2, 3. The heat value corresponding to p_{1,2} is 31, to p_{2,1} is 37, to p_{2,2} is 38, to p_{2,3} is 39, and to p_{3,2} is 21. In order from left to right in the figure, it can be determined that the heat value increases by 1 from p_{2,1} to p_{2,2}, and by 1 from p_{2,2} to p_{2,3}. In order from top to bottom, the heat value increases by 7 from p_{1,2} to p_{2,2}, and by -17 from p_{2,2} to p_{3,2}. This gives the first heat value differences between the thermal imaging pixel corresponding to the M × N near-infrared pixels and its adjacent thermal imaging pixels, as shown in the matrix to the right of the arrow in the drawing.
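These differences can be checked directly from the heat values given for fig. 6; a minimal sketch:

```python
# Heat values of the labelled pixels in the 3 x 3 thermal imaging matrix of fig. 6,
# keyed by (i, j).
heat = {(1, 2): 31.0,
        (2, 1): 37.0, (2, 2): 38.0, (2, 3): 39.0,
        (3, 2): 21.0}

# Left to right: p(2,1) -> p(2,2) and p(2,2) -> p(2,3).
d_left = heat[(2, 2)] - heat[(2, 1)]    # increase of 1
d_right = heat[(2, 3)] - heat[(2, 2)]   # increase of 1
# Top to bottom: p(1,2) -> p(2,2) and p(2,2) -> p(3,2).
d_top = heat[(2, 2)] - heat[(1, 2)]     # increase of 7
d_bottom = heat[(3, 2)] - heat[(2, 2)]  # increase of -17
```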
And b2, determining the distance between the central pixel point and the edge pixel point in the M x N near infrared pixels.
With continued reference to fig. 6, the center pixel in the 3 × 3 matrix to the right of the arrow represents M × N nir pixels corresponding to one thermal imaging pixel. Assuming that M of the M × N near-infrared pixels takes a value of 9, the distance between the center pixel point and the edge pixel point of the M × N near-infrared pixels is 5.
It should be noted that this embodiment does not limit the execution order of step b1 and step b2: step b1 may be executed before step b2, step b2 may be executed before step b1, or the two steps may be executed simultaneously.
And b3, obtaining the heat deviation value of the near infrared pixel according to the ratio of the difference value of the first heat value to the distance.
Illustratively, if the distance between the center pixel point and an edge pixel point of the M × N near-infrared pixels is denoted L, and the difference of the first heat values between the thermal imaging pixel corresponding to the M × N near-infrared pixels and an adjacent thermal imaging pixel is denoted K, then the heat offset value of each near-infrared pixel is K/L.
And S505, determining a target heat value of the near-infrared pixel according to the second heat value and the heat offset value of the near-infrared pixel.
Optionally, determining the target heat value of the near-infrared pixel according to the second heat value and the heat offset value of the near-infrared pixel includes: obtaining the target heat value of the near-infrared pixel as the sum of the second heat value of the near-infrared pixel and the product of the heat offset value and the adjacent-pixel heat value influence coefficient. Specifically, this can be expressed as the following formula (2):
Tyn = Ty + K/L * P;(2)
In formula (2), Tyn is the target heat value of the near-infrared pixel; Ty is the second heat value of the near-infrared pixel; K/L is the heat offset value of each near-infrared pixel; and P is the coefficient describing the influence of the adjacent near-infrared pixels on the heat value of the near-infrared pixel, whose value is a constant between 0.1 and 0.5.
With continued reference to fig. 6, the center pixel in the 3 × 3 matrix to the right of the arrow represents the M × N near-infrared pixels corresponding to one thermal imaging pixel. Suppose that M takes the value 9, that the second heat values of the near-infrared pixels in the first row of the M × N near-infrared pixels are, in order from left to right in the figure, 1, 2, 3, 4, 5, 6, 4, 2, and that the heat offset value of each near-infrared pixel is 1/5 = 0.2. Assuming that the value of P is 0.5, 0.2 * 0.5 = 0.1 is added on the basis of the second heat value of each of these near-infrared pixels, giving target heat values of, in order from left to right in the figure, 1.1, 2.1, 3.1, 4.1, 5.1, 6.1, 4.1 and 2.1.
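Formula (2) applied to this worked example can be sketched as follows, using the values from the text (K = 1, L = 5, P = 0.5) and the second heat values listed above:

```python
K = 1.0   # first heat value difference to the adjacent thermal imaging pixel
L = 5.0   # distance from the center pixel point to an edge pixel point (M = 9)
P = 0.5   # adjacent-pixel heat value influence coefficient, in [0.1, 0.5]

offset = K / L                                         # heat offset value, 0.2
second_heat = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 4.0, 2.0]
target_heat = [Ty + offset * P for Ty in second_heat]  # formula (2): Tyn = Ty + K/L * P
```

Each pixel gains offset * P = 0.1, matching the target heat values listed in the example.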
In one or more embodiments of the present application, optionally, interpolating the visible light image according to the target heat value of the near-infrared pixel to obtain a target image obtained by fusing the thermal imaging image, the visible light image, and the near-infrared image, includes: and inserting the target heat value of the near-infrared pixel into the visible light pixel corresponding to the near-infrared pixel according to the corresponding relation between the near-infrared pixel and the visible light pixel in the visible light image to obtain the target heat value of the visible light pixel. Specifically, the target heat value of the near-infrared pixel is inserted into the visible light pixel corresponding to the near-infrared pixel according to the corresponding relationship between the near-infrared pixel and the visible light pixel in the visible light image, so as to obtain the initial heat value of the visible light pixel; and taking the initial heat value of the visible light pixel as a target heat value of the visible light pixel.
In one or more embodiments of the present application, optionally, interpolating the visible light image according to the target heat value of the near-infrared pixel to obtain a target image obtained by fusing the thermal imaging image, the visible light image, and the near-infrared image, includes:
and c1, inserting the target heat value of the near-infrared pixel into the visible light pixel corresponding to the near-infrared pixel according to the corresponding relation between the near-infrared pixel and the visible light pixel in the visible light image to obtain the initial heat value of the visible light pixel.
And c2, adjusting the initial heat value of the visible light pixel according to the heat difference between the maximum target heat value and the minimum target heat value in the near-infrared image to obtain the target heat value of the visible light pixel.
Optionally, step c2 includes:
and c21, determining the heat difference between the maximum target heat value and the minimum target heat value in the near-infrared image and the target ratio of the target color value.
And step c22, determining the target heat value of the visible light pixel by multiplying the target ratio by the initial heat value of the visible light pixel.
According to the method steps in the embodiment, the target heat value of each near-infrared pixel in the near-infrared image can be obtained; and obtaining a near-infrared light-based heat interpolation table according to the target heat value of each near-infrared pixel in the near-infrared image.
The heat interpolation table is then inserted into the visible light image collected by a visible light lens having the same FOV angle and optical axis angle as the near-infrared lens.
Subtracting the lowest heat value from the highest heat value in the heat interpolation table gives the maximum heat difference between the maximum target heat value and the minimum target heat value in the near-infrared image. Because visible light is composed of R (Red), G (Green) and B (Blue) arrays, the color value of each pixel point can be weighted proportionally. Taking R as an example, the proportional weighting of R includes: adjusting the maximum R value among the visible light pixel points of the visible light image to 255, and weighting the R values of the remaining visible light pixel points in proportion, so as to obtain the fused target image. For example, if the temperature difference is 30.0 degrees, moving the decimal point of 30.0 one place to the right gives 300 color gradations; the value of each color gradation is then 256/300, and multiplying the initial heat value of the visible light pixel by this value yields the target heat value of the visible light pixel to be finally displayed.
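The gradation arithmetic of this example can be sketched as follows; the maximum and minimum heat values below are hypothetical, chosen only so that their difference matches the 30.0 degrees of the example:

```python
max_heat, min_heat = 45.0, 15.0   # hypothetical extremes from the heat interpolation table
heat_diff = max_heat - min_heat   # temperature difference: 30.0 degrees

gradations = heat_diff * 10       # move the decimal point one place right: 300 gradations
ratio = 256 / gradations          # value of each color gradation, 256/300

def display_heat(initial_heat):
    """Target heat value of a visible light pixel: its initial heat value
    scaled by the per-gradation ratio."""
    return initial_heat * ratio
```

An initial heat value of 30.0 maps to 30.0 * 256/300 = 25.6 on the 0-255 color scale.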
On the basis of the foregoing method embodiment, fig. 7 is a schematic structural diagram of an image fusion processing apparatus provided in the embodiment of the present application. As shown in fig. 7, the image fusion processing apparatus includes: an acquisition module 71, a determination module 72 and an interpolation module 73; the acquiring module 71 is configured to acquire a thermal imaging image, a visible light image and a near-infrared image obtained by acquiring images of the same target object; the resolution of the visible light image is the same as that of the near-infrared image, and the resolution of the near-infrared image is greater than that of the thermal imaging image; each near-infrared pixel in the near-infrared image corresponds to a brightness value; each thermographic pixel in the thermographic image corresponds to a first thermal value; a determining module 72, configured to determine, according to the resolution of the near-infrared image and the resolution of the thermal imaging image, M × N near-infrared pixels corresponding to each thermal imaging pixel in the thermal imaging image; both M and N are integers greater than 1; the determining module 72 is further configured to determine, for each near-infrared pixel of the M × N near-infrared pixels, a target heat value of the near-infrared pixel according to the brightness value of the near-infrared pixel and the first heat value of the thermal imaging pixel corresponding to the near-infrared pixel; and the interpolation module 73 is configured to interpolate the visible light image according to the target heat value of the near-infrared pixel to obtain a target image obtained by fusing the thermal imaging image, the visible light image, and the near-infrared image.
Optionally, the determining module 72 determines, for each near-infrared pixel in the M × N near-infrared pixels, a target heat value of the near-infrared pixel according to the brightness value of the near-infrared pixel and the first heat value of the thermal imaging pixel corresponding to the near-infrared pixel, specifically including: determining a first heat value sum of the M x N near-infrared pixels according to a product of a first heat value of a thermal imaging pixel corresponding to the M x N near-infrared pixels and the M x N near-infrared pixels; determining, for each of the M × N near-infrared pixels, a luminance value fraction of the near-infrared pixel in the M × N near-infrared pixels according to a luminance value of the near-infrared pixel and a sum of luminance values of the M × N near-infrared pixels; and determining a target heat value of the near-infrared pixel according to the product of the sum of the first heat values of the M x N near-infrared pixels and the ratio of the brightness values of the near-infrared pixels in the M x N near-infrared pixels.
Optionally, the determining module 72 determines, for each near-infrared pixel in the M × N near-infrared pixels, a target heat value of the near-infrared pixel according to the brightness value of the near-infrared pixel and the first heat value of the thermal imaging pixel corresponding to the near-infrared pixel, specifically including: determining a first heat value sum of the M x N near-infrared pixels according to a product of a first heat value of a thermal imaging pixel corresponding to the M x N near-infrared pixels and the M x N near-infrared pixels; determining, for each of the M × N near-infrared pixels, a luminance value fraction of the near-infrared pixel in the M × N near-infrared pixels according to a luminance value of the near-infrared pixel and a sum of luminance values of the M × N near-infrared pixels; determining a second heat value of the near-infrared pixel according to the product of the sum of the first heat values of the M x N near-infrared pixels and the ratio of the brightness values of the near-infrared pixels in the M x N near-infrared pixels; determining a heat deviation value of the near-infrared pixels according to a difference value of first heat values between thermal imaging pixels corresponding to the M x N near-infrared pixels and adjacent thermal imaging pixels; and determining a target heat value of the near-infrared pixel according to the second heat value of the near-infrared pixel and the heat deviation value.
Optionally, the determining module 72 determines the heat deviation value of the near-infrared pixel according to a difference between the thermal imaging pixel corresponding to the M × N near-infrared pixels and the first heat value of the adjacent thermal imaging pixel, specifically including: determining the distance between a central pixel point and an edge pixel point in the M x N near-infrared pixels; determining a difference value of a first thermal value between a thermal imaging pixel corresponding to the M x N near infrared pixels and an adjacent thermal imaging pixel; and obtaining the heat deviation value of the near-infrared pixel according to the ratio of the difference value of the first heat value to the distance.
Optionally, the determining module 72 determines the target heat value of the near-infrared pixel according to the second heat value of the near-infrared pixel and the heat offset value, and specifically includes: and obtaining a target heat value of the near-infrared pixel according to the second heat value of the near-infrared pixel and the sum of the products of the heat deviation value and the adjacent pixel heat value influence coefficient.
Optionally, the interpolation module 73 interpolates the visible light image according to the target heat value of the near-infrared pixel to obtain a target image obtained by fusing the thermal imaging image, the visible light image, and the near-infrared image, and specifically includes: and inserting the target heat value of the near-infrared pixel into the visible light pixel corresponding to the near-infrared pixel according to the corresponding relation between the near-infrared pixel and the visible light pixel in the visible light image to obtain the target heat value of the visible light pixel.
Optionally, the interpolation module 73 interpolates the visible light image according to the target heat value of the near-infrared pixel to obtain a target image obtained by fusing the thermal imaging image, the visible light image, and the near-infrared image, and specifically includes: inserting the target heat value of the near-infrared pixel into the visible light pixel corresponding to the near-infrared pixel according to the corresponding relation between the near-infrared pixel and the visible light pixel in the visible light image to obtain an initial heat value of the visible light pixel; and adjusting the initial heat value of the visible light pixel according to the heat difference between the maximum target heat value and the minimum target heat value in the near-infrared image to obtain the target heat value of the visible light pixel.
Optionally, the interpolation module 73 adjusts the initial heat value of the visible light pixel according to a heat difference between a maximum target heat value and a minimum target heat value in the near-infrared image to obtain the target heat value of the visible light pixel, and specifically includes: determining a heat difference between a maximum target heat value and a minimum target heat value in the near-infrared image and a target ratio of target color values; and determining the product of the target ratio and the initial heat value of the visible light pixel as the target heat value of the visible light pixel.
Optionally, the resolution of the near-infrared image is Wir × Hir, and the resolution of the thermal imaging image is Wt × Ht; the interpolation module 73 determines, according to the resolution of the near-infrared image and the resolution of the thermal imaging image, M × N near-infrared pixels corresponding to each thermal imaging pixel in the thermal imaging image, and specifically includes: obtaining M × N near-infrared pixels corresponding to each thermal imaging pixel in the thermal imaging image according to the ratio of the resolution of the near-infrared image to the resolution of the thermal imaging image; wherein the value of M is equal to the ratio of Wir to Wt, and the value of N is equal to the ratio of Hir to Ht.
The image fusion processing device provided in the embodiment of the present application can be used for implementing the technical scheme of the image fusion processing method in the above embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
It should be noted that the division of the modules of the above apparatus is only a logical division; in actual implementation they may be wholly or partially integrated into one physical entity, or may be physically separate. These modules may all be implemented in the form of software invoked by a processing element, or all in hardware, or some in software invoked by a processing element and some in hardware. For example, the determining module 72 may be a separate processing element, or may be integrated into a chip of the apparatus, or may be stored in a memory of the apparatus in the form of program code, with a processing element of the apparatus calling and executing the function of the determining module 72. The other modules are implemented similarly. In addition, all or some of the modules may be integrated together or implemented independently. The processing element may be an integrated circuit having signal processing capability. In implementation, each step of the above method, or each of the above modules, may be implemented by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 8, the electronic device may include: transceiver 81, processor 82, memory 83.
The processor 82 executes the computer-executable instructions stored in the memory, causing the processor 82 to perform the technical solutions of the embodiments described above. The processor 82 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The memory 83 is coupled to and communicates with the processor 82 via the system bus, and the memory 83 is used for storing computer program instructions.
The transceiver 81 may be used to acquire the thermal imaging image, the visible light image, and the near-infrared image obtained by performing image acquisition on the same target object.
The system bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus. The transceiver is used to enable communication between the database access device and other computers (e.g., clients, read-write libraries, and read-only libraries). The memory may include Random Access Memory (RAM) and may also include non-volatile memory (non-volatile memory).
The electronic device provided by the embodiment of the application may be the computing platform of the above embodiment.
The embodiment of the present application further provides a chip for running instructions, where the chip is configured to execute the technical solution of the image fusion processing method in the foregoing embodiments.
The embodiment of the present application further provides a computer-readable storage medium storing computer instructions; when the computer instructions run on a computer, the computer is caused to execute the technical solution of the image fusion processing method in the foregoing embodiments.
The embodiment of the present application further provides a computer program product. The computer program product includes a computer program stored in a computer-readable storage medium; at least one processor can read the computer program from the computer-readable storage medium, and when the at least one processor executes the computer program, the technical solution of the image fusion processing method in the foregoing embodiments is implemented.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (12)
1. An image fusion processing method is characterized by comprising the following steps:
acquiring a thermal imaging image, a visible light image and a near-infrared image obtained by performing image acquisition on the same target object; the resolution of the visible light image is the same as that of the near-infrared image, and the resolution of the near-infrared image is greater than that of the thermal imaging image; each near-infrared pixel in the near-infrared image corresponds to a brightness value; each thermal imaging pixel in the thermal imaging image corresponds to a first heat value;
determining M x N near-infrared pixels corresponding to each thermal imaging pixel in the thermal imaging image according to the resolution of the near-infrared image and the resolution of the thermal imaging image; both M and N are integers greater than 1;
determining a target heat value of each near-infrared pixel in the M x N near-infrared pixels according to the brightness value of the near-infrared pixel and a first heat value of a thermal imaging pixel corresponding to the near-infrared pixel;
and interpolating the visible light image according to the target heat value of the near-infrared pixel to obtain a target image obtained by fusing the thermal imaging image, the visible light image and the near-infrared image.
2. The method of claim 1, wherein determining, for each of the M x N near-infrared pixels, a target thermal value for the near-infrared pixel based on a luminance value of the near-infrared pixel and a first thermal value of a thermal imaging pixel to which the near-infrared pixel corresponds comprises:
determining a sum of first heat values of the M × N near-infrared pixels according to the product of the first heat value of the thermal imaging pixel corresponding to the M × N near-infrared pixels and the number M × N of near-infrared pixels;
determining, for each near-infrared pixel of the M × N near-infrared pixels, the brightness-value ratio of the near-infrared pixel among the M × N near-infrared pixels according to the brightness value of the near-infrared pixel and the sum of the brightness values of the M × N near-infrared pixels;
and determining the target heat value of the near-infrared pixel according to the product of the sum of the first heat values of the M × N near-infrared pixels and the brightness-value ratio of the near-infrared pixel among the M × N near-infrared pixels.
3. The method of claim 1, wherein determining, for each of the M x N near-infrared pixels, a target thermal value for the near-infrared pixel based on a luminance value of the near-infrared pixel and a first thermal value of a thermal imaging pixel to which the near-infrared pixel corresponds comprises:
determining a sum of first heat values of the M × N near-infrared pixels according to the product of the first heat value of the thermal imaging pixel corresponding to the M × N near-infrared pixels and the number M × N of near-infrared pixels;
determining, for each near-infrared pixel of the M × N near-infrared pixels, the brightness-value ratio of the near-infrared pixel among the M × N near-infrared pixels according to the brightness value of the near-infrared pixel and the sum of the brightness values of the M × N near-infrared pixels;
determining a second heat value of the near-infrared pixel according to the product of the sum of the first heat values of the M × N near-infrared pixels and the brightness-value ratio of the near-infrared pixel among the M × N near-infrared pixels;
determining a heat deviation value of the near-infrared pixel according to the difference between the first heat value of the thermal imaging pixel corresponding to the M × N near-infrared pixels and the first heat values of its adjacent thermal imaging pixels;
and determining the target heat value of the near-infrared pixel according to the second heat value of the near-infrared pixel and the heat deviation value.
4. The method according to claim 3, wherein determining the heat offset value of the near-infrared pixel according to the difference of the first heat values between the thermal imaging pixel and the adjacent thermal imaging pixel corresponding to the M x N near-infrared pixels comprises:
determining the distance between a central pixel point and an edge pixel point in the M x N near-infrared pixels;
determining a difference value of a first thermal value between a thermal imaging pixel corresponding to the M x N near infrared pixels and an adjacent thermal imaging pixel;
and obtaining the heat deviation value of the near-infrared pixel according to the ratio of the difference value of the first heat value to the distance.
5. The method of claim 3, wherein determining the target heat value for the near-infrared pixel based on the second heat value for the near-infrared pixel and the heat offset value comprises:
and obtaining the target heat value of the near-infrared pixel as the sum of the second heat value of the near-infrared pixel and the product of the heat deviation value and an adjacent-pixel heat influence coefficient.
6. The method according to any one of claims 1 to 5, wherein the interpolating the visible light image according to the target heat value of the near-infrared pixel to obtain a target image obtained by fusing the thermal imaging image, the visible light image and the near-infrared image comprises:
and inserting the target heat value of the near-infrared pixel into the visible light pixel corresponding to the near-infrared pixel according to the corresponding relation between the near-infrared pixel and the visible light pixel in the visible light image to obtain the target heat value of the visible light pixel.
7. The method according to any one of claims 1 to 5, wherein the interpolating the visible light image according to the target heat value of the near-infrared pixel to obtain a target image obtained by fusing the thermal imaging image, the visible light image and the near-infrared image comprises:
inserting the target heat value of the near-infrared pixel into the visible light pixel corresponding to the near-infrared pixel according to the corresponding relation between the near-infrared pixel and the visible light pixel in the visible light image to obtain an initial heat value of the visible light pixel;
and adjusting the initial heat value of the visible light pixel according to the heat difference between the maximum target heat value and the minimum target heat value in the near-infrared image to obtain the target heat value of the visible light pixel.
8. The method of claim 7, wherein the adjusting the initial heat value of the visible light pixel according to the heat difference between the maximum target heat value and the minimum target heat value in the near-infrared image to obtain the target heat value of the visible light pixel comprises:
determining a heat difference between a maximum target heat value and a minimum target heat value in the near-infrared image and a target ratio of target color values;
and determining the product of the target ratio and the initial heat value of the visible light pixel as the target heat value of the visible light pixel.
9. The method according to any one of claims 1-5, wherein the resolution of the near-infrared image is Wir × Hir, and the resolution of the thermal imaging image is Wt × Ht;
determining M x N near-infrared pixels corresponding to each thermal imaging pixel in the thermal imaging image according to the resolution of the near-infrared image and the resolution of the thermal imaging image, including:
obtaining M x N near-infrared pixels corresponding to each thermal imaging pixel in the thermal imaging image according to the ratio of the resolution of the near-infrared image to the resolution of the thermal imaging image;
wherein, the value of M is equal to the ratio of Wir to Wt, and the value of N is equal to the ratio of Hir to Ht.
10. An image fusion processing apparatus characterized by comprising:
the acquisition module is used for acquiring a thermal imaging image, a visible light image and a near-infrared image obtained by performing image acquisition on the same target object; the resolution of the visible light image is the same as that of the near-infrared image, and the resolution of the near-infrared image is greater than that of the thermal imaging image; each near-infrared pixel in the near-infrared image corresponds to a brightness value; each thermal imaging pixel in the thermal imaging image corresponds to a first heat value;
a determining module, configured to determine, according to a resolution of the near-infrared image and a resolution of the thermal imaging image, M × N near-infrared pixels corresponding to each thermal imaging pixel in the thermal imaging image; both M and N are integers greater than 1;
the determining module is further configured to determine, for each near-infrared pixel of the M × N near-infrared pixels, a target heat value of the near-infrared pixel according to the brightness value of the near-infrared pixel and a first heat value of a thermal imaging pixel corresponding to the near-infrared pixel;
and the interpolation module is used for interpolating the visible light image according to the target heat value of the near-infrared pixel to obtain a target image obtained by fusing the thermal imaging image, the visible light image and the near-infrared image.
11. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to implement the method of any of claims 1-9.
12. A computer-readable storage medium having computer-executable instructions stored therein, which when executed by a processor, are configured to implement the method of any one of claims 1-9.
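The method of claims 1-2 (redistributing each thermal pixel's heat over its M × N near-infrared block in proportion to brightness) and the claim-8 scaling can be sketched as follows. This is a hedged illustration under assumed conventions (NumPy arrays, integer resolution ratios); all function and variable names are hypothetical and not part of the claims.

```python
import numpy as np

def redistribute_heat(thermal, nir_luma):
    """Claims 1-2 sketch: spread each thermal pixel's first heat value over
    its M x N near-infrared block, weighted by each NIR pixel's share of the
    block's total brightness, yielding a heat map at the NIR resolution."""
    ht, wt = thermal.shape
    hir, wir = nir_luma.shape
    m, n = wir // wt, hir // ht  # M = Wir/Wt, N = Hir/Ht
    out = np.zeros_like(nir_luma, dtype=float)
    for y in range(ht):
        for x in range(wt):
            block = nir_luma[y * n:(y + 1) * n, x * m:(x + 1) * m].astype(float)
            total = float(thermal[y, x]) * m * n  # sum of first heat values
            s = block.sum()
            # brightness-value ratio per NIR pixel (uniform if the block is dark)
            w = block / s if s > 0 else np.full_like(block, 1.0 / (m * n))
            out[y * n:(y + 1) * n, x * m:(x + 1) * m] = total * w
    return out

def scale_to_color(heat, target_color=255.0):
    """Claim-8-style sketch: multiply heat values by the ratio of a target
    color value to the span between the maximum and minimum heat values."""
    span = heat.max() - heat.min()
    ratio = target_color / span if span > 0 else 1.0
    return heat * ratio
```

Note the design choice implied by claim 2: the block's total heat is conserved (the weights sum to 1 over each block), so the redistribution sharpens thermal detail along bright NIR edges without changing the overall heat of a thermal pixel's footprint.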
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210618014.4A CN114693581B (en) | 2022-06-02 | 2022-06-02 | Image fusion processing method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114693581A CN114693581A (en) | 2022-07-01 |
CN114693581B true CN114693581B (en) | 2022-09-06 |
Family
ID=82131262
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210618014.4A Active CN114693581B (en) | 2022-06-02 | 2022-06-02 | Image fusion processing method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114693581B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017020595A1 (en) * | 2015-08-05 | 2017-02-09 | 武汉高德红外股份有限公司 | Visible light image and infrared image fusion processing system and fusion method |
CN111062378A (en) * | 2019-12-23 | 2020-04-24 | 重庆紫光华山智安科技有限公司 | Image processing method, model training method, target detection method and related device |
CN112712485A (en) * | 2019-10-24 | 2021-04-27 | 杭州海康威视数字技术股份有限公司 | Image fusion method and device |
WO2021184029A1 (en) * | 2020-11-12 | 2021-09-16 | Innopeak Technology, Inc. | Systems and methods for fusing color image and near-infrared image |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7340099B2 (en) * | 2003-01-17 | 2008-03-04 | University Of New Brunswick | System and method for image fusion |
TWI666935B (en) * | 2017-07-12 | 2019-07-21 | 謝基生 | A mini thermography for enhance nir captures images |
CN112053314B (en) * | 2020-09-04 | 2024-02-23 | 深圳市迈测科技股份有限公司 | Image fusion method, device, computer equipment, medium and thermal infrared imager |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder |

Address after: 518100 Guangdong Shenzhen Baoan District Xixiang street, Wutong Development Zone, Taihua Indus Industrial Park 8, 3 floor. Patentee after: Shenzhen Haiqing Zhiyuan Technology Co.,Ltd. Address before: 518100 Guangdong Shenzhen Baoan District Xixiang street, Wutong Development Zone, Taihua Indus Industrial Park 8, 3 floor. Patentee before: SHENZHEN HIVT TECHNOLOGY Co.,Ltd. |