CN116342445A - Method and system for fusing visible light image and infrared image

Info

Publication number: CN116342445A
Application number: CN202310197764.3A
Authority: CN (China)
Prior art keywords: image, visible light image, infrared image
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 黄晟 (Huang Sheng), 杨能 (Yang Neng), 吴攀峰 (Wu Panfeng)
Current and original assignee: Wuhan Guide Sensmart Tech Co., Ltd.
Filing date: 2023-03-03, by Wuhan Guide Sensmart Tech Co., Ltd.
Priority date: 2023-03-03 (CN202310197764.3A)
Publication date: 2023-06-27

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/60 Rotation of whole images or parts thereof
    • G06T3/608 Rotation of whole images or parts thereof by skew deformation, e.g. two-pass or three-pass rotation
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and a system for fusing a visible light image and an infrared image. The method comprises the following steps: S1) establishing a pixel point coordinate mapping relation between the infrared image and the visible light image; S2) acquiring a transformation matrix according to the coordinate mapping relation; S3) performing a perspective transformation on the visible light image using the transformation matrix; S4) fusing the perspective-transformed visible light image with the infrared image to obtain a fused image. The invention calibrates the two pictures using several groups of contour points on the target object, so that multiple groups of fusion contour points can be extracted quickly and conveniently. Because directly scaling and fusing the visible light image onto the infrared thermal image can leave the target object misaligned, the invention obtains the mapping relation from points selected on the interface and then performs a perspective transformation with that mapping relation, so that the images are fused and superimposed correctly. The invention restores the shape and angle of the target object in the fused image, and transparency fusion preserves the rich details of the target object region in the fused image.

Description

Method and system for fusing visible light image and infrared image
Technical Field
The invention relates to the technical field of visible light and infrared image fusion, and in particular to a method and a system for fusing a visible light image and an infrared image.
Background
Visual information is an important channel through which humans acquire information, and vision sensors, as tools that assist humans in acquiring visual information, occupy an important position in production and daily life. However, faced with complex and changing natural environments, the information obtained by a single vision sensor is very limited, which is why image fusion technology emerged. Infrared imaging devices and visible light imaging devices have received wide attention in the imaging field; fusing an infrared image with a visible light image combines the advantages of both and yields richer detail.
In existing fusion schemes on dual-light infrared thermal imaging devices, the visible light image is simply scaled to the size of the infrared image and the two are directly superimposed according to a transparency to produce the fused image. However, because the visible light image and the infrared image have different angles of view, directly scaling and superimposing the visible light image on the infrared thermal image can leave the target object misaligned. Moreover, when the images are captured, the visible light lens and the infrared lens may not lie in the same horizontal or vertical plane, so the target object may appear spatially and angularly distorted between the visible light image and the infrared image.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, and provides a method and a system for fusing a visible light image and an infrared image.
In order to achieve the expected effect, the invention adopts the following technical scheme:
The invention discloses a method for fusing a visible light image and an infrared image, which comprises the following steps:
S1) establishing a pixel point coordinate mapping relation between the infrared image and the visible light image;
S2) acquiring a transformation matrix according to the coordinate mapping relation;
S3) performing a perspective transformation on the visible light image using the transformation matrix;
S4) fusing the perspective-transformed visible light image with the infrared image to obtain a fused image.
Further, S1) specifically comprises: selecting a plurality of corresponding pixel points respectively in the target object regions of the visible light image and the infrared image to establish the coordinate mapping relation.
Further, pixel points of at least three contour points of the target object in the infrared image are selected to establish the coordinate mapping relation with the corresponding pixel points in the visible light image.
Further, the visible light image in S3) is scaled in resolution before the perspective transformation, the scaled resolution being consistent with the resolution of the infrared image to be fused.
Further, the perspective transformation specifically comprises: adjusting the infrared image and the visible light image to the same angle of view.
Further, S4) specifically comprises: superimposing the perspective-transformed visible light image on the infrared image according to transparency weights for fusion, to obtain the fused image.
Further, the fusing specifically comprises the following steps: acquiring luminance information of the infrared image and the visible light image; determining, from the luminance information, a target rectangular region in the visible light image in which the target coincides with the target in the infrared image; cropping the visible light image according to the target rectangular region; scaling the cropped visible light image to the size of the infrared image; and fusing the scaled visible light image with the infrared image in a certain proportion.
Further, four vertices of the target rectangular region are selected to establish the coordinate mapping relation.
Further, the method further comprises: performing gray-scale processing on the fused image to obtain a final fused image.
The invention also discloses a system for fusing the visible light image and the infrared image, which comprises:
the receiving module is used for receiving the infrared image and the visible light image;
the image fusion module is used for establishing a pixel point coordinate mapping relation between the infrared image and the visible light image, acquiring a transformation matrix of the infrared image and the visible light image according to the coordinate mapping relation, performing a perspective transformation on the visible light image using the transformation matrix, and fusing the perspective-transformed visible light image with the infrared image to obtain a fused image.
Compared with the prior art, the invention has the following beneficial effects. The invention provides a method and a system for fusing a visible light image and an infrared image, the method comprising: S1) establishing a pixel point coordinate mapping relation between the infrared image and the visible light image; S2) acquiring a transformation matrix according to the coordinate mapping relation; S3) performing a perspective transformation on the visible light image using the transformation matrix; S4) fusing the perspective-transformed visible light image with the infrared image to obtain a fused image. The invention calibrates the two pictures using several groups of contour points on the target object, so that multiple groups of fusion contour points can be extracted quickly and conveniently. Because directly scaling and fusing the visible light image onto the infrared thermal image can leave the target object misaligned, the invention obtains the mapping relation from points selected on the interface and then performs a perspective transformation with that mapping relation, so that the images are fused and superimposed correctly. The point selection is very convenient: corresponding contour points are simply selected on the same target object. With the invention, after the angular distortion of the target object between the visible light image and the infrared image is corrected through the coordinate mapping relation, the shape and angle of the target object are restored in the fused image, and transparency fusion preserves the rich details of the target object region.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required by the embodiments or the description of the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; for a person skilled in the art, other drawings can be obtained from them without inventive effort.
Fig. 1 is a flowchart of a method for fusing a visible light image and an infrared image according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of the 5-point coordinate mapping in the method according to an embodiment of the present invention.
Fig. 3 is an example diagram of the perspective transformation in the method according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of the fusion effect of the method according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of the target rectangular region in the method according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of the cropped visible light image in the method according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of the fused image after gray-scale processing in the method according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of the infrared image in the method according to an embodiment of the present invention.
Fig. 9 is a schematic diagram of the fused image in the method according to an embodiment of the present invention.
Detailed Description
The following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of protection of the invention.
Referring to fig. 1 to 9, the invention discloses a method for fusing a visible light image and an infrared image, which comprises the following steps:
S1) establishing a pixel point coordinate mapping relation between the infrared image and the visible light image. This step associates the infrared image with the visible light image through pixel points, so that the transformation matrix can be obtained subsequently.
S2) acquiring a transformation matrix according to the coordinate mapping relation. This step can be implemented with third-party software: the pixel point coordinate mapping data of the infrared image and the visible light image are fed into the software, which returns the transformation matrix used for the subsequent perspective transformation. In a preferred embodiment, the OpenCV open-source library is used to obtain the transformation matrix between the visible light image and the infrared image.
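As an illustration only (not part of the original disclosure), S2) can be sketched with the OpenCV library named in this embodiment. This is a minimal sketch: the five point pairs are hypothetical interface selections on the same target object, and the coordinates are invented for illustration.

```python
import numpy as np
import cv2

# Five contour points of the target object selected in the visible light
# image (hypothetical interface selections) ...
vis_pts = np.float32([[330, 240], [470, 235], [400, 340], [345, 415], [450, 418]])
# ... and the five corresponding points selected in the infrared image.
ir_pts = np.float32([[102, 80], [160, 78], [131, 120], [110, 150], [152, 151]])

# With more than four correspondences the system is over-determined, so
# findHomography is used; getPerspectiveTransform would require exactly four.
H, _ = cv2.findHomography(vis_pts, ir_pts, cv2.RANSAC)
print(H)  # 3x3 transformation matrix mapping visible -> infrared coordinates
```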
S3) performing a perspective transformation on the visible light image using the transformation matrix. This step is the core inventive point relative to the prior art: existing schemes have no perspective transformation operation, so a visible light image and an infrared image with different angles of view cannot be fused effectively. After the perspective transformation, the target object is guaranteed to occupy the same position in the visible light image and in the infrared image.
S4) fusing the perspective-transformed visible light image with the infrared image to obtain a fused image. After the perspective transformation the two images can be fused effectively: the shape and angle of the target object are restored in the fused image, and transparency fusion preserves the rich details of the target object region.
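Continuing the sketch above, steps S3) and S4) map naturally onto OpenCV's warpPerspective and addWeighted. The file names, the 50%/50% weights, and the assumption that the point pairs were selected on the already-scaled visible light image are ours, not the patent's.

```python
import cv2

infrared = cv2.imread("infrared.jpg")  # hypothetical input files
visible = cv2.imread("visible.jpg")

# Scale the visible light image to the infrared resolution first (see the
# resolution-scaling embodiment below); H is assumed to have been estimated
# from points selected on this scaled image.
ir_h, ir_w = infrared.shape[:2]
visible = cv2.resize(visible, (ir_w, ir_h))

# S3) perspective transformation with the 3x3 transformation matrix H.
warped = cv2.warpPerspective(visible, H, (ir_w, ir_h))

# S4) transparency-weighted superposition (50%/50% in a preferred embodiment).
fused = cv2.addWeighted(warped, 0.5, infrared, 0.5, 0)
cv2.imwrite("fused.jpg", fused)
```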
In a preferred embodiment, S1) specifically comprises: selecting a plurality of corresponding pixel points respectively in the target object regions of the visible light image and the infrared image to establish the coordinate mapping relation. Typically only 3 points are needed to complete the mapping relation.
In a preferred embodiment, a coordinate mapping relation is established between the pixel points of at least three contour points of the target object in the infrared image and the corresponding pixel points in the visible light image. The number of pixel points used to establish the coordinate mapping relation determines the level of detail: the more points, the finer the detail and the more accurate the mapping relation, but some performance is sacrificed. In general, 3 points suffice to complete the mapping relation, but experiments show that selecting the pixel points of 5 corresponding contour points gives a better result. As shown in fig. 2, 5 points are selected to establish the coordinate mapping relation. For example, suppose the image contains a face, a computer, and a wall, and the target object is the face. Because the angles of view differ, 5 pixel points such as the eyes, mouth, nose, ears, and eyebrows are selected in the visible light image of the face, and the corresponding pixel points are selected in the infrared image, establishing the corresponding coordinate mapping relation. A perspective transformation using this mapping relation restores the distorted face to its normal position.
In a preferred embodiment, the visible light image in S3) is scaled in resolution before the perspective transformation, the scaled resolution being consistent with that of the infrared image to be fused. Scaling the visible light image to the resolution of the infrared image before the perspective transformation further improves the fusion effect.
Further, the perspective transformation specifically comprises: adjusting the infrared image and the visible light image to the same angle of view. A perspective transformation uses the condition that the center of perspectivity, the image point, and the object point are collinear and, following the law of perspective rotation, rotates the image (bearing) plane around the trace line (the axis of perspectivity) by a certain angle. This destroys the original bundle of projecting rays while keeping the projected geometric figure on the image plane unchanged. In short, a plane is projected onto a designated plane by a projection matrix, and the projected geometric figure on the image plane is preserved, as shown in fig. 3. The perspective transformation covers a series of operations such as translation, scaling, flipping, and rotation, and supports not only planar transformations but also spatial, projective, and affine transformations. Its purpose here is to bring the angle of view of the infrared image and that of the visible light image as close to the same angle as possible, so as to improve the fusion effect.
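In matrix form (notation ours, not from the original text), the perspective transformation maps a pixel (x, y) of the visible light image to (u, v) in infrared image coordinates through the 3x3 transformation matrix in homogeneous coordinates:

```latex
\begin{pmatrix} x' \\ y' \\ w \end{pmatrix}
=
\begin{pmatrix}
h_{11} & h_{12} & h_{13} \\
h_{21} & h_{22} & h_{23} \\
h_{31} & h_{32} & h_{33}
\end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix},
\qquad
(u, v) = \left( \frac{x'}{w}, \frac{y'}{w} \right)
```

The division by w is what distinguishes a perspective transformation from an affine one: an affine transformation has h31 = h32 = 0 and preserves parallelism, whereas a general perspective transformation preserves only the straightness of lines.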
In a preferred embodiment, S4) specifically comprises: superimposing the perspective-transformed visible light image on the infrared image according to transparency weights to obtain the fused image, as shown in fig. 4. The transparency weights can be adjusted flexibly as required to achieve the best fusion effect. In a preferred embodiment, the infrared image and the visible light image each have a transparency weight of 50%.
On the one hand, the method supports single-channel processing in which only luminance information is retained, which improves fusion efficiency and reduces resource consumption. For example, the YUV420 (pixel picture format) data of the original visible light image is acquired and only the luminance information, i.e. the Y component of the YUV format, is retained; the Y (luminance) image of the infrared image is acquired at the same time. A sketch of this variant follows the next paragraph.
On the other hand, the method can also operate on the three RGB channels simultaneously, which effectively preserves the chrominance information of the visible light image but consumes more resources than the single-channel variant.
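The single-channel variant described above can be sketched as follows. The I420 plane layout (the Y plane stacked above the subsampled U and V planes) is standard; the file names are hypothetical, and the image height and width are assumed even, as I420 requires.

```python
import cv2

visible = cv2.imread("visible.jpg")  # BGR visible-light frame
h, w = visible.shape[:2]

# Convert to YUV420 (I420): the result is a (h * 3 // 2, w) single-channel
# array whose first h rows are the Y (luminance) plane.
yuv420 = cv2.cvtColor(visible, cv2.COLOR_BGR2YUV_I420)
y_visible = yuv420[:h, :]  # retain only the luminance information

# The Y image of the infrared frame; thermal frames are often already
# single-channel, otherwise a grayscale load yields the luminance directly.
y_infrared = cv2.imread("infrared.jpg", cv2.IMREAD_GRAYSCALE)
```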
In a preferred embodiment, the fusing specifically comprises the following steps: acquiring luminance information of the infrared image and the visible light image; determining, from the luminance information, a target rectangular region in the visible light image in which the target coincides with the target in the infrared image; cropping the visible light image according to the target rectangular region; scaling the cropped visible light image to the size of the infrared image; and fusing the scaled visible light image with the infrared image in a certain proportion, as shown in figs. 5, 6 and 7. In a preferred embodiment, the proportion ranges from 0 to 100%, with a default value of 50%. Superimposing and fusing the visible light image on the infrared image in this way greatly improves the fusion effect.
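A minimal sketch of this cropping-and-scaling fusion, continuing the earlier sketches (visible and infrared as loaded above); the rectangle coordinates are hypothetical values obtained by dragging the interface frame until the targets coincide.

```python
import cv2

x, y, rect_w, rect_h = 300, 180, 420, 560      # target rectangle on the visible image
cropped = visible[y:y + rect_h, x:x + rect_w]  # remove everything outside the region

ir_h, ir_w = infrared.shape[:2]
scaled = cv2.resize(cropped, (ir_w, ir_h))     # scale to the infrared image size

ratio = 0.5                                    # fusion proportion, default 50%
fused = cv2.addWeighted(scaled, ratio, infrared, 1.0 - ratio, 0)
```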
In a preferred embodiment, the method further comprises: performing gray-scale processing on the fused image to obtain the final fused image, as shown in figs. 8 and 9. Specifically, first, the YUV420 data of the original visible light image is acquired and only the luminance information, i.e. the Y component, is retained; the Y (luminance) image of the infrared image is acquired at the same time. Second, the target object (a person's head) is selected in the visible light image and the interface is adjusted manually, i.e. the frame is dragged up, down, left and right until the target objects coincide, which yields the visible light target rectangular region. Next, the visible light image is cropped with the target rectangle information (the part outside the region is removed), yielding the cropped visible light luminance image. The cropped visible light image is scaled to the size of the infrared image, and the scaled visible light image and the infrared image are fused in a certain fusion proportion. Finally, a pseudo-color palette is applied to the fused image, converting the luminance image into an RGB color image; typical palettes include iron-oxide red ("ironbow"), medical, and so on.
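The gray-scale processing and pseudo-color step can be sketched as follows. OpenCV ships no "iron oxide red" (ironbow) or medical palette, so COLORMAP_HOT is used here purely as a stand-in assumption.

```python
import cv2

# Gray-scale processing of the fused image (a no-op if it is already
# single-channel), followed by a pseudo-color palette that converts the
# luminance image into a color image.
gray = cv2.cvtColor(fused, cv2.COLOR_BGR2GRAY) if fused.ndim == 3 else fused
pseudo = cv2.applyColorMap(gray, cv2.COLORMAP_HOT)
cv2.imwrite("fused_pseudo.jpg", pseudo)
```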
In another embodiment, the four vertices of the target rectangular region are selected to establish the coordinate mapping relation: a perspective transformation is applied to the visible light image according to that mapping relation, and the result is finally superimposed and fused with the infrared image. However, this approach may be less effective than selecting 5 points to establish the coordinate mapping relation.
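With exactly four correspondences, as in this four-vertex embodiment, the transformation matrix can be obtained directly; the vertex coordinates below are hypothetical.

```python
import numpy as np
import cv2

# Four vertices of the target rectangle in the visible light image ...
vis_quad = np.float32([[300, 180], [720, 180], [720, 740], [300, 740]])
# ... and the four corresponding points in the infrared image.
ir_quad = np.float32([[90, 55], [215, 55], [215, 225], [90, 225]])

# getPerspectiveTransform solves the exact 4-point case (no RANSAC needed).
H4 = cv2.getPerspectiveTransform(vis_quad, ir_quad)
warped = cv2.warpPerspective(visible, H4, (infrared.shape[1], infrared.shape[0]))
```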
Based on the same idea, the invention also discloses a system for fusing a visible light image and an infrared image, comprising:
the receiving module is used for receiving the infrared image and the visible light image;
the image fusion module is used for establishing a pixel point coordinate mapping relation between the infrared image and the visible light image; this associates the two images through pixel points so that the transformation matrix can be obtained subsequently. The image fusion module is also used for acquiring the transformation matrix of the infrared image and the visible light image according to the coordinate mapping relation; this can be implemented with third-party software, by feeding the pixel point coordinate mapping data into the software to obtain the transformation matrix for the subsequent perspective transformation, and in a preferred embodiment the OpenCV open-source library is used. The image fusion module is also used for performing a perspective transformation on the visible light image using the transformation matrix; as noted above, this is the core inventive point relative to the prior art, and it guarantees that the target object occupies the same position in both images. The image fusion module is also used for fusing the perspective-transformed visible light image with the infrared image to obtain a fused image, in which the shape and angle of the target object are restored and the rich details of the target object region are preserved through transparency fusion.
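A minimal sketch of the two disclosed modules gathered into a single class (class and method names are ours, not the patent's); the fusion method chains the four steps from the sketches above.

```python
import numpy as np
import cv2

class FusionSystem:
    """Sketch of the disclosed system: a receiving module plus an image
    fusion module (naming is ours, not from the patent)."""

    def receive(self, infrared, visible):
        # Receiving module: accepts the infrared and visible-light images.
        self.infrared = infrared
        self.visible = visible

    def fuse(self, ir_pts, vis_pts, alpha=0.5):
        # Image fusion module: mapping relation -> transformation matrix
        # -> perspective transformation -> transparency fusion.
        ir_h, ir_w = self.infrared.shape[:2]
        visible = cv2.resize(self.visible, (ir_w, ir_h))  # scale to IR resolution
        # vis_pts are assumed to be selected on the scaled visible image.
        H, _ = cv2.findHomography(np.float32(vis_pts), np.float32(ir_pts), cv2.RANSAC)
        warped = cv2.warpPerspective(visible, H, (ir_w, ir_h))
        infrared = self.infrared
        if infrared.ndim == 2:  # promote a single-channel IR frame to BGR
            infrared = cv2.cvtColor(infrared, cv2.COLOR_GRAY2BGR)
        return cv2.addWeighted(warped, alpha, infrared, 1.0 - alpha, 0)
```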
In preferred embodiments, the image fusion module implements the same preferred features described above for the method, which are not repeated here: selecting several corresponding pixel points (typically 3, preferably the 5 contour points of fig. 2) in the target object regions to establish the coordinate mapping relation; scaling the visible light image to the resolution of the infrared image before the perspective transformation; adjusting both images to the same angle of view; superimposing the perspective-transformed visible light image on the infrared image according to transparency weights (50% each in a preferred embodiment, as shown in fig. 4); operating either on the luminance channel alone or on all three RGB channels; cropping the visible light image to the target rectangular region, scaling it to the infrared image size, and fusing in a certain proportion (0 to 100%, default 50%, as shown in figs. 5, 6 and 7); performing gray-scale processing and pseudo-color rendering on the fused image (figs. 8 and 9); and, in another embodiment, establishing the coordinate mapping relation from the four vertices of the target rectangular region.
In summary, the invention calibrates the two pictures using several groups of contour points on the target object, so that multiple groups of fusion contour points can be extracted quickly and conveniently. Because directly scaling and fusing the visible light image onto the infrared thermal image can leave the target object misaligned, the invention obtains the mapping relation from points selected on the interface and then performs a perspective transformation with that mapping relation, achieving correct fusion and superposition. The point selection is very convenient: corresponding contour points are simply selected on the same target object. After the angular distortion of the target object between the visible light image and the infrared image is corrected through the coordinate mapping relation, the shape and angle of the target object are restored in the fused image, and transparency fusion preserves the rich details of the target object region.
The invention also discloses an electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus. The processor may invoke logic instructions in the memory to perform the method for fusing a visible light image with an infrared image, the method comprising: S1) establishing a pixel point coordinate mapping relation between the infrared image and the visible light image; S2) acquiring a transformation matrix according to the coordinate mapping relation; S3) performing a perspective transformation on the visible light image using the transformation matrix; S4) fusing the perspective-transformed visible light image with the infrared image to obtain a fused image.
Further, the logic instructions in the memory may be implemented as software functional units and, when sold or used as a stand-alone product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In another aspect, embodiments of the present invention further provide a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the method for fusing a visible light image with an infrared image provided by the above method embodiments, the method comprising: S1) establishing a pixel point coordinate mapping relation between the infrared image and the visible light image; S2) acquiring a transformation matrix according to the coordinate mapping relation; S3) performing a perspective transformation on the visible light image using the transformation matrix; S4) fusing the perspective-transformed visible light image with the infrared image to obtain a fused image.
In yet another aspect, embodiments of the present invention further provide a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, performs the method for fusing a visible light image with an infrared image provided by the above embodiments, the method comprising the same steps S1) to S4).
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise several sub-steps or stages that are not necessarily completed at the same moment but may be executed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or in alternation with other steps, or with at least some of the sub-steps or stages of other steps.
The foregoing is only a description of preferred embodiments of the invention and is not intended to limit it; any modifications, equivalent substitutions, and improvements made within the spirit and principles of the invention shall fall within the scope of protection of the invention.

Claims (10)

1. A method for fusing a visible light image with an infrared image, comprising:
S1) establishing a pixel point coordinate mapping relation between the infrared image and the visible light image;
S2) acquiring a transformation matrix according to the coordinate mapping relation;
S3) performing a perspective transformation on the visible light image using the transformation matrix;
S4) fusing the perspective-transformed visible light image with the infrared image to obtain a fused image.
2. The method for fusing a visible light image with an infrared image according to claim 1, wherein S1) specifically comprises: selecting a plurality of corresponding pixel points respectively in the target object regions of the visible light image and the infrared image to establish the coordinate mapping relation.
3. The method according to claim 2, wherein pixel points of at least three contour points of the target object in the infrared image are selected to establish the coordinate mapping relation with the corresponding pixel points in the visible light image.
4. The method for fusing a visible light image with an infrared image according to claim 1, wherein the visible light image in S3) is scaled in resolution before the perspective transformation, the scaled resolution being consistent with the resolution of the infrared image to be fused.
5. The method for fusing a visible light image with an infrared image according to claim 1, wherein the perspective transformation comprises: adjusting the infrared image and the visible light image to the same angle of view.
6. The method for fusing a visible light image with an infrared image according to claim 1, wherein S4) specifically comprises: superimposing the perspective-transformed visible light image on the infrared image according to transparency weights for fusion, to obtain the fused image.
7. The method for fusing a visible light image with an infrared image according to claim 6, wherein the fusing comprises the following steps: acquiring luminance information of the infrared image and the visible light image; determining, from the luminance information, a target rectangular region in the visible light image in which the target coincides with the target in the infrared image; cropping the visible light image according to the target rectangular region; scaling the cropped visible light image to the size of the infrared image; and fusing the scaled visible light image with the infrared image in a certain proportion.
8. The method according to claim 7, wherein four vertices of the target rectangular region are selected to establish the coordinate mapping relation.
9. The method for fusing a visible light image with an infrared image according to any one of claims 1 to 8, further comprising: performing gray-scale processing on the fused image to obtain a final fused image.
10. A system for fusing a visible light image with an infrared image, comprising:
the receiving module is used for receiving the infrared image and the visible light image;
the image fusion module is used for establishing a pixel point coordinate mapping relation between the infrared image and the visible light image, acquiring a transformation matrix of the infrared image and the visible light image according to the coordinate mapping relation, performing a perspective transformation on the visible light image using the transformation matrix, and fusing the perspective-transformed visible light image with the infrared image to obtain a fused image.
CN202310197764.3A 2023-03-03 2023-03-03 Method and system for fusing visible light image and infrared image Pending CN116342445A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310197764.3A CN116342445A (en) 2023-03-03 2023-03-03 Method and system for fusing visible light image and infrared image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310197764.3A CN116342445A (en) 2023-03-03 2023-03-03 Method and system for fusing visible light image and infrared image

Publications (1)

Publication Number Publication Date
CN116342445A true CN116342445A (en) 2023-06-27

Family

ID: 86886933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310197764.3A Pending CN116342445A (en) 2023-03-03 2023-03-03 Method and system for fusing visible light image and infrared image

Country Status (1)

Country Link
CN (1) CN116342445A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116895094A (en) * 2023-09-11 2023-10-17 杭州魔点科技有限公司 Dark environment imaging method, system, device and medium based on binocular fusion
CN116895094B (en) * 2023-09-11 2024-01-30 杭州魔点科技有限公司 Dark environment imaging method, system, device and medium based on binocular fusion


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination