CN110766706A - Image fusion method and device, terminal equipment and storage medium - Google Patents

Image fusion method and device, terminal equipment and storage medium Download PDF

Info

Publication number
CN110766706A
CN110766706A · Application CN201910919511.6A
Authority
CN
China
Prior art keywords
image
light image
infrared light
visible light
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910919511.6A
Other languages
Chinese (zh)
Inventor
刘明
李飞衡
吴汉俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jing Yang Information Technology Co Ltd Of Shenzhen
Original Assignee
Jing Yang Information Technology Co Ltd Of Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jing Yang Information Technology Co Ltd Of Shenzhen filed Critical Jing Yang Information Technology Co Ltd Of Shenzhen
Priority to CN201910919511.6A priority Critical patent/CN110766706A/en
Publication of CN110766706A publication Critical patent/CN110766706A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

The application belongs to the technical field of image processing and provides an image fusion method comprising the following steps: acquiring one frame of infrared light image and one frame of visible light image under the same timestamp; cropping out a local-size image of the visible light image that matches the infrared light image; extracting edge information from the local-size image and generating a grayscale image carrying the edge information; and performing a fusion operation on the infrared light image and the grayscale image to obtain a fused image. The edge information in the grayscale image is added to the infrared light image, so the fused image has a stronger edge effect, which alleviates the problem that existing thermal imaging cameras produce no image, or only a blurred one.

Description

Image fusion method and device, terminal equipment and storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image fusion method, an image fusion device, a terminal device, and a storage medium.
Background
Owing to cost and technology limitations, existing thermal imaging cameras suffer from low imaging resolution, and their imaging depends on the infrared radiation of objects. Many objects, however, are not hot, and the temperature differences between objects are small, so the thermal imaging camera produces no image or only a blurred one, and the imaging effect is very poor.
Summary
The embodiments of the present application provide an image fusion method, an image fusion apparatus, a terminal device, and a storage medium, which can solve the problem that existing thermal imaging cameras have a very poor imaging effect.
In a first aspect, an embodiment of the present application provides an image fusion method, including:
acquiring a frame of infrared light image and a frame of visible light image under the same timestamp;
cropping out a local-size image of the visible light image that matches the infrared light image;
extracting edge information from the local-size image and generating a grayscale image carrying the edge information;
and performing a fusion operation on the infrared light image and the grayscale image to obtain a fused image.
According to this method, the infrared light image and the visible light image are selected by timestamp so that they are images captured by the cameras at the same moment, which guarantees the registration accuracy of the fused image. A local-size image of the visible light image that matches the infrared light image is cropped out, its edge information is extracted, and the grayscale image carrying that edge information is fused with the infrared light image; the edge information of the grayscale image is thereby added to the infrared light image, giving the fused image a stronger edge effect and solving the problem that existing thermal imaging cameras produce no image or only a blurred one.
Illustratively, cropping out the local-size image of the visible light image that matches the infrared light image includes: enlarging the infrared light image to a preset size; matching the enlarged infrared light image to a local-size image of the visible light image by the affine-transformation vector-space principle; and cropping out the local-size image of the visible light image and scaling it to the preset size.
Matching the infrared light image to a local region of the visible light image via the affine-transformation vector-space principle solves the problem of a poor image field-of-view effect caused by the mismatch between the fields of view of the visible light image and the infrared light image.
Illustratively, performing the fusion operation on the infrared light image and the grayscale image to obtain the fused image includes: performing an OR operation on the infrared light image and the grayscale image to obtain the fused image.
Fusing the two images by an OR-operation reconstruction preserves the original image information of the infrared light image while adding the rich detail of the visible light image, so the fused image is finer and clearer, carries more image detail, and has a better imaging effect.
In a second aspect, an embodiment of the present application provides an image fusion apparatus, including:
the acquisition module is used for acquiring a frame of infrared light image and a frame of visible light image under the same timestamp;
the cropping module is used for cropping out a local-size image of the visible light image that matches the infrared light image;
the extraction module is used for extracting edge information from the local-size image and generating a grayscale image carrying the edge information;
and the fusion module is used for performing a fusion operation on the infrared light image and the grayscale image to obtain a fused image.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the image fusion method according to any one of the above first aspects when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the image fusion method according to any one of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to execute the image fusion method according to any one of the first aspect.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flowchart of an image fusion method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of an image fusion method according to another embodiment of the present application;
FIG. 3 is a schematic flow chart diagram of an image fusion method according to another embodiment of the present application;
FIG. 4 is a schematic flowchart of an image fusion method according to another embodiment of the present application;
FIG. 5 is a schematic flow chart diagram illustrating an image fusion method according to another embodiment of the present application;
FIG. 6 is a schematic structural diagram of an image fusion apparatus provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
As described above, current thermal imaging cameras have a poor imaging effect. A thermal imaging camera equipped with a visible light camera improves the result by fusing the visible light image with the infrared light image, but such cameras place high demands on the structure and the lenses, and the resolution of the fused image remains low; the imaging effect improves, but the equipment cost rises, and the result is still not optimal.
To this end, the image fusion method provided by the present application acquires an infrared light image and a visible light image captured at the same moment, extracts the edge information of the visible light image, and fuses that edge information into the infrared light image to enhance the imaging effect of the infrared light image.
The image fusion method is applied to terminal equipment. For example, the terminal device may be a mobile phone provided with an infrared camera and a visible light camera, a tablet computer, a wearable device, an in-vehicle device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), a thermal imaging camera, and the like, and the specific type of the terminal device is not limited in this embodiment.
By way of example and not limitation, when the terminal device is a wearable device, the wearable device may be any item of daily wear to which wearable technology has been applied through intelligent design, such as glasses, gloves, watches, clothing, or shoes. A wearable device is a portable device worn directly on the body or integrated into the user's clothing or accessories; it is not merely a hardware device but realizes powerful functions through software support, data interaction, and cloud interaction. Full-featured wearable smart devices, such as smart watches or smart glasses, are large enough to realize complete or partial functions without depending on a smartphone.
By way of example and not limitation, when the terminal device is a thermal imaging camera, it may be a full-featured camera that operates without depending on an intelligent terminal such as a computer, for example a surveillance camera for video monitoring; distinguished by shape, such a camera may be a bullet camera, a dome camera, an indoor panoramic camera (such as a fisheye or ceiling camera), a small pan-tilt camera, a miniature camera, and the like. The thermal imaging camera may also be a device dedicated to a particular application that must be used together with another intelligent device such as a computer, for example a webcam for everyday activities such as video chatting and recording. The infrared camera and the visible light camera provided in the terminal device have substantially the same imaging field of view, although their orientations are not strictly identical; what must be ensured is that the scene in the infrared light image collected by the infrared camera is the same as the scene in the visible light image collected by the visible light camera. For example, the infrared camera may be disposed adjacent to the visible light camera, although the arrangement of the two cameras is not limited to an adjacent one.
Further, the image fusion method applied to the terminal device processes images using algorithms or principles such as the affine-transformation vector-space principle, bilinear interpolation, and the Canny edge detection algorithm. It should be understood that the image fusion method of the embodiments of the present application may use one or more of these algorithms or principles, and may additionally use other algorithms or principles not described here.
For ease of understanding and explanation, the affine transformation vector space principle, the bilinear interpolation method, and the Canny edge detection algorithm applied to the embodiments of the present application are further explained below.
The affine-transformation vector-space principle, called the affine transformation principle or affine mapping principle for short, describes the geometric process in which a vector space undergoes a linear transformation followed by a translation and thereby becomes another vector space. The linear transformation includes, but is not limited to, rotation, scaling, translation, and shear operations. For example, the coordinate vector of image A before transformation is

A = [x_A, y_A, 1]^T,

the affine transformation matrix is

M = | a  b  c |
    | d  e  f |
    | 0  0  1 |,

and the coordinate vector of the transformed image B is

B = [x_B, y_B, 1]^T,

so that B = M·A. When image A is converted into image B by rotating clockwise through θ degrees about the point (x, y), the corresponding entries of the affine transformation matrix are

a = cos θ,   b = sin θ,   c = x − x·cos θ − y·sin θ,
d = −sin θ,  e = cos θ,   f = y + x·sin θ − y·cos θ,

where a, b, d and e are the rotation variables and c and f are the translation variables. It should be understood that embodiments of the present application may apply only the linear transformation to an image, without the subsequent translation.
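By way of illustration only (this code is not part of the patent text), the following Python/NumPy sketch builds the 2×3 affine matrix for a clockwise rotation of θ degrees about a pivot (cx, cy) in the homogeneous form above and checks that the pivot maps to itself; all function and variable names are illustrative assumptions.

```python
import numpy as np

def rotation_affine(theta_deg, cx, cy):
    """2x3 affine matrix for a clockwise rotation of theta_deg degrees
    about (cx, cy), so that [xB, yB] = M @ [xA, yA, 1]."""
    t = np.deg2rad(theta_deg)
    a, b = np.cos(t), np.sin(t)       # rotation variables (a, b, d, e)
    d, e = -np.sin(t), np.cos(t)
    c = cx - a * cx - b * cy          # translation variables (c, f) that
    f = cy - d * cx - e * cy          # keep the pivot (cx, cy) fixed
    return np.array([[a, b, c],
                     [d, e, f]])

M = rotation_affine(30.0, cx=100.0, cy=50.0)
print(M @ np.array([100.0, 50.0, 1.0]))   # -> [100. 50.]: the pivot is fixed
```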
Bilinear interpolation, also called twice linear interpolation, is the process of linearly interpolating a function of two variables (x, y) in each of two directions (the X-axis and Y-axis directions of the coordinate system); the result of the linear interpolation is independent of the interpolation order. For example, to obtain the value of an unknown function f(x, y) at the point P = (x, y), linear interpolation is performed using the known values of f at four points K = (x1, y1), L = (x1, y2), M = (x2, y1) and N = (x2, y2). Interpolating along the X-axis direction first gives the values at the points R = (x, y1) and S = (x, y2):

f(R) ≈ ((x2 − x)/(x2 − x1))·f(x1, y1) + ((x − x1)/(x2 − x1))·f(x2, y1),
f(S) ≈ ((x2 − x)/(x2 − x1))·f(x1, y2) + ((x − x1)/(x2 − x1))·f(x2, y2).

Linear interpolation along the Y-axis direction between R and S then yields the value at the point P = (x, y):

f(P) ≈ ((y2 − y)/(y2 − y1))·f(R) + ((y − y1)/(y2 − y1))·f(S).
It should be understood that, in addition to the above linear interpolation in the two-dimensional space, the embodiment of the present application may also apply linear interpolation in the three-dimensional space.
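A minimal Python sketch of this R/S construction, assuming unit grid spacing so the denominators (x2 − x1) and (y2 − y1) equal 1; names are illustrative.

```python
import numpy as np

def bilinear(f, x, y):
    """Interpolate the 2-D grid f (indexed f[row, col] = f[y, x]) at a
    real-valued point (x, y) strictly inside the grid."""
    x1, y1 = int(np.floor(x)), int(np.floor(y))
    x2, y2 = x1 + 1, y1 + 1
    # linear interpolation along the X axis at rows y1 and y2
    fR = (x2 - x) * f[y1, x1] + (x - x1) * f[y1, x2]   # R = (x, y1)
    fS = (x2 - x) * f[y2, x1] + (x - x1) * f[y2, x2]   # S = (x, y2)
    # linear interpolation along the Y axis between R and S
    return (y2 - y) * fR + (y - y1) * fS

grid = np.array([[0.0, 10.0],
                 [20.0, 30.0]])
print(bilinear(grid, 0.5, 0.5))   # -> 15.0, the centre of the cell
```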
The Canny edge detection algorithm refers to a process of detecting the edges between different objects, or between different media, in an image. Specifically, the original image is smoothed by Gaussian filtering, the intensity gradients of the smoothed image are computed, non-maximum suppression is applied to eliminate spurious edge responses, a double-threshold method determines the candidate edges, and finally a hysteresis technique tracks and finalizes the edges.
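With OpenCV this chain collapses to a couple of calls; a hedged sketch, where the file name and thresholds are placeholders and cv2.Canny internally performs the gradient, non-maximum-suppression, and hysteresis stages:

```python
import cv2

gray = cv2.imread("visible_crop.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
smoothed = cv2.GaussianBlur(gray, (5, 5), 1.4)    # Gaussian smoothing
edges = cv2.Canny(smoothed, 50, 150)              # double thresholds 50/150
cv2.imwrite("edge_gray.png", edges)               # grayscale image of edge info
```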
Fig. 1 shows a schematic flow diagram of an image fusion method provided herein, which may be applied, by way of example and not limitation, in the thermal imaging camera described above.
S101, acquiring a frame of infrared light image and a frame of visible light image under the same timestamp.
The timestamp is the difference between the time point at which a certain action is performed and a fixed reference time point; here the action is the acquisition of the infrared light image or of the visible light image. The timestamp can be accurate to the microsecond, so that the thermal imaging camera can acquire the infrared light image and the visible light image at the same moment and the environmental states at the two acquisitions remain consistent. For example, to acquire an infrared light image and a visible light image of a person in motion while ensuring that the person's posture in the infrared light image collected by the infrared camera matches the posture in the visible light image collected by the visible light camera, the acquisition times of the two cameras must be kept as close to identical as possible. It should be understood that in other embodiments the minimum unit of the timestamp may be another unit of time, such as seconds or milliseconds.
By acquiring the infrared light image and the visible light image under the same timestamp, the environment state of the two images is ensured to be consistent when the two images are acquired, the problem that the current camera has high requirements on the structure and the lens is solved, and the equipment cost is reduced.
S102, cropping out a local-size image of the visible light image that matches the infrared light image.
The local-size image may be an image, obtained by removing local pixel points from the visible light image, that contains the complete edge information and part of the visible light information. Since the infrared light image suffers from blurring and the boundaries between objects in it are hard to distinguish, this embodiment adds the boundaries from the visible light image to the infrared light image. The visible light image, however, has many pixel points; to avoid unnecessary computation, only the local-size image of the visible light image that matches the infrared light image and contains the object boundaries is cropped out.
S103, extracting edge information from the local-size image and generating a grayscale image carrying the edge information.
The edge information is the boundary information between different objects or different media in the local-size image, for example the boundary between a person and the environmental background. The cropped local-size image still contains a large amount of visible light content, and what is to be added to the infrared light image is the edge information between objects; therefore the edge information of the local-size image is extracted to generate a grayscale image carrying that edge information, which is then fused with the infrared light image.
Optionally, the embodiment of the present application may detect the edge information of the local-size image through the Canny edge detection algorithm.
S104, performing a fusion operation on the infrared light image and the grayscale image to obtain a fused image.
The fusion operation computes, for each pixel point of the infrared image, a combination with the pixel point of the grayscale image at the corresponding position, so that the edge information of the grayscale image is added to the infrared image.
In the embodiments of the present application, the infrared light image and the visible light image are selected by timestamp so that they are images captured by the cameras at the same moment, which guarantees the registration accuracy of the fused image. The local-size image of the visible light image that matches the infrared light image is cropped out, its edge information is extracted, and the grayscale image carrying that edge information is fused with the infrared light image; the edge information of the grayscale image is thereby added to the infrared light image, giving the fused image a stronger edge effect and alleviating the problem that existing thermal imaging cameras produce no image or only a blurred one.
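Putting S101 to S104 together, one fusion pass might look as follows in Python; this is a minimal sketch in which the affine matching of S102 is stood in for by a fixed central crop, and the sizes and names are assumptions rather than values fixed by the application.

```python
import cv2

def fuse_frame(ir_gray, vis_gray):
    """One pass of S101-S104 over already-acquired single-channel frames."""
    w, h = 704, 576                                  # assumed preset size
    ir_big = cv2.resize(ir_gray, (w, h))             # S102: enlarge the IR frame
    vh, vw = vis_gray.shape[:2]
    y0, x0 = (vh - h) // 2, (vw - w) // 2            # stand-in for affine matching
    crop = vis_gray[y0:y0 + h, x0:x0 + w]            # S102: local-size image
    edges = cv2.Canny(cv2.GaussianBlur(crop, (5, 5), 1.4), 50, 150)  # S103
    return cv2.bitwise_or(ir_big, edges)             # S104: OR fusion
```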
On the basis of the embodiment shown in fig. 1, fig. 2 shows a schematic flow chart of another image fusion method provided in the embodiment of the present application. As shown in fig. 2, the step S101 specifically includes steps S201 to S203. It should be noted that the steps that are the same as those in the embodiment of fig. 1 are not repeated herein, please refer to the foregoing description.
S201, acquiring a frame of infrared light image and a preset continuous frame number of visible light images in real time, recording a first time stamp for acquiring the infrared light image, and recording a second time stamp for acquiring each frame of the visible light image;
optionally, the timestamp is accurate to microseconds and free of system time interference. For example, the starting time point when the camera starts to shoot is taken as a starting point, and the first timestamp is a difference value between the time point when the infrared light image is acquired and the starting time point, wherein the difference value is accurate to microsecond; similarly, the second timestamp is a difference between a time point when each frame of visible light image is acquired and the start time point. It should be understood that the above description is only for illustrative purposes, and is not intended to limit the specific means implemented in the present application, taking the starting time point of the camera to start the image capturing as a starting point.
Because the time points at which the infrared camera and the visible light camera receive the capture signal may differ considerably, a preset number of consecutive frames of visible light images, for example 10 frames, is acquired for each frame of infrared light image; this makes it easy to find, among those frames, a visible light image whose timestamp matches that of the infrared light image. It can be understood that, while the infrared camera acquires one infrared light image, the visible light camera acquires several visible light images in succession; that is, the visible light camera captures images faster than the infrared camera.
S202, matching the first time stamp with each second time stamp;
the matching process is a process of calculating a difference between the first timestamp and each of the second timestamps, and determining whether each difference is within a preset difference range.
S203, when the difference value between the first time stamp and the second time stamp is within a preset difference value range, judging that the first time stamp is the same as the second time stamp, and acquiring a visible light image corresponding to the second time stamp which is the same as the first time stamp.
The preset difference range may be 0 to 10 microseconds. When the difference between the first timestamp and a second timestamp falls within this range, the environmental states of the visible light image and the infrared light image are essentially identical, so the visible light image corresponding to that second timestamp is taken for matching and fusion with the infrared light image. Further, when several second timestamps differ from the first timestamp by amounts within the preset range, the second timestamp with the smallest difference is taken as the one judged to be the same as the first timestamp. Still further, when no second timestamp differs from the first timestamp by an amount within the preset range, the second timestamp with the smallest difference may likewise be used.
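A hedged sketch of S201 to S203 in Python; the timestamps are in microseconds, the 10 µs range comes from the text above, and the data layout is an assumption.

```python
def pick_matching_frame(first_ts_us, vis_frames, max_diff_us=10):
    """vis_frames: list of (second_ts_us, image) pairs for the preset run
    of consecutive visible frames. Returns the frame whose timestamp is
    closest to the infrared timestamp, plus whether it fell in range."""
    second_ts, image = min(vis_frames, key=lambda f: abs(f[0] - first_ts_us))
    in_range = abs(second_ts - first_ts_us) <= max_diff_us  # "same timestamp"
    return image, in_range

frames = [(1000 + 33 * i, f"frame{i}") for i in range(10)]  # hypothetical stream
print(pick_matching_frame(1098, frames))   # -> ('frame3', True), ts 1099
```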
On the basis of the embodiment shown in fig. 1, the present application provides another embodiment of an image fusion method. Step S1011 is also included after step S101. It should be noted that the steps that are the same as those in the embodiment of fig. 1 are not repeated herein, please refer to the foregoing description.
And S1011, converting the formats of the infrared light image and the visible light image into a preset format.
In this embodiment, the image format is converted into a preset format, so as to facilitate data operation of the image processor in the image processing process. Optionally, the preset format may be a YUV format, and the format conversion process may be implemented by an ISP image processing process.
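In software, as opposed to an ISP pipeline, the conversion could be a single OpenCV call; a sketch under the assumption that the visible frame arrives as BGR and the file name is a placeholder:

```python
import cv2

bgr = cv2.imread("visible.png")                 # placeholder visible frame
yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)      # convert to the preset YUV format
y_plane = yuv[:, :, 0]                          # luma plane, used downstream
```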
On the basis of the embodiment shown in fig. 1, fig. 3 shows a schematic flow chart of another image fusion method provided in the embodiment of the present application. As shown in fig. 3, the step S102 specifically includes steps S301 to S303. It should be noted that the steps that are the same as those in the embodiment of fig. 1 are not repeated herein, please refer to the foregoing description.
S301, amplifying the infrared light image to a preset size;
the predetermined size may be D1, i.e., 704 × 576 mm. The resolution ratio of the infrared image acquired by the infrared camera is low, so that the infrared image needs to be amplified to a preset size, the infrared camera with high resolution ratio does not need to be adopted, and the equipment cost is reduced. The problem that the current camera has high requirements on the structure and the lens is solved by a software code mode.
S302, matching the amplified infrared light image to a local size image of the visible light image through an affine transformation vector space principle;
In this embodiment, since the infrared camera and the visible light camera are not arranged in an overlapping manner but may be arranged adjacently, the two cameras inevitably capture different fields of view because of the difference in angle. This embodiment matches the infrared light image to the local-size image of the visible light image via the affine transformation principle, so that the field of view of the infrared light image is consistent with that of the visible light image without imposing strict requirements on the camera structure, which reduces equipment cost.
Alternatively, in order to reduce unnecessary operations, the present embodiment employs a linear transformation in the affine transformation principle without the need for a translation operation.
It should be understood that the local size image may be an image containing edge information and partial visible light information obtained by removing local pixel points in the visible light image, and the image size of the image is the same as the size of the visible light image.
S303, cropping out the local-size image of the visible light image and scaling it to the preset size.
In this embodiment, the visible light camera costs less than the infrared camera and can collect a high-resolution visible light image, so for the convenience of subsequent image fusion the local-size image is scaled to the same size as the infrared light image.
Optionally, the local size image is scaled to a preset size by bilinear interpolation to improve the scaling accuracy of the local size image.
Optionally, the visible light camera may adopt a zoom lens to obtain a fused image with higher precision and higher resolution.
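A hedged sketch of S301 to S303 in Python; the affine matrix M would come from a one-time calibration between the two cameras, so the matrix values and file names below are purely illustrative.

```python
import cv2
import numpy as np

ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)        # placeholder inputs
vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)

size = (704, 576)                                      # preset D1 size, pixels
ir_big = cv2.resize(ir, size)                          # S301: enlarge IR image

M = np.array([[1.05, 0.0, 40.0],                       # hypothetical calibrated
              [0.0, 1.05, 25.0]])                      # affine transform

def map_pt(M, x, y):
    """Apply the 2x3 affine matrix to one point in homogeneous form."""
    vx, vy = M @ np.array([x, y, 1.0])
    return int(vx), int(vy)

x0, y0 = map_pt(M, 0, 0)                               # S302: corners of the
x1, y1 = map_pt(M, *size)                              # matched visible window
crop = vis[y0:y1, x0:x1]                               # S303: cut local region
crop = cv2.resize(crop, size, interpolation=cv2.INTER_LINEAR)  # bilinear zoom
```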
On the basis of the embodiment shown in fig. 1, fig. 4 shows a schematic flow chart of another image fusion method provided in the embodiment of the present application. As shown in fig. 4, the step S104 specifically includes steps S401 to S402. It should be noted that the steps that are the same as those in the embodiment of fig. 1 are not repeated herein, please refer to the foregoing description.
S401, acquiring the value of each first pixel point in the infrared light image and the value of each second pixel point in the grayscale image;
S402, performing an OR operation on the value of each first pixel point and the value of the second pixel point at the corresponding position to obtain the fused image.
For S401 and S402, the OR operation is a computer logic operation: when both operands in an OR relationship are false, the result is false; otherwise it is true. For example, taking the value 0 as false and other values as true, 0|0 = 0 and 0|1 = 1, where "|" denotes the OR operator. In this embodiment, the value of every pixel point in the grayscale image that carries no edge information may be set to 0. When the value of a first pixel point is 129 and the value of the second pixel point at the corresponding position is 0, the value of the pixel point at the corresponding position in the fused image is 129|0 = 129; when the value of the first pixel point is 129 and the value of the second pixel point is 132, the value in the fused image is 129|132 = 133 (a bitwise OR of the two 8-bit values). Performing the OR operation between each first pixel value and the second pixel value at the corresponding position preserves the original image information of the infrared light image while adding the rich detail of the visible light image, strengthening the edge information so that the infrared light image is clearer and the imaging effect better.
It should be understood that the above-described image or operation process is only used for illustration, and other forms of or operation processes are also possible in other embodiments.
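A sketch of S401 and S402 with NumPy, reproducing the worked numbers above; the per-pixel operation here is a bitwise OR of the 8-bit values.

```python
import numpy as np

ir = np.array([[129, 129]], dtype=np.uint8)    # first pixel values (IR image)
edge = np.array([[0, 132]], dtype=np.uint8)    # second pixel values (edges)
fused = np.bitwise_or(ir, edge)                # 129|0 = 129, 129|132 = 133
print(fused)                                   # -> [[129 133]]
```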
On the basis of the embodiment shown in fig. 1, the present application provides another embodiment of an image fusion method. Step S1041 is also included before step S104. It should be noted that the steps that are the same as those in the embodiment of fig. 1 are not repeated herein, please refer to the foregoing description.
S1041, performing time-domain filtering processing on the grayscale image to obtain a filtered grayscale image.
In this embodiment, the time-domain filtering process includes, but is not limited to, Wiener filtering or Kalman filtering. Time-domain filtering removes the noise in the grayscale image, eliminates discrete useless information, and strengthens the edge information, so that the edges of the grayscale image are sharper and the edge effect better.
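As a deliberately simpler stand-in for the Wiener or Kalman filtering named above (an assumption about one way to realize the step, not the patent's own filter), a running exponential average over successive edge frames suppresses flickering, isolated edge noise:

```python
import numpy as np

class TemporalFilter:
    """Exponential running average over successive grayscale edge frames."""
    def __init__(self, alpha=0.7):
        self.alpha = alpha      # weight of the newest frame
        self.state = None
    def __call__(self, frame):
        f = frame.astype(np.float32)
        if self.state is None:
            self.state = f
        else:
            self.state = self.alpha * f + (1.0 - self.alpha) * self.state
        return self.state.astype(frame.dtype)

filt = TemporalFilter()
noisy = (np.random.rand(576, 704) > 0.99).astype(np.uint8) * 255  # fake edges
smoothed = filt(noisy)   # feed each frame in turn; isolated noise fades out
```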
On the basis of the embodiment shown in fig. 1, fig. 5 shows a schematic flow chart of another image fusion method provided in the embodiment of the present application. As shown in fig. 5, the above step S104 is followed by steps S501 to S502. It should be noted that the steps that are the same as those in the embodiment of fig. 1 are not repeated herein, please refer to the foregoing description.
S501, acquiring the fused image corresponding to each frame of infrared light image;
and S502, splicing the fused images in the same order as the infrared light images to obtain a real-time fused stream video.
For S501 and S502, the order may follow the frame numbers of the infrared light images or the times at which the infrared light images were acquired. Splicing the fused images in the same order as the infrared light images yields a real-time fused stream video, allowing the user to watch a clear infrared video picture and improving the user experience.
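A sketch of S501 and S502 using OpenCV's VideoWriter; the codec, frame rate, and dummy frames are assumptions.

```python
import cv2
import numpy as np

frames = [np.zeros((576, 704), np.uint8) for _ in range(25)]  # stand-in fused frames
fourcc = cv2.VideoWriter_fourcc(*"XVID")
writer = cv2.VideoWriter("fused.avi", fourcc, 25.0, (704, 576), isColor=False)
for fused in frames:            # same order as the infrared frames
    writer.write(fused)
writer.release()                # fused.avi is the real-time fused stream
```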
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 6 shows a block diagram of an image fusion apparatus 600 provided in the embodiment of the present application, corresponding to the image fusion method described in the above embodiment, and only shows the relevant parts in the embodiment of the present application for convenience of description.
Referring to fig. 6, the apparatus includes:
an obtaining module 601, configured to obtain a frame of infrared light image and a frame of visible light image under the same timestamp;
a cropping module 602, configured to crop out a local-size image of the visible light image that matches the infrared light image;
an extracting module 603, configured to extract edge information from the local-size image and generate a grayscale image carrying the edge information;
and a fusion module 604, configured to perform a fusion operation on the infrared light image and the grayscale image to obtain a fused image.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 7, the terminal device 7 of this embodiment includes: at least one processor 70 (only one shown in fig. 7), a memory 71, and a computer program 72 stored in the memory 71 and executable on the at least one processor 70, wherein the processor 70 implements the steps of any of the various image fusion method embodiments described above when executing the computer program 72.
The terminal device 7 may be a mobile phone, a desktop computer, a notebook, a palm computer, a thermal imaging camera, or other computing devices. The terminal device may include, but is not limited to, a processor 70, a memory 71. Those skilled in the art will appreciate that fig. 7 is only an example of the terminal device 7, and does not constitute a limitation to the terminal device 7, and may include more or less components than those shown, or combine some components, or different components, for example, and may further include input/output devices, network access devices, and the like.
The Processor 70 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, and the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 71 may in some embodiments be an internal storage unit of the terminal device 7, such as a hard disk or a memory of the terminal device 7. In other embodiments, the memory 71 may also be an external storage device of the terminal device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 7. Further, the memory 71 may also include both an internal storage unit and an external storage device of the terminal device 7. The memory 71 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of the computer program. The memory 71 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer Memory, Read-Only Memory (ROM), random-access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An image fusion method, comprising:
acquiring a frame of infrared light image and a frame of visible light image under the same timestamp;
cropping out a local-size image of the visible light image that matches the infrared light image;
extracting edge information from the local-size image and generating a grayscale image carrying the edge information;
and performing a fusion operation on the infrared light image and the grayscale image to obtain a fused image.
2. The image fusion method of claim 1, wherein said acquiring a frame of infrared light image and a frame of visible light image at the same timestamp comprises:
acquiring a frame of infrared light image and a preset continuous frame number of visible light images in real time, recording a first time stamp for acquiring the infrared light image and a second time stamp for acquiring each frame of the visible light image;
matching the first timestamp with each of the second timestamps;
and when the difference value of the first timestamp and the second timestamp is within a preset difference value range, judging that the first timestamp is the same as the second timestamp, and acquiring a visible light image corresponding to the second timestamp which is the same as the first timestamp.
3. The image fusion method of claim 1, wherein after acquiring a frame of infrared light image and a frame of visible light image under the same timestamp, further comprising:
and converting the formats of the infrared light image and the visible light image into a preset format.
4. The image fusion method of claim 1, wherein said cropping out of a local-size image of the visible light image that matches the infrared light image comprises:
amplifying the infrared light image to a preset size;
matching the amplified infrared light image to a local size image of the visible light image by an affine transformation vector space principle;
and cropping out the local-size image of the visible light image, and scaling the local-size image to the preset size.
5. The image fusion method of claim 1, wherein the performing the fusion operation on the infrared light image and the grayscale image to obtain a fused image comprises:
acquiring the value of each first pixel point in the infrared light image and acquiring the value of each second pixel point in the grayscale image;
and carrying out OR operation on the value of each first pixel point and the value of the second pixel point at the corresponding position to obtain the fused image.
6. The image fusion method according to claim 1, wherein before the fusion operation is performed on the infrared light image and the grayscale image to obtain a fused image, the method further comprises:
and performing time-domain filtering processing on the grayscale image to obtain a filtered grayscale image.
7. The image fusion method according to claim 1, wherein after the fusion operation is performed on the infrared light image and the grayscale image to obtain a fused image, the method further comprises:
acquiring a fused image corresponding to each frame of the infrared light image;
and splicing the fused images according to the same sequence as the infrared light images to obtain a real-time fused stream video.
8. An image fusion apparatus, comprising:
the acquisition module is used for acquiring a frame of infrared light image and a frame of visible light image under the same timestamp;
the cropping module is used for cropping out a local-size image of the visible light image that matches the infrared light image;
the extraction module is used for extracting edge information from the local-size image and generating a grayscale image carrying the edge information;
and the fusion module is used for performing a fusion operation on the infrared light image and the grayscale image to obtain a fused image.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN201910919511.6A 2019-09-26 2019-09-26 Image fusion method and device, terminal equipment and storage medium Pending CN110766706A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910919511.6A CN110766706A (en) 2019-09-26 2019-09-26 Image fusion method and device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910919511.6A CN110766706A (en) 2019-09-26 2019-09-26 Image fusion method and device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110766706A true CN110766706A (en) 2020-02-07

Family

ID=69330641

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910919511.6A Pending CN110766706A (en) 2019-09-26 2019-09-26 Image fusion method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110766706A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340515A (en) * 2020-03-02 2020-06-26 北京京东振世信息技术有限公司 Characteristic information generation and article tracing method and device
CN111751002A (en) * 2020-05-15 2020-10-09 国网浙江省电力有限公司嘉兴供电公司 Intelligent charged equipment fault diagnosis method based on infrared thermal imaging
CN111798560A (en) * 2020-06-09 2020-10-20 同济大学 Three-dimensional real-scene model visualization method for infrared thermal image temperature measurement data of power equipment
CN112053314A (en) * 2020-09-04 2020-12-08 深圳市迈测科技股份有限公司 Image fusion method and device, computer equipment, medium and thermal infrared imager
CN112529987A (en) * 2020-09-14 2021-03-19 武汉高德智感科技有限公司 Method and system for fusing infrared image and visible light image of mobile phone terminal
CN114549570A (en) * 2022-03-10 2022-05-27 中国科学院空天信息创新研究院 Method and device for fusing optical image and SAR image
US20220198685A1 (en) * 2020-12-22 2022-06-23 Hon Hai Precision Industry Co., Ltd. Image fusion method and electronic device
WO2023130922A1 (en) * 2022-01-10 2023-07-13 荣耀终端有限公司 Image processing method and electronic device
CN117541629A (en) * 2023-06-25 2024-02-09 哈尔滨工业大学 Infrared image and visible light image registration fusion method based on wearable helmet
CN114666458B (en) * 2020-12-22 2024-07-02 富泰华工业(深圳)有限公司 Image fusion method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017020595A1 (en) * 2015-08-05 2017-02-09 武汉高德红外股份有限公司 Visible light image and infrared image fusion processing system and fusion method
CN106548467A (en) * 2016-10-31 2017-03-29 广州飒特红外股份有限公司 The method and device of infrared image and visual image fusion
CN107977924A (en) * 2016-10-21 2018-05-01 杭州海康威视数字技术股份有限公司 A kind of image processing method based on dual sensor imaging, system
CN108429887A (en) * 2017-02-13 2018-08-21 中兴通讯股份有限公司 A kind of image processing method and device
CN108765358A (en) * 2018-05-22 2018-11-06 烟台艾睿光电科技有限公司 The double light fusion methods and plug-in type thermal imager system of visible light and infrared light

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017020595A1 (en) * 2015-08-05 2017-02-09 武汉高德红外股份有限公司 Visible light image and infrared image fusion processing system and fusion method
CN107977924A (en) * 2016-10-21 2018-05-01 杭州海康威视数字技术股份有限公司 A kind of image processing method based on dual sensor imaging, system
CN106548467A (en) * 2016-10-31 2017-03-29 广州飒特红外股份有限公司 The method and device of infrared image and visual image fusion
CN108429887A (en) * 2017-02-13 2018-08-21 中兴通讯股份有限公司 A kind of image processing method and device
CN108765358A (en) * 2018-05-22 2018-11-06 烟台艾睿光电科技有限公司 The double light fusion methods and plug-in type thermal imager system of visible light and infrared light

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
巩稼民等: "基于邻域特征与SCM相结合的红外与可见光图像融合", 《红外技术》 *
巩稼民等: "基于邻域特征与SCM相结合的红外与可见光图像融合", 《红外技术》, no. 11, 19 November 2018 (2018-11-19) *
田岩 等: "《数字图像处理与分析》", 30 June 2009, pages: 166 - 171 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340515A (en) * 2020-03-02 2020-06-26 北京京东振世信息技术有限公司 Characteristic information generation and article tracing method and device
CN111340515B (en) * 2020-03-02 2023-09-26 北京京东振世信息技术有限公司 Feature information generation and article tracing method and device
CN111751002A (en) * 2020-05-15 2020-10-09 国网浙江省电力有限公司嘉兴供电公司 Intelligent charged equipment fault diagnosis method based on infrared thermal imaging
CN111798560B (en) * 2020-06-09 2023-09-01 同济大学 Visualization method for three-dimensional live-action model of infrared thermal image temperature measurement data of power equipment
CN111798560A (en) * 2020-06-09 2020-10-20 同济大学 Three-dimensional real-scene model visualization method for infrared thermal image temperature measurement data of power equipment
CN112053314A (en) * 2020-09-04 2020-12-08 深圳市迈测科技股份有限公司 Image fusion method and device, computer equipment, medium and thermal infrared imager
CN112053314B (en) * 2020-09-04 2024-02-23 深圳市迈测科技股份有限公司 Image fusion method, device, computer equipment, medium and thermal infrared imager
CN112529987A (en) * 2020-09-14 2021-03-19 武汉高德智感科技有限公司 Method and system for fusing infrared image and visible light image of mobile phone terminal
CN112529987B (en) * 2020-09-14 2023-05-26 武汉高德智感科技有限公司 Method and system for fusing infrared image and visible light image of mobile phone terminal
CN114666458A (en) * 2020-12-22 2022-06-24 富泰华工业(深圳)有限公司 Image fusion method and device, electronic equipment and storage medium
US20220198685A1 (en) * 2020-12-22 2022-06-23 Hon Hai Precision Industry Co., Ltd. Image fusion method and electronic device
CN114666458B (en) * 2020-12-22 2024-07-02 富泰华工业(深圳)有限公司 Image fusion method and device, electronic equipment and storage medium
WO2023130922A1 (en) * 2022-01-10 2023-07-13 荣耀终端有限公司 Image processing method and electronic device
CN114549570B (en) * 2022-03-10 2022-10-18 中国科学院空天信息创新研究院 Method and device for fusing optical image and SAR image
CN114549570A (en) * 2022-03-10 2022-05-27 中国科学院空天信息创新研究院 Method and device for fusing optical image and SAR image
CN117541629A (en) * 2023-06-25 2024-02-09 哈尔滨工业大学 Infrared image and visible light image registration fusion method based on wearable helmet
CN117541629B (en) * 2023-06-25 2024-06-11 哈尔滨工业大学 Infrared image and visible light image registration fusion method based on wearable helmet

Similar Documents

Publication Publication Date Title
CN110766706A (en) Image fusion method and device, terminal equipment and storage medium
CN108898567B (en) Image noise reduction method, device and system
CN109474780B (en) Method and device for image processing
JP2020535758A (en) Image processing methods, devices, and devices
CN108230245B (en) Image splicing method, image splicing device and electronic equipment
WO2016164166A1 (en) Automated generation of panning shots
CN113129241B (en) Image processing method and device, computer readable medium and electronic equipment
CN116324878A (en) Segmentation for image effects
CN110335216B (en) Image processing method, image processing apparatus, terminal device, and readable storage medium
JP5766077B2 (en) Image processing apparatus and image processing method for noise reduction
EP3798975A1 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
EP4181506A1 (en) Image fusion method and device
CN109040596B (en) Method for adjusting camera, mobile terminal and storage medium
CN105227838A (en) A kind of image processing method and mobile terminal
CN111131688B (en) Image processing method and device and mobile terminal
CN113630549B (en) Zoom control method, apparatus, electronic device, and computer-readable storage medium
Bailey et al. Fast depth from defocus from focal stacks
CN108322658B (en) Photographing method and device
CN108776800B (en) Image processing method, mobile terminal and computer readable storage medium
CN114445315A (en) Image quality enhancement method and electronic device
CN110717452B (en) Image recognition method, device, terminal and computer readable storage medium
CN112435223A (en) Target detection method, device and storage medium
CN108769521B (en) Photographing method, mobile terminal and computer readable storage medium
CN111726526B (en) Image processing method and device, electronic equipment and storage medium
CN113283319A (en) Method and device for evaluating face ambiguity, medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200207