CN110210541B - Image fusion method and device, and storage device - Google Patents


Info

Publication number
CN110210541B
CN110210541B CN201910436319.1A CN201910436319A
Authority
CN
China
Prior art keywords
image
fusion
weight
visible light
light image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910436319.1A
Other languages
Chinese (zh)
Other versions
CN110210541A (en)
Inventor
李乾坤
卢维
殷俊
张兴明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN201910436319.1A priority Critical patent/CN110210541B/en
Publication of CN110210541A publication Critical patent/CN110210541A/en
Application granted granted Critical
Publication of CN110210541B publication Critical patent/CN110210541B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G06F 18/251 - Fusion techniques of input or preprocessed data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/752 - Contour matching

Abstract

The application discloses an image fusion method and device and a storage device. The image fusion method comprises the following steps: acquiring a visible light image and an invisible light image which are obtained by shooting the same target scene; performing first fusion on the visible light image and the invisible light image to obtain an initial fusion image; respectively extracting first edge information of the initial fusion image and second edge information of the visible light image; comparing the first edge information with the second edge information, and respectively determining fusion weights of the initial fusion image and the visible light image based on the comparison result; and performing second fusion on the initial fusion image and the visible light image based on the fusion weight to obtain a final fusion image. According to the scheme, the final fusion image can keep the target scene information as much as possible.

Description

Image fusion method and device, and storage device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image fusion method and apparatus, and a storage device.
Background
Image Fusion (Image Fusion) is a process of synthesizing a plurality of images into a new Image by using a specific algorithm. At present, the image fusion technology has great application value in the fields of remote sensing detection, safety navigation, medical image analysis, anti-terrorism inspection, environmental protection, traffic monitoring, disaster detection and prediction and the like.
The image fusion technology mainly utilizes the correlation of a plurality of images on time and space and the complementarity of information, and aims to enable the obtained final fusion image to describe a target scene more comprehensively and clearly, thereby being beneficial to human eye identification or automatic detection of a machine. In view of this, how to make the final fusion image retain as much information as possible, so as to describe the target scene as comprehensively and clearly as possible becomes a problem to be solved urgently.
Disclosure of Invention
The main technical problem addressed by the present application is to provide an image fusion method, an image fusion device, and a storage device, so that the final fusion image can retain as much target scene information as possible.
In order to solve the above problem, a first aspect of the present application provides an image fusion method, including: acquiring a visible light image and an invisible light image which are obtained by shooting the same target scene; performing first fusion on the visible light image and the invisible light image to obtain an initial fusion image; respectively extracting first edge information of the initial fusion image and second edge information of the visible light image; comparing the first edge information with the second edge information, and respectively determining fusion weights of the initial fusion image and the visible light image based on the comparison result; and performing second fusion on the initial fusion image and the visible light image based on the fusion weight to obtain a final fusion image.
To solve the above problem, a second aspect of the present application provides an image fusion apparatus including: a memory and a processor coupled to each other; the processor is configured to execute the program instructions stored in the memory to implement the image fusion method of the first aspect.
In order to solve the above problems, a third aspect of the present application provides an image fusion device, including an obtaining module, a first fusion module, an edge extraction module, a weight determination module, and a second fusion module, where the obtaining module is configured to obtain a visible light image and an invisible light image captured of a same target scene; the first fusion module is used for carrying out first fusion on the visible light image and the invisible light image to obtain an initial fusion image; the edge extraction module is used for respectively extracting first edge information of the initial fusion image and second edge information of the visible light image; the weight determining module is used for comparing the first edge information with the second edge information and respectively determining the fusion weight of the initial fusion image and the fusion weight of the visible light image based on the comparison result; and the second fusion module is used for carrying out second fusion on the initial fusion image and the visible light image based on the fusion weight to obtain a final fusion image.
In order to solve the above problem, a fourth aspect of the present application provides a storage device, on which program instructions capable of being executed by a processor are stored, the program instructions being used to implement the image fusion method of the first aspect.
In the above scheme, an initial fusion image is obtained by first fusing a visible light image and an invisible light image captured of the same target scene; first edge information obtained by performing edge extraction on the initial fusion image is compared with second edge information obtained by performing edge extraction on the visible light image, and the fusion weights for the second fusion of the initial fusion image and the visible light image are determined from the comparison result; finally, the initial fusion image and the visible light image are second-fused based on the obtained fusion weights to obtain a final fusion image. In this way, on the basis that the initial fusion image retains the complementary information of the visible light image and the invisible light image, weighting based on the edge information of the initial fusion image and the visible light image preserves local feature information, so that the final fusion image retains as much target scene information as possible.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of an image fusion method according to the present application;
FIG. 2 is a schematic processing flow diagram of an embodiment of an image fusion method according to the present application;
FIG. 3 is a flowchart illustrating an embodiment of step S12 in FIG. 1;
FIG. 4 is a flowchart illustrating an embodiment of step S121 in FIG. 3;
FIG. 5 is a flowchart illustrating an embodiment of step S14 in FIG. 1;
FIG. 6 is a flowchart illustrating an embodiment of step S52 in FIG. 5;
FIG. 7 is a flowchart illustrating an embodiment of step S15 in FIG. 1;
FIG. 8 is a flowchart illustrating an embodiment of step S151 in FIG. 7;
FIG. 9 is a schematic diagram of a framework of an embodiment of the image fusion apparatus of the present application;
FIG. 10 is a block diagram of an embodiment of the storage device of the present application;
fig. 11 is a schematic diagram of a framework of another embodiment of the image fusion apparatus of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of an image fusion method according to the present application. Specifically, the method may include:
step S11: and acquiring a visible light image and an invisible light image which are obtained by shooting the same target scene.
The visible light image is an image formed by imaging the object scene by sensing the reflection of light rays in a visible light waveband range by the imaging device. The visible light image can sufficiently reflect the color condition of the target scene. However, in actual engineering, the visible light image is susceptible to natural environments such as light and the like, and some detailed information is lost.
The invisible light image may be sensed by the image pickup device under irradiation of the object scene by the invisible light source. Invisible light refers to electromagnetic waves that are not perceivable by the human eye outside visible light, such as radio waves, infrared rays, ultraviolet rays, roentgen rays, gamma rays, and the like. In one implementation scenario, the invisible light image may be an Infrared image, and specifically, may be an image sensed by an imaging device under Near Infrared (NIR) radiation.
The image capture device, such as a black light camera, may rely on its own lens and sensor to obtain the visible light and invisible light images of the target scene. In one implementation scenario, the camera device may integrate a single lens with dual sensors, or dual lenses with dual sensors, which are used to capture the visible light image and the invisible light image, respectively. In one implementation scenario, the visible light image and the invisible light image captured by the image capturing device may be acquired by connecting to the image capturing device having the above-mentioned "dual sensor" configuration; for example, the visible light image and the invisible light image captured by the image capturing device in real time may be acquired based on the Real Time Streaming Protocol (RTSP). In other implementation scenarios, the visible light image and the invisible light image captured by the image capturing device may also be obtained in an off-line manner, such as via a removable storage medium, which is not limited in this embodiment.
In addition, an image pickup device capable of capturing the visible light image and an image pickup device capable of capturing the invisible light image can be controlled to separately capture the same target scene, so as to acquire the visible light image and the invisible light image.
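For concreteness, the following is a minimal sketch (in Python with OpenCV, which the patent does not prescribe) of grabbing one visible/NIR frame pair over RTSP; the stream URLs and the assumption that the camera exposes two separate streams are purely illustrative.

```python
# Illustrative only: acquiring one visible-light frame and one NIR frame over RTSP.
import cv2

VISIBLE_URL = "rtsp://192.168.1.64/stream/visible"  # hypothetical stream URL
NIR_URL = "rtsp://192.168.1.64/stream/nir"          # hypothetical stream URL

def grab_frame_pair():
    cap_vis = cv2.VideoCapture(VISIBLE_URL)
    cap_nir = cv2.VideoCapture(NIR_URL)
    ok_v, visible = cap_vis.read()   # BGR visible-light frame
    ok_n, nir = cap_nir.read()       # near-infrared frame
    cap_vis.release()
    cap_nir.release()
    if not (ok_v and ok_n):
        raise RuntimeError("failed to read one of the streams")
    return visible, nir
```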
Step S12: and carrying out first fusion on the visible light image and the invisible light image to obtain an initial fusion image.
The visible light image can sufficiently reflect the color condition of the target scene, but is susceptible to natural conditions such as light and the like to lose some detail information, and the invisible light image is not susceptible to natural light, so that the initial image obtained by first fusing the visible light image and the invisible light image can retain the complementary information of both the visible light image and the invisible light image.
Step S13: and respectively extracting first edge information of the initial fusion image and second edge information of the visible light image.
The extraction of the edge information of the initial fusion image and the visible light image depends on an edge detection algorithm. Edge detection is a fundamental problem in the field of image processing and computer vision, and the purpose of edge detection is to identify points in a digital image where brightness variations are significant.
Currently, edge detection methods mainly fall into three categories. The first category is based on fixed local operations, such as differentiation and fitting. The second category comprises global extraction methods based on an energy minimization criterion, which analyze the problem with rigorous mathematical methods, define a cost function as the basis for optimal extraction, and extract edges from a globally optimal viewpoint; examples include the relaxation method and neural network analysis methods. The third category comprises image edge extraction methods based on techniques developed in recent years, such as the wavelet transform, mathematical morphology, and fractal theory.
In practical engineering, edge detection can be implemented with differential operators such as the Roberts, Sobel, Prewitt, and Kirsch operators, or with the Canny operator. Edge detection algorithms are prior art in the field and are not described in detail in this embodiment.
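As an illustration only (the patent does not mandate any particular operator or library), the following Python/OpenCV sketch computes a Sobel gradient magnitude, which can serve as a per-pixel edge feature value, and a binary Canny edge map; the kernel size and thresholds are assumed values.

```python
# Illustrative edge extraction with Sobel and Canny operators.
import cv2
import numpy as np

def sobel_edge_magnitude(gray: np.ndarray) -> np.ndarray:
    """Per-pixel gradient magnitude, usable as an edge feature value."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.magnitude(gx, gy)

def canny_edges(gray: np.ndarray) -> np.ndarray:
    """Binary edge map (0/255) from the Canny detector; thresholds are illustrative."""
    return cv2.Canny(gray, threshold1=50, threshold2=150)
```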
Step S14: the first edge information and the second edge information are compared, and fusion weights of the initial fusion image and the visible light image are respectively determined based on the comparison result.
And comparing the first edge information of the obtained initial fusion image with the second edge information of the visible light image to determine the fusion weight of the initial fusion image and the visible light image, wherein the fusion weight has direct correlation with the edge information of the initial fusion image and the visible light image, so that the local feature information of the initial fusion image and the visible light image can be retained when the initial fusion image and the visible light image are subjected to second fusion based on the fusion weight.
Step S15: and performing second fusion on the initial fusion image and the visible light image based on the fusion weight to obtain a final fusion image.
In this method, an initial fusion image is obtained by first fusing a visible light image and an invisible light image captured of the same target scene; first edge information obtained by edge extraction of the initial fusion image is compared with second edge information obtained by edge extraction of the visible light image to determine the fusion weights for the second fusion of the initial fusion image and the visible light image; finally, the initial fusion image and the visible light image are second-fused based on the obtained fusion weights to obtain a final fusion image. In this way, on the basis that the initial fusion image retains the complementary information of the visible light image and the invisible light image, weighting based on the edge information of the initial fusion image and the visible light image preserves local feature information, so that the final fusion image retains as much target scene information as possible and describes the target scene as comprehensively and clearly as possible.
The following describes, in detail, implementation steps of the image fusion method in accordance with the present application, with reference to a processing flow diagram of an embodiment of the image fusion method in accordance with fig. 2 and other flow diagrams.
In a first aspect:
in the first aspect, a specific implementation step of performing the first fusion on the visible light image and the invisible light image to obtain an initial fused image in step S12 according to the above embodiment of the present application will be specifically described.
Referring to fig. 3, fig. 3 is a schematic flowchart of step S12 in fig. 1. Specifically, the method may include:
step S121: color information and luminance information of the visible light image and luminance information of the invisible light image are extracted.
In one implementation scenario, before step S121, the method may further include registering the visible light image with the invisible light image. Specifically, image registration maps one image onto another by finding a spatial transformation between the two images such that points corresponding to the same spatial position correspond one to one. Methods of image registration can be roughly classified into three categories. The first category is based on gray scale and templates: a correlation value is computed directly through correlation operations to find the optimal matching position. Template matching (block matching) searches another image for a sub-image similar to a known template image; gray-scale-based matching algorithms, also called correlation matching algorithms, match with a two-dimensional sliding template in the spatial domain, commonly using the mean absolute difference algorithm, the sum of absolute differences algorithm, the sum of squared errors algorithm, the mean sum of squared errors algorithm, and the like. The second category comprises feature-based matching methods such as optical flow and Haar-like methods. The third category comprises domain-transform-based methods such as the Walsh transform and the wavelet transform. Image registration methods are prior art in the field and are not described in detail in this embodiment.
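As a hedged illustration of one possible registration step (the patent does not specify the method), the following Python/OpenCV sketch aligns the invisible light image to the visible light image with the ECC algorithm under an assumed affine motion model; OpenCV 4.x is assumed for the findTransformECC call used here.

```python
# Illustrative intensity-based registration of the NIR image onto the visible image (OpenCV 4.x assumed).
import cv2
import numpy as np

def register_nir_to_visible(visible_bgr: np.ndarray, nir_gray: np.ndarray) -> np.ndarray:
    vis_gray = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2GRAY)
    warp = np.eye(2, 3, dtype=np.float32)  # initial affine warp (identity)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(vis_gray, nir_gray, warp,
                                   cv2.MOTION_AFFINE, criteria, None, 5)
    h, w = vis_gray.shape
    # warp the NIR image into the visible image's coordinate frame
    return cv2.warpAffine(nir_gray, warp, (w, h), flags=cv2.INTER_LINEAR)
```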
Specifically, referring to fig. 4, in this embodiment, step S121 may be implemented by the following steps:
step S41: and respectively converting the visible light image and the invisible light image into a preset color space.
The preset color space may use the HSI (Hue-Saturation-Intensity) color model, or may use the YUV color model. When the preset color space adopts the HSI color model, the hue represented by H and the saturation represented by S constitute the color information of the image, and the intensity (brightness) represented by I constitutes the brightness information of the image; when the preset color space adopts the YUV color model, the luminance represented by Y constitutes the brightness information of the image, and the chrominance components represented by U and V constitute the color information of the image.
As shown in fig. 2, image P1 represents a visible light image and image P2 represents an invisible light image, each with a resolution of 800 × 600. Both the visible light image P1 and the invisible light image P2 are converted to the YUV color model; it should be noted that, before this, the visible light image P1 and the invisible light image P2 have been registered.
Step S42: and respectively separating the luminance component and the color component in the visible light image and the invisible light image after conversion to obtain the color component and the luminance component of the visible light image and the luminance component of the invisible light image.
For example, when both the visible light image P1 and the invisible light image P2 are converted to YUV color models, the luminance component (i.e., Y component) and the color component (i.e., U, V component) of the visible light image P1 and the invisible light image P2 are separated, resulting in the color component (U, V component) and the luminance component (Y component) of the visible light image P1, and the luminance component (Y component) of the invisible light image P2.
Step S122: replacing the brightness information of the visible light image with the brightness information of the invisible light image, and forming an initial fusion image from the color information of the visible light image and the replaced brightness information.
For example, the luminance information (i.e., the Y component) of the visible-light image P1 is replaced with the luminance information (i.e., the Y component) of the invisible-light image P2, and the initial fusion image P3 is composed of the color information (i.e., the U, V component) of the visible-light image P1 and the replaced luminance information (i.e., the luminance component: the Y component of the invisible-light image P2).
In the above manner, since the visible light image can sufficiently reflect the color of the target scene but is susceptible to natural conditions such as lighting and loses some detail information (e.g., brightness), while imaging of the invisible light image is not susceptible to natural light, the brightness information of the visible light image is replaced with that of the invisible light image, and the initial fusion image is composed of the color information of the visible light image and the replaced brightness information. The initial fusion image thus retains better color information and provides a data basis for subsequently optimizing the brightness information.
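A minimal sketch of this first fusion, assuming registered images of equal size, 8-bit data, and the YUV color model; converting the result back to BGR at the end is an illustrative choice, not something the patent requires.

```python
# Illustrative first fusion: keep the visible image's U/V channels, replace Y with the NIR luminance.
import cv2
import numpy as np

def first_fusion(visible_bgr: np.ndarray, nir_gray: np.ndarray) -> np.ndarray:
    # assumes nir_gray is a registered single-channel uint8 image of the same size
    yuv = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2YUV)
    yuv[:, :, 0] = nir_gray                       # replace Y with the NIR luminance
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)   # initial fusion image (P3)
```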
In a second aspect:
second aspect, a specific implementation step of extracting the first edge information of the initial fusion image and the second edge information of the visible light image in step S13 according to the above embodiment of the present application will be specifically described.
Referring to fig. 2, step S13 in fig. 1 may specifically include: and performing edge extraction on the initial fusion image and the visible light image by adopting a preset edge extraction algorithm to correspondingly obtain a first edge image and a second edge image, wherein the first edge image comprises first edge information, and the second edge image comprises second edge information.
The preset edge extraction algorithm may be a differential operator such as Roberts operator, Sobel operator, Prewitt operator, Krisch operator, and other operators such as Canny operator, as described in the foregoing embodiments.
As shown in fig. 2, the initial fused image P3 is subjected to edge extraction with the preset edge extraction algorithm to obtain a first edge image P4, and the visible light image is subjected to edge extraction with the preset edge extraction algorithm to obtain a second edge image P5; at this point the original resolution of 800 × 600 is kept unchanged.
Further, the first edge image P4 contains first edge information, and the second edge image P5 contains second edge information.
In a third aspect:
the third aspect will specifically describe a specific implementation step of comparing the first edge information with the second edge information in step S14 of the above-described embodiment of the present application, and determining the fusion weights of the initial fusion image and the visible light image based on the comparison result, respectively.
Specifically, step S14 in fig. 1 may include: comparing the first edge information with the second edge information, and determining N groups of fusion weights W based on the comparison result, wherein each group of fusion weights W_k includes a first fusion weight W_1k and a second fusion weight W_2k corresponding to resolution R_k.
Specifically, the first edge information includes a first edge feature value of each pixel point p(i,j) in the initial fusion image, and the second edge information includes a second edge feature value of each pixel point p(i,j) in the visible light image. Referring to FIG. 2, the first edge information includes the first edge feature value of each pixel point p(i,j) in the initial fused image P3, and the second edge information includes the second edge feature value of each pixel point p(i,j) in the visible light image P1.
Referring to fig. 2 and 5 in combination, the above-mentioned step of comparing the first edge information with the second edge information and determining N groups of fusion weights W based on the comparison result, wherein each group of fusion weights W_k includes a first fusion weight W_1k and a second fusion weight W_2k corresponding to resolution R_k, can specifically be implemented by the following steps:
step S51: respectively corresponding pixel points p in the first edge information and the second edge information(i,j)Comparing the edge characteristic values to obtain each pixel point p(i,j)Comparing the characteristic values of (1).
Referring to FIG. 2, the corresponding pixel points p(i,j) in the first edge information contained in the first edge image P4 (obtained by edge extraction of the initial fused image P3) and in the second edge information contained in the second edge image P5 (obtained by edge extraction of the visible light image P1) are compared. For example, the edge feature value of pixel p(1,1) in the first edge image P4 is compared with the edge feature value of pixel p(1,1) in the second edge image P5, the edge feature value of pixel p(1,2) in the first edge image P4 is compared with the edge feature value of pixel p(1,2) in the second edge image P5, and so on until all pixels have been compared.
Step S52: determining, according to the feature value comparison result of each pixel point p(i,j), a first sub-weight of each pixel point p(i,j) in the initial fusion image and a second sub-weight of each pixel point p(i,j) in the visible light image, wherein the first sub-weights and second sub-weights of all pixel points p(i,j) form a first sub-weight set and a second sub-weight set corresponding to the original resolution, respectively.
That is, the first sub-weight of each pixel point p(i,j) in the initial fusion image and the second sub-weight of each pixel point p(i,j) in the visible light image are determined according to the feature value comparison result of each pixel point p(i,j); at this time, the first sub-weights and second sub-weights of all pixel points p(i,j) form a first sub-weight set and a second sub-weight set corresponding to the original resolution, respectively.
Specifically, referring to fig. 6, step S52 can be implemented by the following steps:
step S521: judging pixel point p(i,j)Whether the corresponding first edge feature value is not less than the second edge feature value is determined, if yes, step S522 is performed, and if not, step S523 is performed.
Referring to fig. 2, for example, it is sequentially determined whether the first edge feature value corresponding to the pixel point p (1, 1) is not less than the second edge feature value, and it is determined whether the first edge feature value corresponding to the pixel point p (1, 2) is not less than the second edge feature value, and so on until the comparison between the first edge feature value and the second edge feature value corresponding to all the pixel points is completed.
Step S522: setting the first sub-weight of the pixel point p(i,j) in the initial fusion image to a first preset weight value, and setting the second sub-weight of the pixel point p(i,j) in the visible light image to a second preset weight value.
Referring to fig. 2, for example, if the first edge feature value corresponding to P (1, 1) is not less than the second edge feature value, the first sub-weight of the pixel point P (1, 1) in the initial fused image P3 is set as the first preset weight value, and the first preset weight value may be 1. In an implementation scenario, the first preset weight value may also be other positive numbers smaller than 1, for example: 0.9, 0.8, etc. In another implementation scenario, the first preset weight value may also be a positive number greater than 1. And setting a second sub-weight of the pixel point P (1, 1) in the visible light image P1 as a second preset weight value, where the second preset weight value may be 0. In an implementation scenario, the second preset weight value may also be other positive numbers smaller than 1, for example: 0.1, 0.2, etc. In an implementation scenario, the first preset weight value is greater than the second preset weight value by a preset threshold, for example, 0.5, and the specific value of the preset threshold is not limited in this embodiment.
Step S523: setting the first sub-weight of the pixel point p(i,j) in the initial fusion image to the second preset weight value, and setting the second sub-weight of the pixel point p(i,j) in the visible light image to the first preset weight value.
Referring to fig. 2, for example, if the first edge feature value corresponding to P (1, 1) is smaller than the second edge feature value, the first sub-weight of the pixel point P (1, 1) in the initial fused image P3 is set as the second preset weight value, and the second preset weight value may be 0. In an implementation scenario, the second preset weight value may also be other positive numbers smaller than 1, for example: 0.1, 0.2, etc. In another implementation scenario, the second preset weight value may also be a positive number greater than 1. And setting the second sub-weight of the pixel point P (1, 1) in the visible light image P1 as a first preset weight value, where the first preset weight value may be 1. In an implementation scenario, the first preset weight value may also be other positive numbers smaller than 1, for example: 0.9, 0.1, etc. In an implementation scenario, the first preset weight value is greater than the second preset weight value by a preset threshold, for example, 0.5, and the specific value of the preset threshold is not limited in this embodiment.
In one implementation scenario, the sum of the first preset weight value and the second preset weight value is 1. In another implementation scenario, the sum of the first preset weight value and the second preset weight value may not be 1, for example, 2, 3, 4, 5, and so on. Correspondingly, when the sum of the first preset weight value and the second preset weight value is 1, the subsequent weighting processing can be performed by adopting weighted summation; when the sum of the first preset weight value and the second preset weight value is not 1, a weighted average may be used for subsequent weighting processing.
The first sub-weights and second sub-weights of all pixel points p(i,j) respectively form a first sub-weight set W_11 and a second sub-weight set W_21 corresponding to the original resolution.
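A minimal sketch of steps S51 and S52 under the example weight values discussed above (1 and 0); the function and argument names are illustrative.

```python
# Illustrative per-pixel weight assignment from the edge feature comparison.
import numpy as np

def full_resolution_weights(edge_fused: np.ndarray, edge_visible: np.ndarray,
                            w_high: float = 1.0, w_low: float = 0.0):
    mask = edge_fused >= edge_visible  # per-pixel feature value comparison result
    w11 = np.where(mask, w_high, w_low).astype(np.float32)  # sub-weights of the initial fusion image
    w21 = np.where(mask, w_low, w_high).astype(np.float32)  # sub-weights of the visible light image
    return w11, w21
```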
Step S53: down-sampling the first sub-weight set corresponding to the original resolution with a first preset sampling strategy to obtain N-1 first sub-weight sets corresponding to different resolutions, and down-sampling the second sub-weight set corresponding to the original resolution with the first preset sampling strategy to obtain N-1 second sub-weight sets corresponding to different resolutions.
The first preset sampling strategy may be a Gaussian down-sampling algorithm. Referring to fig. 2, when the first sub-weight set W_11 corresponding to the original resolution R_1 is Gaussian down-sampled, rows and columns are removed from W_11 to obtain the next first sub-weight set W_12, and so on, until the N-th first sub-weight set W_1N is obtained, each set corresponding to a different resolution. The process for the corresponding second sub-weight set W_21 is analogous, yielding the next second sub-weight set W_22 through the N-th second sub-weight set W_2N, and is not described in detail in this embodiment.
In addition, Gaussian down-sampling is prior art in the field and is not described in detail in this embodiment.
Step S54: grouping the N first sub-weight sets and the N second sub-weight sets by resolution to obtain N groups of fusion weights W, wherein each first sub-weight set serves as a first fusion weight and each second sub-weight set serves as a second fusion weight.
With continued reference to FIG. 2, for example, the first sub-weight set W_11 and the second sub-weight set W_21 corresponding to the original resolution are grouped together to obtain the 1st group of fusion weights W_1, and so on, until the N-th first sub-weight set W_1N and second sub-weight set W_2N are grouped together to obtain the N-th group of fusion weights W_N. The first sub-weight sets W_11, W_12, ..., W_1N can be denoted as first fusion weights W_1k, where k is an integer from 1 to N; the second sub-weight sets W_21, W_22, ..., W_2N can be denoted as second fusion weights W_2k.
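A sketch of steps S53 and S54, assuming the Gaussian down-sampling is realized with cv2.pyrDown; grouping by resolution is expressed here as pairing the two weight pyramids level by level.

```python
# Illustrative weight pyramids and grouping by resolution.
import cv2
import numpy as np

def grouped_weight_pyramids(w11: np.ndarray, w21: np.ndarray, n_levels: int):
    w1 = [w11]  # W_1k, starting at the original resolution
    w2 = [w21]  # W_2k, starting at the original resolution
    for _ in range(n_levels - 1):
        w1.append(cv2.pyrDown(w1[-1]))  # Gaussian down-sampling of the first sub-weight set
        w2.append(cv2.pyrDown(w2[-1]))  # Gaussian down-sampling of the second sub-weight set
    # fusion_weights[k-1] = (W_1k, W_2k), i.e. one group of fusion weights per resolution
    return list(zip(w1, w2))
```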
In a fourth aspect:
the fourth aspect will specifically describe a specific implementation step of performing the second fusion on the initial fusion image and the visible light image based on the fusion weight in step S15 to obtain a final fusion image.
Referring to fig. 2 and fig. 7 in combination, fig. 7 is a schematic flowchart illustrating an embodiment of step S15 in fig. 1. Specifically, the method may include:
step S151: obtaining N sets of layered images I1Wherein each group of layered images
Figure BDA0002070632850000111
Including corresponding to a resolution RkFirst layered image of
Figure BDA0002070632850000112
And a second layered image
Figure BDA0002070632850000113
Wherein the N first layered images include an initial fused image and an initial imageAt least one first sampled image of the fused image, the N second layered images including a visible light image and at least one second sampled image of the visible light image.
Referring to FIG. 2, for example, N first layered images
Figure BDA0002070632850000121
Including corresponding to the original resolution R1And at least one first sample image, N second layered images
Figure BDA0002070632850000122
Including the visible light image P1 and at least one second sample image.
Specifically, referring to fig. 8, in the present embodiment, the step S151 may include:
step S1511: and respectively carrying out down-sampling on the initial fusion image and the visible light image by adopting a second preset sampling strategy to obtain N-1 first sampling images corresponding to different resolutions and N-1 second sampling images corresponding to different resolutions.
The second preset sampling strategy may be a Laplacian down-sampling algorithm, which generally down-samples the original image, up-samples the down-sampled result, subtracts the up-sampled image from the original image, and repeats these steps for each layer of the image. The specific Laplacian down-sampling algorithm is prior art in the field and is not described in detail in this embodiment.
Referring to fig. 2, the initial fused image P3 and the visible light image P1 are down-sampled by a second preset sampling strategy to obtain N-1 first sampled images P32, P33, … … and P3N corresponding to different resolutions and N-1 second sampled images P12, P13, … … and P1N corresponding to different resolutions.
Step S1512: and forming N first layered images by the initial fusion image and the N-1 first sampling images, and forming N second layered images by the visible light image and the N-1 second sampling images.
With continued reference to FIG. 2, the initial fused image P3 and the N-1 first sampled images form the N first layered images I_1k, and the visible light image P1 and the N-1 second sampled images form the N second layered images I_2k.
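As one plausible reading of this layering (the patent does not spell out the exact construction), the following sketch builds a standard Laplacian pyramid with OpenCV on a single-channel (gray) image, keeping a detail layer per resolution and the coarsest residual as the last layer.

```python
# Illustrative Laplacian pyramid construction for the layered images.
import cv2
import numpy as np

def laplacian_pyramid(img: np.ndarray, n_levels: int):
    # img: single-channel gray image; returned levels are float32, fine to coarse
    current = img.astype(np.float32)
    levels = []
    for _ in range(n_levels - 1):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        levels.append(current - up)  # detail layer at this resolution
        current = down
    levels.append(current)           # coarsest (residual) layer
    return levels
```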
Step S152: weighting each group of layered images I_k with the corresponding group of fusion weights W_k at the same resolution R_k to obtain a fused sub-image F_k, wherein the first fusion weight W_1k serves as the weight of the first layered image I_1k and the second fusion weight W_2k serves as the weight of the second layered image I_2k.
Referring to FIG. 2, the group of layered images I_1 corresponding to the original resolution R_1 is weighted with the corresponding group of fusion weights W_1 to obtain a fused sub-image F_1, wherein the first fusion weight W_11 of the corresponding group serves as the weight of the first layered image I_11, i.e. the weight of the initial fused image P3 at the original resolution, and the second fusion weight W_21 of the corresponding group serves as the weight of the second layered image I_21, i.e. the weight of the visible light image P1 at the original resolution, and so on, which is not described in detail herein. The same resolution in this embodiment and other embodiments of the present application means having the same width resolution L_k and height resolution H_k.
Specifically, the first fusion weight W_1k includes a first sub-weight W_1k(i,j) of each pixel point p(i,j) in the first layered image I_1k, and the second fusion weight W_2k includes a second sub-weight W_2k(i,j) of each pixel point p(i,j) in the second layered image I_2k.
Step S152 in this embodiment may specifically include:
in this embodiment, each pixel point p(i,j)First sub-weight of
Figure BDA0002070632850000138
And a second sub-weight
Figure BDA0002070632850000139
Is 1, with the corresponding first sub-weight
Figure BDA00020706328500001310
And a second sub-weight
Figure BDA00020706328500001311
For the first layered image
Figure BDA00020706328500001312
And a second layered image
Figure BDA00020706328500001313
Corresponding pixel point p in(i,j)Is weighted and summed to obtain a fused sub-image
Figure BDA00020706328500001314
Middle corresponding pixel point p(i,j)The value of (c). Specifically, the following formula can be used to obtain the fused sub-image
Figure BDA00020706328500001315
Value of each pixel in the image
Figure BDA00020706328500001316
Figure BDA00020706328500001317
Wherein the content of the first and second substances,
Figure BDA00020706328500001318
is a first layered image
Figure BDA00020706328500001319
Middle pixel point p(i,j)The value of (a) is,
Figure BDA00020706328500001320
is a second layered image
Figure BDA00020706328500001321
Middle pixel point p(i,j)The value of (c). The pixel point p referred to in this embodiment and other embodiments of the present application(i,j)The value of (d) is the gray value of the pixel.
In one implementation scenario, if the sum of the first sub-weight W_1k(i,j) and the second sub-weight W_2k(i,j) of each pixel point p(i,j) is not 1, the value I_1k(i,j) of pixel point p(i,j) in the first layered image I_1k and the value I_2k(i,j) of pixel point p(i,j) in the second layered image I_2k may be weighted-averaged to obtain the value F_k(i,j) of each pixel in the fused sub-image F_k. Specifically, it can be calculated by the following formula:
F_k(i,j) = (W_1k(i,j) × I_1k(i,j) + W_2k(i,j) × I_2k(i,j)) / (W_1k(i,j) + W_2k(i,j))
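A minimal sketch of the per-level weighting described by the two formulas above; when the sub-weights sum to 1 the expression reduces to the weighted sum, otherwise it gives the weighted average.

```python
# Illustrative per-level fusion of one pair of layered images with one group of fusion weights.
import numpy as np

def fuse_level(i1k: np.ndarray, i2k: np.ndarray,
               w1k: np.ndarray, w2k: np.ndarray) -> np.ndarray:
    # all arrays are float32, single-channel (gray values), and of the same shape
    den = np.maximum(w1k + w2k, 1e-6)               # guards against a zero weight sum
    return (w1k * i1k + w2k * i2k) / den            # F_k(i,j)
```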
step S153: and performing image reconstruction on N fusion subimages obtained by respectively performing weighting processing on the N groups of layered images to obtain a final fusion image.
Wherein k is an integer from 1 to N. With continued reference to fig. 2, image reconstruction is performed on the N fused sub-images F_1, ..., F_N obtained by respectively weighting the N groups of layered images, to obtain a final fused image P6. Image reconstruction is the inverse process of image layering; it is prior art in the field and is not described in detail in this embodiment.
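A sketch of the image reconstruction, assuming the layered images were built as a Laplacian pyramid as in the earlier sketch: starting from the coarsest fused sub-image, each level is up-sampled and added to the next finer fused sub-image.

```python
# Illustrative collapse of the fused pyramid into the final fused image.
import cv2
import numpy as np

def reconstruct(fused_levels):
    # fused_levels: list F_1..F_N ordered fine to coarse, float32 single-channel
    current = fused_levels[-1]
    for finer in reversed(fused_levels[:-1]):
        current = cv2.pyrUp(current, dstsize=(finer.shape[1], finer.shape[0]))
        current = current + finer                    # add the detail layer back
    return np.clip(current, 0, 255).astype(np.uint8) # assumes 8-bit gray values
```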
Referring to fig. 9, fig. 9 is a schematic diagram of a framework of an embodiment of an image fusion apparatus according to the present application. Specifically, the image fusion apparatus in this embodiment includes a memory 910 and a processor 920 coupled to each other, and the processor 920 is configured to execute program instructions stored in the memory 910 to implement the steps of the image fusion method in any of the embodiments described above. Specifically, the processor 920 is configured to control the memory 910 to obtain a visible light image and an invisible light image captured of the same target scene, or, in an implementation scenario, the image fusion apparatus may further include a communication circuit, and the processor 920 is configured to control the communication circuit to obtain the visible light image and the invisible light image captured of the same target scene. The processor 920 is further configured to perform first fusion on the visible light image and the invisible light image to obtain an initial fusion image, the processor 920 is further configured to extract first edge information of the initial fusion image and second edge information of the visible light image, respectively, the processor 920 is further configured to compare the first edge information and the second edge information, and determine fusion weights of the initial fusion image and the visible light image based on the comparison result, respectively, and the processor 920 is further configured to perform second fusion on the initial fusion image and the visible light image based on the fusion weights to obtain a final fusion image.
The processor 920 controls the memory 910 and itself to implement the steps of any of the embodiments of the image fusion method described above. Processor 920 can also be referred to as a CPU (Central Processing Unit). The processor 920 may be an integrated circuit chip having signal processing capabilities. The Processor 920 may also be a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. In addition, the processor 920 may be implemented collectively by a plurality of circuit-forming chips.
In this method, an initial fusion image is obtained by first fusing a visible light image and an invisible light image captured of the same target scene; first edge information obtained by edge extraction of the initial fusion image is compared with second edge information obtained by edge extraction of the visible light image to determine the fusion weights for the second fusion of the initial fusion image and the visible light image; finally, the initial fusion image and the visible light image are second-fused based on the obtained fusion weights to obtain a final fusion image. In this way, on the basis that the initial fusion image retains the complementary information of the visible light image and the invisible light image, weighting based on the edge information of the initial fusion image and the visible light image preserves local feature information, so that the final fusion image retains as much target scene information as possible and describes the target scene as comprehensively and clearly as possible.
In one embodiment, the processor 920 is further configured to compare the first edge information with the second edge information and determine N groups of fusion weights W based on the comparison result, wherein each group of fusion weights W_k includes a first fusion weight W_1k and a second fusion weight W_2k corresponding to resolution R_k; the processor 920 is further configured to obtain N groups of layered images I, wherein each group of layered images I_k includes a first layered image I_1k and a second layered image I_2k corresponding to resolution R_k, the N first layered images include the initial fused image and at least one first sampled image of the initial fused image, and the N second layered images include the visible light image and at least one second sampled image of the visible light image; the processor 920 is further configured to weight each group of layered images I_k with the corresponding group of fusion weights W_k at the same resolution R_k to obtain a fused sub-image F_k, wherein the first fusion weight W_1k serves as the weight of the first layered image I_1k and the second fusion weight W_2k serves as the weight of the second layered image I_2k; and the processor 920 is further configured to perform image reconstruction on the N fused sub-images obtained by respectively weighting the N groups of layered images, to obtain a final fused image, where k is an integer from 1 to N.
In another embodiment, the first fusion weight W_1k includes a first sub-weight W_1k(i,j) of each pixel point p(i,j) in the first layered image I_1k, and the second fusion weight W_2k includes a second sub-weight W_2k(i,j) of each pixel point p(i,j) in the second layered image I_2k; the processor 920 is further configured to weight and sum the values of corresponding pixel points p(i,j) in the first layered image I_1k and the second layered image I_2k with the corresponding first sub-weight W_1k(i,j) and second sub-weight W_2k(i,j), to obtain the value of the corresponding pixel point p(i,j) in the fused sub-image F_k.
In yet another embodiment, the first edge information includes a first edge feature value of each pixel point p(i,j) in the initial fused image, and the second edge information includes a second edge feature value of each pixel point p(i,j) in the visible light image. The processor 920 is further configured to compare the edge feature values of corresponding pixel points p(i,j) in the first edge information and the second edge information to obtain a feature value comparison result for each pixel point p(i,j); the processor 920 is further configured to determine, according to the feature value comparison result of each pixel point p(i,j), a first sub-weight of each pixel point p(i,j) in the initial fusion image and a second sub-weight of each pixel point p(i,j) in the visible light image, wherein the first sub-weights and second sub-weights of all pixel points p(i,j) form a first sub-weight set and a second sub-weight set corresponding to the original resolution, respectively; the processor 920 is further configured to down-sample the first sub-weight set corresponding to the original resolution with a first preset sampling strategy to obtain N-1 first sub-weight sets corresponding to different resolutions, and to down-sample the second sub-weight set corresponding to the original resolution with the first preset sampling strategy to obtain N-1 second sub-weight sets corresponding to different resolutions; and the processor 920 is further configured to group the N first sub-weight sets and the N second sub-weight sets by resolution to obtain N groups of fusion weights W, where each first sub-weight set serves as a first fusion weight and each second sub-weight set serves as a second fusion weight.
In another embodiment, the processor 920 is further configured to, when the first edge feature value corresponding to a pixel point p(i,j) is not less than the second edge feature value, set the first sub-weight of the pixel point p(i,j) in the initial fusion image to a first preset weight value and set the second sub-weight of the pixel point p(i,j) in the visible light image to a second preset weight value; or, the processor 920 is further configured to, when the first edge feature value corresponding to a pixel point p(i,j) is less than the second edge feature value, set the first sub-weight of the pixel point p(i,j) in the initial fusion image to the second preset weight value and set the second sub-weight of the pixel point p(i,j) in the visible light image to the first preset weight value, wherein the sum of the first preset weight value and the second preset weight value is 1.
In yet another embodiment, the processor 920 is further configured to perform down-sampling on the initial fusion image and the visible light image respectively by using a second preset sampling strategy to obtain N-1 first sampling images corresponding to different resolutions and N-1 second sampling images corresponding to different resolutions, and the processor 920 is further configured to form N first layered images from the initial fusion image and the N-1 first sampling images, and form N second layered images from the visible light image and the N-1 second sampling images.
In yet another embodiment, the processor 920 is further configured to extract color information and luminance information of the visible light image and luminance information of the invisible light image, and the processor 920 is further configured to replace the luminance information of the visible light image with the luminance information of the invisible light and compose an initial fusion image from the color information of the visible light image and the replaced luminance information.
In yet another embodiment, the processor 920 is further configured to convert the visible light image and the invisible light image into a preset color space, respectively, and the processor 920 is further configured to separate a luminance component and a color component in the converted visible light image and the converted invisible light image, respectively, to obtain a color component and a luminance component of the visible light image and a luminance component of the invisible light image.
In yet another embodiment, the processor 920 is further configured to perform edge extraction on the initial fusion image and the visible light image with a preset edge extraction algorithm to correspondingly obtain a first edge image and a second edge image, where the first edge image contains the first edge information and the second edge image contains the second edge information. In an implementation scenario, the image fusion apparatus may further include an image pickup device 930, such as a black light camera, and the processor 920 is further configured to control the image pickup device 930 to photograph the target scene so as to obtain the visible light image and the invisible light image. In one implementation scenario, the captured invisible light image is an infrared image; in other implementation scenarios, the captured invisible light image may be an image other than an infrared image, such as a laser image.
Referring to fig. 10, fig. 10 is a schematic diagram of a memory device 1000 according to an embodiment of the present application. The memory device 1000 of the present application stores program instructions 1010 capable of being executed by a processor, and the program instructions 1010 are used for implementing steps of any of the embodiments of the image fusion method described above.
The storage device 1000 may be a medium that can store the program instructions 1010, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, or may be a server that stores the program instructions 1010, and the server may send the stored program instructions 1010 to other devices for operation, or may self-operate the stored program instructions 1010.
Referring to fig. 11, fig. 11 is a schematic diagram of a framework of an image fusion apparatus 1100 according to another embodiment of the present application. Specifically, the image fusion device 1100 includes an obtaining module 1110, a first fusion module 1120, an edge extraction module 1130, a weight determination module 1140, and a second fusion module 1150, where the obtaining module 1110 is configured to obtain a visible light image and an invisible light image captured of the same target scene; the first fusion module 1120 is configured to perform first fusion on the visible light image and the invisible light image to obtain an initial fusion image; the edge extraction module 1130 is configured to extract first edge information of the initial fusion image and second edge information of the visible light image, respectively; the weight determining module 1140 is configured to compare the first edge information with the second edge information and determine fusion weights of the initial fusion image and the visible light image based on the comparison result; the second fusion module 1150 is configured to perform second fusion on the initial fusion image and the visible light image based on the fusion weight to obtain a final fusion image.
By the method, the final fusion image can keep the target scene information as much as possible so as to describe the target scene as comprehensively and clearly as possible.
In one embodiment, the weight determination module 1140 is specifically configured to compare the first edge information with the second edge information and determine N sets of fusion weights W based on the comparison result, where each set of fusion weights W_k includes a first fusion weight W_{1k} and a second fusion weight W_{2k} corresponding to a resolution R_k. The second fusion module 1150 includes an obtaining unit for obtaining N sets of layered images I_1, where each group of layered images I_1^k includes a first layered image I_{11}^k and a second layered image I_{12}^k corresponding to the resolution R_k; the N first layered images include the initial fusion image and at least one first sampled image of the initial fusion image, and the N second layered images include the visible light image and at least one second sampled image of the visible light image. The second fusion module 1150 also includes a weighting unit for weighting each group of layered images I_1^k with the group of fusion weights W_k corresponding to the same resolution R_k to obtain a fused sub-image I_2^k, where the first fusion weight W_{1k} serves as the weight of the first layered image I_{11}^k and the second fusion weight W_{2k} serves as the weight of the second layered image I_{12}^k. The second fusion module 1150 further includes a reconstruction unit configured to perform image reconstruction on the N fused sub-images obtained by weighting the N groups of layered images, so as to obtain the final fusion image; here k is an integer from 1 to N.
In another embodiment, the first fusion weight W_{1k} includes a first sub-weight W_{1k}^{(i,j)} for each pixel point p_{(i,j)} in the first layered image I_{11}^k, and the second fusion weight W_{2k} includes a second sub-weight W_{2k}^{(i,j)} for each pixel point p_{(i,j)} in the second layered image I_{12}^k. The weighting unit of the second fusion module 1150 is specifically configured to perform a weighted summation, using the corresponding first sub-weight W_{1k}^{(i,j)} and second sub-weight W_{2k}^{(i,j)}, on the corresponding pixel points p_{(i,j)} in the first layered image I_{11}^k and the second layered image I_{12}^k, so as to obtain the value of the corresponding pixel point p_{(i,j)} in the fused sub-image I_2^k.
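Written out as an equation (the symbols follow the reconstruction used above; the original formula images are not reproduced in this text), the per-pixel weighted summation is:

I_2^k(i, j) = W_{1k}^{(i,j)} * I_{11}^k(i, j) + W_{2k}^{(i,j)} * I_{12}^k(i, j), with W_{1k}^{(i,j)} + W_{2k}^{(i,j)} = 1.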
In yet another embodiment, the first edge information includes a first edge feature value of each pixel point p_{(i,j)} in the initial fusion image, and the second edge information includes a second edge feature value of each pixel point p_{(i,j)} in the visible light image. The weight determination module 1140 includes a comparing unit for comparing the edge feature values of the corresponding pixel points p_{(i,j)} in the first edge information and the second edge information, so as to obtain a feature value comparison result for each pixel point p_{(i,j)}. The weight determination module 1140 further includes a determination unit for determining, according to the feature value comparison result of each pixel point p_{(i,j)}, the first sub-weight of each pixel point p_{(i,j)} in the initial fusion image and the second sub-weight of each pixel point p_{(i,j)} in the visible light image, where the first sub-weights and the second sub-weights of all pixel points respectively form a first sub-weight set and a second sub-weight set corresponding to the original resolution. The weight determination module 1140 further includes a sampling unit configured to downsample the first sub-weight set corresponding to the original resolution with a first preset sampling strategy to obtain N-1 first sub-weight sets corresponding to different resolutions, and to downsample the second sub-weight set corresponding to the original resolution with the same strategy to obtain N-1 second sub-weight sets corresponding to different resolutions. The weight determination module 1140 further includes a grouping unit configured to group the N first sub-weight sets and the N second sub-weight sets by resolution to obtain the N groups of fusion weights W, where each first sub-weight set is a first fusion weight and each second sub-weight set is a second fusion weight.
In yet another embodiment, the determining unit of the weight determination module 1140 is specifically configured to, if the first edge feature value corresponding to a pixel point p_{(i,j)} is not less than the second edge feature value, set the first sub-weight of the pixel point p_{(i,j)} in the initial fusion image to a first preset weight value and set the second sub-weight of the pixel point p_{(i,j)} in the visible light image to a second preset weight value; and, if the first edge feature value corresponding to the pixel point p_{(i,j)} is less than the second edge feature value, set the first sub-weight of the pixel point p_{(i,j)} in the initial fusion image to the second preset weight value and set the second sub-weight of the pixel point p_{(i,j)} in the visible light image to the first preset weight value. The sum of the first preset weight value and the second preset weight value is 1.
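A possible sketch of this weight construction, again under assumptions (a hard 1/0 pair of preset weight values, and pyramid-style downsampling as the first preset sampling strategy), is shown below. The two returned lists can be fed as weights1 and weights2 to the multi-resolution blending sketch given earlier.

```python
import cv2
import numpy as np

def build_weight_pyramids(edge_fused, edge_visible, n_levels, w_high=1.0):
    """Per-pixel fusion weights from an edge-strength comparison.

    edge_fused, edge_visible : float32 edge feature maps of the initial fusion
                               image and the visible light image (same size).
    w_high                   : the "first preset weight value"; the second preset
                               value is 1 - w_high so the two always sum to 1.
                               The value 1.0 (a hard selection) is an assumption.
    Returns two lists of N weight maps, one per resolution.
    """
    w_low = 1.0 - w_high
    # Where the initial fusion image has the stronger edge, favour it.
    w1 = np.where(edge_fused >= edge_visible, w_high, w_low).astype(np.float32)
    w2 = 1.0 - w1

    # Downsample the full-resolution weight sets to the remaining N-1 resolutions.
    weights1, weights2 = [w1], [w2]
    for _ in range(n_levels - 1):
        weights1.append(cv2.pyrDown(weights1[-1]))
        weights2.append(cv2.pyrDown(weights2[-1]))
    return weights1, weights2
```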
In yet another embodiment, the obtaining unit of the second fusion module 1150 is specifically configured to perform downsampling on the initial fusion image and the visible light image respectively by using a second preset sampling strategy, so as to obtain N-1 first sampling images corresponding to different resolutions and N-1 second sampling images corresponding to different resolutions; and forming N first layered images by the initial fusion image and the N-1 first sampling images, and forming N second layered images by the visible light image and the N-1 second sampling images.
In yet another embodiment, the first fusion module 1120 is specifically configured to extract color information and luminance information of the visible light image and luminance information of the invisible light image; and replacing the brightness information of the visible light image with the brightness information of the invisible light image, and forming an initial fusion image by the color information of the visible light image and the replaced brightness information.
In yet another embodiment, the first fusion module 1120 is further configured to convert the visible light image and the invisible light image into a preset color space respectively; and respectively separating the luminance component and the color component in the visible light image and the invisible light image after conversion to obtain the color component and the luminance component of the visible light image and the luminance component of the invisible light image.
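A minimal sketch of this first fusion, assuming YCrCb as the preset color space (the embodiment only requires a space in which the luminance and color components can be separated), could look like this:

```python
import cv2

def first_fusion(visible_bgr, invisible):
    """First fusion by luminance replacement.

    `invisible` may be a single-channel (e.g. infrared) image or a BGR image;
    only its luminance is used.
    """
    vis_ycrcb = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2YCrCb)

    if invisible.ndim == 2:                       # already a luminance image
        nir_luma = invisible
    else:                                         # take Y from a colour image
        nir_luma = cv2.cvtColor(invisible, cv2.COLOR_BGR2YCrCb)[:, :, 0]

    # Keep the color components (Cr, Cb) of the visible image and replace its
    # luminance (Y) with that of the invisible image.
    fused = vis_ycrcb.copy()
    fused[:, :, 0] = nir_luma
    return cv2.cvtColor(fused, cv2.COLOR_YCrCb2BGR)
```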
In another embodiment, the edge extraction module 1130 is specifically configured to perform edge extraction on the initial fusion image and the visible light image by using a preset edge extraction algorithm, and correspondingly obtain a first edge image and a second edge image, where the first edge image includes first edge information, and the second edge image includes second edge information.
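A minimal sketch of such edge extraction, assuming a Sobel gradient magnitude as the preset edge extraction algorithm (a Laplacian or Canny-style operator could equally play this role):

```python
import cv2
import numpy as np

def edge_feature_map(image_bgr):
    """Per-pixel edge feature values via a Sobel gradient magnitude."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.magnitude(gx, gy)
```

Applying it to the initial fusion image and to the visible light image would yield the first edge image and the second edge image, respectively.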
In another embodiment, the obtaining module 1110 is specifically configured to capture a target scene by using a black light camera to obtain a visible light image and an invisible light image, where the invisible light image is an infrared image.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Claims (12)

1. An image fusion method, comprising:
acquiring a visible light image and an invisible light image which are obtained by shooting the same target scene;
performing first fusion on the visible light image and the invisible light image to obtain an initial fusion image;
respectively extracting first edge information of the initial fusion image and second edge information of the visible light image;
comparing the first edge information with the second edge information, and respectively determining fusion weights of the initial fusion image and the visible light image based on the comparison result;
performing second fusion on the initial fusion image and the visible light image based on the fusion weight to obtain a final fusion image;
the comparing the first edge information and the second edge information and determining the fusion weight of the initial fusion image and the visible light image respectively based on the comparison result comprises:
comparing the first edge information with the second edge information, and determining N sets of fusion weights W based on the comparison result, wherein each set of fusion weights W_k includes a first fusion weight W_{1k} and a second fusion weight W_{2k} corresponding to a resolution R_k;
And performing second fusion on the initial fusion image and the visible light image based on the fusion weight to obtain a final fusion image, wherein the second fusion comprises:
obtaining N sets of layered images I_1, wherein each group of layered images I_1^k includes a first layered image I_{11}^k and a second layered image I_{12}^k corresponding to a resolution R_k, wherein the N first layered images comprise the initial fusion image and at least one first sampled image of the initial fusion image, and the N second layered images comprise the visible light image and at least one second sampled image of the visible light image;
weighting each group of layered images I_1^k with the group of fusion weights W_k corresponding to the same resolution R_k to obtain a fused sub-image I_2^k, wherein the first fusion weight W_{1k} serves as the weight of the first layered image I_{11}^k and the second fusion weight W_{2k} serves as the weight of the second layered image I_{12}^k; and
performing image reconstruction on the N fused sub-images obtained by the weighting of the N groups of layered images, to obtain the final fusion image;
wherein k is an integer from 1 to N.
2. The method of claim 1, wherein the first fusion weight W_{1k} includes a first sub-weight W_{1k}^{(i,j)} for each pixel point p_{(i,j)} in the first layered image I_{11}^k, and the second fusion weight W_{2k} includes a second sub-weight W_{2k}^{(i,j)} for each pixel point p_{(i,j)} in the second layered image I_{12}^k;
wherein the weighting each group of layered images I_1^k with the group of fusion weights W_k corresponding to the same resolution R_k to obtain a fused sub-image I_2^k comprises:
performing a weighted summation, using the corresponding first sub-weight W_{1k}^{(i,j)} and second sub-weight W_{2k}^{(i,j)}, on the corresponding pixel points p_{(i,j)} in the first layered image I_{11}^k and the second layered image I_{12}^k, to obtain the value of the corresponding pixel point p_{(i,j)} in the fused sub-image I_2^k.
3. The method according to claim 2, wherein the first edge information comprises a first edge feature value of each pixel point p_{(i,j)} in the initial fusion image, and the second edge information comprises a second edge feature value of each pixel point p_{(i,j)} in the visible light image;
the comparing the first edge information with the second edge information and determining N sets of fusion weights W based on the comparison result includes:
comparing the edge feature values of the corresponding pixel points p_{(i,j)} in the first edge information and the second edge information respectively, to obtain a feature value comparison result for each pixel point p_{(i,j)};
determining, according to the feature value comparison result of each pixel point p_{(i,j)}, the first sub-weight of each pixel point p_{(i,j)} in the initial fusion image and the second sub-weight of each pixel point p_{(i,j)} in the visible light image, wherein the first sub-weights and the second sub-weights of all pixel points respectively form a first sub-weight set and a second sub-weight set corresponding to the original resolution;
adopting a first preset sampling strategy to downsample the first sub-weight set corresponding to the original resolution to obtain N-1 first sub-weight sets corresponding to different resolutions, and adopting the first preset sampling strategy to downsample the second sub-weight set corresponding to the original resolution to obtain N-1 second sub-weight sets corresponding to different resolutions; and
grouping the N first sub-weight sets and the N second sub-weight sets according to resolution to obtain the N sets of fusion weights W, wherein each first sub-weight set is one first fusion weight and each second sub-weight set is one second fusion weight.
4. The method of claim 3, wherein the determining, according to the feature value comparison result of each pixel point p_{(i,j)}, the first sub-weight of each pixel point p_{(i,j)} in the initial fusion image and the second sub-weight of each pixel point p_{(i,j)} in the visible light image comprises:
if the first edge feature value corresponding to the pixel point p_{(i,j)} is not less than the second edge feature value, setting the first sub-weight of the pixel point p_{(i,j)} in the initial fusion image to a first preset weight value, and setting the second sub-weight of the pixel point p_{(i,j)} in the visible light image to a second preset weight value;
if the first edge feature value corresponding to the pixel point p_{(i,j)} is less than the second edge feature value, setting the first sub-weight of the pixel point p_{(i,j)} in the initial fusion image to the second preset weight value, and setting the second sub-weight of the pixel point p_{(i,j)} in the visible light image to the first preset weight value;
and the sum of the first preset weight value and the second preset weight value is 1.
5. The method of claim 1, wherein the obtaining N sets of layered images I_1 comprises:
respectively performing down-sampling on the initial fusion image and the visible light image by adopting a second preset sampling strategy to obtain N-1 first sampling images corresponding to different resolutions and N-1 second sampling images corresponding to different resolutions;
and forming N first layered images by the initial fusion image and the N-1 first sampling images, and forming N second layered images by the visible light image and the N-1 second sampling images.
6. The method according to claim 1, wherein the performing first fusion on the visible light image and the invisible light image to obtain an initial fusion image comprises:
extracting color information and brightness information of the visible light image and brightness information of the invisible light image;
and replacing the brightness information of the visible light image with the brightness information of the invisible light image, and forming the initial fusion image by the color information of the visible light image and the replaced brightness information.
7. The method of claim 6,
the extracting of the color information and the brightness information of the visible light image and the brightness information of the invisible light image includes:
respectively converting the visible light image and the invisible light image into a preset color space;
and respectively separating the luminance component and the color component in the visible light image and the invisible light image after conversion to obtain the color component and the luminance component of the visible light image and the luminance component of the invisible light image.
8. The method of claim 1,
the respectively extracting first edge information of the initial fusion image and second edge information of the visible light image includes:
performing edge extraction on the initial fusion image and the visible light image by adopting a preset edge extraction algorithm to obtain a first edge image and a second edge image correspondingly, wherein the first edge image comprises the first edge information, and the second edge image comprises the second edge information;
the acquiring of the visible light image and the invisible light image obtained by shooting the same target scene comprises the following steps:
a black light camera is used for shooting a target scene to obtain a visible light image and an invisible light image, wherein the invisible light image is an infrared image.
9. An image fusion device, comprising a memory and a processor coupled to each other;
the processor is configured to execute the program instructions stored by the memory to implement the method of any of claims 1 to 8.
10. The apparatus according to claim 9, further comprising an image pickup device for capturing a visible light image and an invisible light image.
11. An image fusion apparatus characterized by comprising:
the first acquisition module is used for acquiring a visible light image and an invisible light image which are obtained by shooting the same target scene;
the first fusion module is used for carrying out first fusion on the visible light image and the invisible light image to obtain an initial fusion image;
an edge extraction module, configured to extract first edge information of the initial fusion image and second edge information of the visible light image, respectively;
the weight determining module is used for comparing the first edge information with the second edge information and respectively determining the fusion weight of the initial fusion image and the fusion weight of the visible light image based on the comparison result;
the second fusion module is used for carrying out second fusion on the initial fusion image and the visible light image based on the fusion weight to obtain a final fusion image;
wherein the weight determination module comprises: a weight determination submodule for comparing the first edge information with the second edge information and determining N groups of fusion weights W based on the comparison result, wherein each group of fusion weights W_k includes a first fusion weight W_{1k} and a second fusion weight W_{2k} corresponding to a resolution R_k, wherein k is an integer from 1 to N;
wherein the second fusion module comprises:
a second acquisition module for acquiring N groups of layered images I_1, wherein each group of layered images I_1^k includes a first layered image I_{11}^k and a second layered image I_{12}^k corresponding to a resolution R_k, wherein the N first layered images include the initial fusion image and at least one first sampled image of the initial fusion image, and the N second layered images include the visible light image and at least one second sampled image of the visible light image, wherein k is an integer from 1 to N;
a weighting processing module for weighting each group of layered images I_1^k with the group of fusion weights W_k corresponding to the same resolution R_k to obtain a fused sub-image I_2^k, wherein the first fusion weight W_{1k} serves as the weight of the first layered image I_{11}^k and the second fusion weight W_{2k} serves as the weight of the second layered image I_{12}^k, wherein k is an integer from 1 to N;
and the image reconstruction module is used for performing image reconstruction on the N fusion sub-images obtained by respectively performing the weighting processing on the N groups of layered images to obtain a final fusion image.
12. A storage device storing program instructions executable by a processor to perform the method of any one of claims 1 to 8.
CN201910436319.1A 2019-05-23 2019-05-23 Image fusion method and device, and storage device Active CN110210541B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910436319.1A CN110210541B (en) 2019-05-23 2019-05-23 Image fusion method and device, and storage device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910436319.1A CN110210541B (en) 2019-05-23 2019-05-23 Image fusion method and device, and storage device

Publications (2)

Publication Number Publication Date
CN110210541A CN110210541A (en) 2019-09-06
CN110210541B true CN110210541B (en) 2021-09-03

Family

ID=67788443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910436319.1A Active CN110210541B (en) 2019-05-23 2019-05-23 Image fusion method and device, and storage device

Country Status (1)

Country Link
CN (1) CN110210541B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7316809B2 (en) * 2019-03-11 2023-07-28 キヤノン株式会社 Image processing device, image processing device control method, system, and program
CN112017252A (en) * 2019-05-31 2020-12-01 华为技术有限公司 Image processing method and related equipment
CN110796629B (en) * 2019-10-28 2022-05-17 杭州涂鸦信息技术有限公司 Image fusion method and system
CN111563552B (en) * 2020-05-06 2023-09-05 浙江大华技术股份有限公司 Image fusion method, related device and apparatus
US11457116B2 (en) * 2020-06-17 2022-09-27 Ricoh Company, Ltd. Image processing apparatus and image reading method
CN116228618B (en) * 2023-05-04 2023-07-14 中科三清科技有限公司 Meteorological cloud image processing system and method based on image recognition

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104683767A (en) * 2015-02-10 2015-06-03 浙江宇视科技有限公司 Fog penetrating image generation method and device
CN106600572A (en) * 2016-12-12 2017-04-26 长春理工大学 Adaptive low-illumination visible image and infrared image fusion method
CN107580163A (en) * 2017-08-12 2018-01-12 四川精视科技有限公司 A kind of twin-lens black light camera
WO2018048231A1 (en) * 2016-09-08 2018-03-15 Samsung Electronics Co., Ltd. Method and electronic device for producing composite image
CN108364272A (en) * 2017-12-30 2018-08-03 广东金泽润技术有限公司 A kind of high-performance Infrared-Visible fusion detection method
CN109712102A (en) * 2017-10-25 2019-05-03 杭州海康威视数字技术股份有限公司 A kind of image interfusion method, device and image capture device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7826736B2 (en) * 2007-07-06 2010-11-02 Flir Systems Ab Camera and method for use with camera
US8619082B1 (en) * 2012-08-21 2013-12-31 Pelican Imaging Corporation Systems and methods for parallax detection and correction in images captured using array cameras that contain occlusions using subsets of images to perform depth estimation
WO2016109585A1 (en) * 2014-12-31 2016-07-07 Flir Systems, Inc. Image enhancement with fusion
CN105069768B (en) * 2015-08-05 2017-12-29 武汉高德红外股份有限公司 A kind of visible images and infrared image fusion processing system and fusion method
CN107103331B (en) * 2017-04-01 2020-06-16 中北大学 Image fusion method based on deep learning
TWI630559B (en) * 2017-06-22 2018-07-21 佳世達科技股份有限公司 Image capturing device and image capturing method
CN107248150A (en) * 2017-07-31 2017-10-13 杭州电子科技大学 A kind of Multiscale image fusion methods extracted based on Steerable filter marking area
CN107730482B (en) * 2017-09-28 2021-07-06 电子科技大学 Sparse fusion method based on regional energy and variance
CN109670522A (en) * 2018-09-26 2019-04-23 天津工业大学 A kind of visible images and infrared image fusion method based on multidirectional laplacian pyramid

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104683767A (en) * 2015-02-10 2015-06-03 浙江宇视科技有限公司 Fog penetrating image generation method and device
WO2018048231A1 (en) * 2016-09-08 2018-03-15 Samsung Electronics Co., Ltd. Method and electronic device for producing composite image
CN106600572A (en) * 2016-12-12 2017-04-26 长春理工大学 Adaptive low-illumination visible image and infrared image fusion method
CN107580163A (en) * 2017-08-12 2018-01-12 四川精视科技有限公司 A kind of twin-lens black light camera
CN109712102A (en) * 2017-10-25 2019-05-03 杭州海康威视数字技术股份有限公司 A kind of image interfusion method, device and image capture device
CN108364272A (en) * 2017-12-30 2018-08-03 广东金泽润技术有限公司 A kind of high-performance Infrared-Visible fusion detection method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Real-Time FPGA Implementation of Visible Near Infrared Fusion Based Image Enhancement; Mohamed Awad; 2018 25th IEEE International Conference on Image Processing (ICIP); 2018-09-06; pp. 2381-8549 *
Infrared and Visible Image Fusion Based on FPDEs and CBF; Li Changxing; Computer Science; 2019-03-15; Vol. 46, No. 1; pp. 297-302 *

Also Published As

Publication number Publication date
CN110210541A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN110210541B (en) Image fusion method and device, and storage device
Blum et al. An Overview of lmage Fusion
Liu et al. MLFcGAN: Multilevel feature fusion-based conditional GAN for underwater image color correction
Acharya et al. Image processing: principles and applications
CN106846289A (en) A kind of infrared light intensity and polarization image fusion method based on conspicuousness migration with details classification
WO2021225472A2 (en) Joint objects image signal processing in temporal domain
Iwasokun et al. Image enhancement methods: a review
Rao et al. An Efficient Contourlet-Transform-Based Algorithm for Video Enhancement.
Zhang et al. Salient target detection based on the combination of super-pixel and statistical saliency feature analysis for remote sensing images
Ito et al. Compressive epsilon photography for post-capture control in digital imaging
Julliand et al. Automated image splicing detection from noise estimation in raw images
Petrovic Multilevel image fusion
Zeng et al. Review of image fusion algorithms for unconstrained outdoor scenes
Laha et al. Near-infrared depth-independent image dehazing using haar wavelets
CN114463379A (en) Dynamic capturing method and device for video key points
Kour et al. A review on image processing
Zheng A channel-based color fusion technique using multispectral images for night vision enhancement
Ballabeni et al. Intensity histogram equalisation, a colour-to-grey conversion strategy improving photogrammetric reconstruction of urban architectural heritage
Goud et al. Evaluation of image fusion of multi focus images in spatial and frequency domain
Zheng An exploration of color fusion with multispectral images for night vision enhancement
Malviya et al. Wavelet based multi-focus image fusion
Aguilar-Ponce et al. Pixel-level image fusion scheme based on linear algebra
Liang et al. Image Fusion with Spatial Frequency.
Chaudhuri et al. Frequency and spatial domains adaptive-based enhancement technique for thermal infrared images
Kumari et al. Systematic Review of Image Computing Under Some Basic and Improvised Techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant