CN112258434A - Detail-preserving multi-exposure image fusion algorithm in static scene - Google Patents

Info

Publication number: CN112258434A
Application number: CN202011064707.0A
Authority: CN (China)
Prior art keywords: exposure, image, pixels, fusion, images
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 罗林欢, 梁国开, 熊国锟, 邓国豪, 刘剑, 白徐欢, 陈小倩
Current and original assignee: Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Application filed by Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd
Classifications

    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/70 Denoising; Smoothing
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a fusion algorithm for detail-preserving multi-exposure images in a static scene, which comprises the steps of: firstly, obtaining a plurality of exposure images in the static scene; inputting the exposure images into an exposure sequence in order of increasing exposure; calculating the absolute exposure weight of each pixel of each exposure image in the input sequence from its exposure parameter to obtain a weight distribution map; thresholding the pixels in the weight distribution map to retain moderately exposed pixels; and smoothing the thresholded weight distribution map with a multi-resolution fusion tool. The invention obtains the weight distribution map from the absolute exposure weight; after thresholding and correction, the image Laplacian pyramid guides the fusion process to produce the final fusion result, which retains the information of most objects and presents a good picture.

Description

Detail-preserving multi-exposure image fusion algorithm in static scene
Technical Field
The invention relates to the technical fields of image fusion technology, multi-exposure fusion technology, dynamic scene image processing and the like, in particular to a detail-preserving multi-exposure image fusion algorithm in a static scene.
Background
In recent years, outdoor line inspection and defect detection of the unmanned aerial vehicle need to be performed based on images shot by the unmanned aerial vehicle, and therefore whether as many details as possible can be recovered from the images shot by the unmanned aerial vehicle is very important for the results of intelligent inspection and detection. And because the unmanned aerial vehicle works in an outdoor natural environment, the dynamic range of an outdoor real natural scene can span about nine orders of magnitude. Therefore, under a certain exposure setting, a single low dynamic range image shot by a common digital camera cannot show the whole dynamic range of a certain natural scene, and the detailed information of a low-brightness or high-brightness area can be lost in a scene with strong brightness change. In order to improve the accuracy of intelligent routing inspection and defect detection of the unmanned aerial vehicle and capture outdoor scene details as much as possible, different exposure parameters can be used for multiple exposures of the same scene, detail information in various brightness ranges can be respectively captured, and then multiple images with different exposure parameters, which retain details in different brightness areas, are fused. This is the multi-exposure fusion technique.
The current direct multiple exposure image fusion algorithms are mainly divided into two categories: a transform domain based fusion method (referred to as a transform domain method for short) and a spatial domain based fusion method (referred to as a spatial domain method for short). The transform domain method is to transform the image sequence to a transform domain for fusion and then restore the image sequence to the original image sequence. The spatial domain method is to extract useful information of each part directly in the image domain for fusion.
However, the prior-art methods above all adopt complex algorithms and cannot guarantee good results in terms of both detail and saturation.
Disclosure of Invention
In view of the above problems in the background art, an algorithm for fusing detail-preserving multi-exposure images in a static scene is provided. The absolute exposure weight is determined from an improved Butterworth low-pass filtering curve; pixel correction is then carried out on the weight distribution map to remove irrelevant pixels and enhance the effect of moderately exposed pixels, with a threshold set to correct the absolute exposure weight; finally, the image is smoothed through a Laplacian image pyramid, yielding a detail-enhanced multi-exposure fusion image.
The invention relates to a fusion algorithm for detail-preserving multi-exposure images in a static scene, which comprises the following steps:
s1, acquiring a plurality of exposure images in a static scene;
s2 inputting the exposure images into an exposure sequence according to the ascending order of exposure;
s3, calculating the absolute exposure weight of the exposure image according to the exposure parameter of each pixel of the exposure image in the input exposure sequence to obtain a weight distribution graph;
s4, carrying out threshold processing on the pixels in the weight distribution map, and reserving pixels with proper exposure;
S5, smoothing the weight distribution map after threshold processing by using a multi-resolution fusion tool.
The invention provides an efficient multi-exposure fusion algorithm whose essence is to construct a suitable weight distribution map from the multi-exposure sequence, giving good pixels sufficient weight while eliminating useless or abrupt pixel weights. A weight distribution map is obtained from the absolute exposure weight; after thresholding and correction, the image Laplacian pyramid guides the fusion process to obtain the final fusion result. The static-scene fusion algorithm of the invention retains the information of most objects and presents good picture quality.
The exposure level refers to the degree to which an image is exposed; it may be overexposed, normally exposed or underexposed. The exposure level is also called the exposure value, which represents all camera aperture-shutter combinations that give the same exposure.
The exposure image refers to an automatic exposure image of a photographing device under normal illumination and scene conditions in the prior art. In order to select one image from a plurality of images as a reference object for combining the plurality of images, a normal exposure image may be used as a reference image, and other images such as an overexposed image and an underexposed image may be used as non-reference images.
The input exposure sequence refers to a series of images obtained by arranging a plurality of exposure images in an order of increasing exposure.
A pixel is the smallest unit of an image represented by a sequence of numbers. An image appears to have continuous tones, but if it is magnified several times, the apparently continuous tones are seen to be composed of many small square points of similar color; these small squares are the smallest units that make up the image: pixels. Each such smallest graphic element is usually displayed on screen as a single-colored dot. The higher the pixel count, the richer the palette and the more faithfully real color can be expressed.
The threshold processing is a method for realizing image segmentation in image processing, and common threshold segmentation methods include a simple threshold, an adaptive threshold and the like. The image threshold segmentation is a widely applied segmentation technology, which uses the difference of gray characteristics between a target area to be extracted from an image and a background thereof, regards the image as a combination of two types of areas (the target area and the background area) with different gray levels, and selects a reasonable threshold to determine whether each pixel point in the image belongs to the target area or the background area, thereby generating a corresponding binary image.
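As a minimal illustration of the simple-threshold idea described above (the image values and the threshold here are made up for demonstration; they are not the patent's own parameters), a grayscale image can be split into target and background regions by comparing each pixel against a single threshold:

```python
import numpy as np

# A small synthetic grayscale image, values normalized to [0, 1].
img = np.array([
    [0.05, 0.20, 0.85],
    [0.90, 0.50, 0.10],
    [0.75, 0.95, 0.30],
])

# Simple global thresholding: pixels above the threshold are assigned
# to the target region (1), the rest to the background region (0),
# producing the corresponding binary image.
threshold = 0.5
binary = (img > threshold).astype(np.uint8)

print(binary)
```

Adaptive thresholding differs only in that the threshold varies per neighborhood instead of being a single global value.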
The static scene means that all exposure images describe the same scene exactly in the whole exposure sequence, and any inconsistent objects, such as moving objects, do not exist. The only difference between the different exposed images in the exposure sequence is the exposure level, which can cause underexposure or overexposure in different areas of the scene.
The weight distribution map embodies the texture-complexity weight of each pixel in the image; image edges are generally obtained by performing gradient operations on the image.
Multi-resolution fusion is a common image processing method; the Laplacian pyramid structure adopted by the invention is widely applied to multi-scale image fusion.
The smoothing process is needed because thresholding turns the originally continuous weight distribution into discontinuous weight blocks, so the weight map must be smoothed by a multi-resolution fusion tool.
Specifically, the absolute exposure weight is calculated from a modified butterworth curve:
α_i(x, y) = 1 / (1 + ((I_i(x, y) - I_0) / I_0)^(2n))
where I_i(x, y) represents the normalized gray value of the pixel at (x, y) in the i-th exposure image of the input exposure sequence, α_i(x, y) represents the absolute exposure weight of that pixel, and I_0 and n are parameters controlling the shape of the curve; take I_0 = 0.5 and n = 1.
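The weight computation can be sketched as follows. The publication renders the exact curve as an image, so the expression used here is an assumption: the plain Butterworth low-pass form with the stated parameters I_0 = 0.5 and n = 1, and the function name is illustrative:

```python
import numpy as np

def absolute_exposure_weight(I, I0=0.5, n=1):
    """Butterworth-style weight (assumed form): peaks at 1.0 for
    mid-gray pixels (I = I0) and falls off toward under- and
    over-exposed values. I is a normalized gray-value array in [0, 1]."""
    return 1.0 / (1.0 + ((I - I0) / I0) ** (2 * n))

I = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(absolute_exposure_weight(I))  # symmetric around I = 0.5
```

With n = 1 the curve is broad; larger n would make the preference for mid-gray pixels sharper, which is the usual role of the order parameter in a Butterworth filter.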
Further, thresholding the pixels in the weight distribution map and retaining moderately exposed pixels comprises:
the threshold is set as follows:
α_i(x, y) = α_i(x, y),  if 0.1 ≤ I_i(x, y) ≤ 0.9
α_i(x, y) = 0,          otherwise
where I_i(x, y) is the normalized gray value of the pixel at (x, y) in the i-th exposure image of the input exposure sequence;
reserving moderate exposure pixels through threshold processing, wherein the moderate exposure pixels are pixels with gray values between 0.1 and 0.9; removing underexposed pixels and overexposed pixels, wherein the underexposed pixels are pixels with the gray value smaller than 0.1, and the overexposed pixels are pixels with the gray value larger than 0.9;
and normalizing the weight distribution graph to obtain the weight distribution graph with proper exposure.
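The thresholding and per-position normalization can be sketched on a toy sequence (the pixel values are made up, and the weight curve is the assumed Butterworth form from earlier):

```python
import numpy as np

def absolute_exposure_weight(I, I0=0.5, n=1):
    # Butterworth-style weight (assumed form), peaking at mid-gray.
    return 1.0 / (1.0 + ((I - I0) / I0) ** (2 * n))

# Toy input exposure sequence: 2 images of 2x2 pixels, normalized gray values.
seq = np.array([
    [[0.05, 0.40], [0.60, 0.95]],   # darker exposure
    [[0.30, 0.70], [0.92, 0.99]],   # brighter exposure
])

w = absolute_exposure_weight(seq)

# Threshold: keep only moderately exposed pixels (0.1 <= I <= 0.9);
# zero out under-exposed (< 0.1) and over-exposed (> 0.9) pixels.
w[(seq < 0.1) | (seq > 0.9)] = 0.0

# Normalize across the sequence at each pixel position so the retained
# weights sum to 1 (positions whose weight sum is 0 need the separate
# fallback described in the text).
s = w.sum(axis=0)
w_norm = np.divide(w, s, out=np.zeros_like(w), where=s > 0)

# Sums to 1.0 wherever some exposure is moderate, 0.0 elsewhere.
print(w_norm.sum(axis=0))
```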
Further, when the pixels in the weight distribution map are subjected to threshold processing, if a plurality of pixels do not belong to a moderate exposure area in the whole input exposure sequence, the sum of the exposure weights of the plurality of pixels is made to be 0, the pixels with the weight sum of 0 are marked, and the pixels with the gray value closest to 0.5 are selected as the image fusion result.
Further, the step of smoothing the weight distribution map after the threshold processing by the multi-resolution fusion tool includes:
carrying out fuzzy filtering on each exposure image in the input exposure sequence by using a Gaussian low-pass filtering template with the size of 5 multiplied by 5;
the 5 × 5 gaussian low-pass filtering template is:
(1/256) ×
[ 1  4  6  4  1 ]
[ 4 16 24 16  4 ]
[ 6 24 36 24  6 ]
[ 4 16 24 16  4 ]
[ 1  4  6  4  1 ]
1/2 downsampling the exposure image;
repeating the blur filtering and downsampling to obtain a Gaussian image pyramid whose levels are successively halved in size;
respectively up-sampling the layers of the Gaussian image pyramid, and subtracting the up-sampled l-th layer from the (l-1)-th layer to obtain a detail image, which is taken as the (l-1)-th layer of the Laplacian pyramid;
The fusion process algorithm is as follows:

L{R}^l = Σ_{k=1}^{N} G{W_k}^l · L{I_k}^l

where N is the number of exposure images in the input exposure sequence, L{R}^l is the l-th layer of the Laplacian pyramid of the fusion result, G{W_k}^l is the l-th layer of the Gaussian pyramid of the normalized weight distribution map of the k-th exposure image in the input exposure sequence, and L{I_k}^l is the l-th layer of the Laplacian pyramid of the k-th image in the input exposure sequence;
G{W_k}^l and L{I_k}^l are calculated separately for each exposure image in the input exposure sequence; the corresponding layers of the two pyramids of the same exposure image are multiplied element-wise, and the products are accumulated over the whole N-image exposure sequence to obtain the l-th layer of the Laplacian pyramid of the fusion result; the fusion result R is then recovered by the inverse of the Laplacian pyramid construction.
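The pyramid construction and fusion described above can be sketched in plain NumPy. The 5x5 template and the layer arithmetic follow the text; the image sizes, pyramid depth, function names, and the random weights here are illustrative assumptions:

```python
import numpy as np

# Standard 5x5 Gaussian low-pass template with 1/256 scaling.
K = np.array([1, 4, 6, 4, 1], dtype=float)
KERNEL = np.outer(K, K) / 256.0

def blur(img):
    # 5x5 convolution with edge replication, in pure NumPy.
    p = np.pad(img, 2, mode="edge")
    out = np.zeros_like(img)
    for dy in range(5):
        for dx in range(5):
            out += KERNEL[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def downsample(img):
    return blur(img)[::2, ::2]          # blur, then 1/2 downsample

def upsample(img, shape):
    up = np.zeros(shape)
    up[::2, ::2] = img
    return blur(up) * 4.0               # x4 restores energy lost to the zeros

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    g = gaussian_pyramid(img, levels)
    lap = [g[l] - upsample(g[l + 1], g[l].shape) for l in range(levels - 1)]
    lap.append(g[-1])                   # top levels of G and L coincide
    return lap

def fuse(images, weights, levels=3):
    # L{R}^l = sum_k G{W_k}^l * L{I_k}^l, then invert the pyramid.
    fused = None
    for img, w in zip(images, weights):
        gw = gaussian_pyramid(w, levels)
        li = laplacian_pyramid(img, levels)
        layers = [a * b for a, b in zip(gw, li)]
        fused = layers if fused is None else [f + l for f, l in zip(fused, layers)]
    # Collapse: start at the top and repeatedly upsample-and-add.
    out = fused[-1]
    for l in range(levels - 2, -1, -1):
        out = fused[l] + upsample(out, fused[l].shape)
    return out

rng = np.random.default_rng(0)
imgs = [rng.random((16, 16)) for _ in range(2)]
w0 = rng.random((16, 16))
ws = [w0, 1.0 - w0]                     # per-pixel weights summing to 1
result = fuse(imgs, ws)
print(result.shape)
```

A useful sanity check on this sketch: when all input images are identical and the weight maps sum to 1 at every pixel, the fusion must reproduce the input image, because the weighted sum of each Laplacian layer equals the layer itself and the collapse inverts the pyramid exactly.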
Further, in the fusion process, the detail image represented by the Laplacian pyramid is added to the original image with a coefficient of 0.25; to achieve a balance between information preservation and artifact removal, a 6-level Laplacian pyramid is employed.
Further, the present invention provides a readable storage medium having a control program stored thereon, characterized in that: the control program is executed by a processor to realize a detail-preserving multi-exposure image fusion algorithm in a static scene as described in any one of the above items.
Further, the present invention provides a computer control system, including a storage, a processor, and a control program stored in the storage and executable by the processor, wherein: when the processor executes the control program, the detail-preserving multi-exposure image fusion algorithm in the static scene is realized.
In order that the invention may be more clearly understood, specific embodiments thereof will be described hereinafter with reference to the accompanying drawings.
Drawings
FIG. 1 is a flowchart of a detail-preserving multi-exposure image fusion algorithm in a static scene according to an embodiment of the present invention;
fig. 2 is a comparison graph of the result of the static scene image fusion method in the outdoor scene according to the embodiment of the present invention and the result of the prior art.
Detailed Description
Please refer to fig. 1, which is a flowchart illustrating a detail-preserving multi-exposure image fusion algorithm in a static scene according to an embodiment of the present invention.
In recent years, many experts and scholars have conducted intensive research into multi-exposure image algorithms. The prior art provides a Laplacian-pyramid-based multi-exposure image fusion algorithm that takes contrast, saturation and exposure fitness as weight measurement factors, but it easily loses local detail information. A generalized random-walk multi-exposure image fusion method based on a probability model has also been proposed, which likewise tends to lose local detail. The prior art further includes a multi-exposure image fusion algorithm based on guided filtering, in which the image is divided into a global layer and a detail layer and guided filtering is used to construct the weights of each layer. The present application addresses the problem of detail preservation when fusing multi-exposure image sequences of a static scene, and therefore studies a detail-preserving multi-exposure image fusion algorithm in a static scene.
The invention relates to a fusion algorithm for detail-preserving multi-exposure images in a static scene, which comprises the following steps:
s1, acquiring a plurality of exposure images in a static scene;
s2 inputting the exposure images into an exposure sequence according to the ascending order of exposure;
s3, calculating the absolute exposure weight of the exposure image according to the exposure parameter of each pixel of the exposure image in the input exposure sequence to obtain a weight distribution graph;
s4, carrying out threshold processing on the pixels in the weight distribution map, and reserving pixels with proper exposure;
S5, smoothing the weight distribution map after threshold processing by using a multi-resolution fusion tool.
The invention provides an efficient multi-exposure fusion algorithm: a Butterworth low-pass filter curve is first improved to determine the absolute exposure weight; a threshold is then set to eliminate irrelevant pixels and enhance the effect of moderately exposed pixels, and the absolute exposure weight is corrected by this threshold. Finally, a six-level Laplacian pyramid is used to fuse the images, yielding a detail-enhanced fused image. Experimental results show that the algorithm is simple and effective.
Image edge processing is generally implemented by performing gradient operations on an image. Since different objects have different texture detail features and therefore have different gradient information, if different object contents are described near two pixels, their gradient directions will be obviously different. According to the relation, the degree of difference and similarity of the pixel contents can be judged according to the size of the included angle in the gradient direction.
The absolute exposure weight (AEW) is the weight each pixel of an image computes from its own exposure parameter. The pixel values at the same position are weighted and summed using the normalized weights to obtain the final undistorted fused image. The absolute exposure weight of the invention is calculated by an improved Butterworth curve:
α_i(x, y) = 1 / (1 + ((I_i(x, y) - I_0) / I_0)^(2n))
where I_i(x, y) represents the normalized gray value of the pixel at (x, y) in the i-th exposure image of the input exposure sequence, α_i(x, y) represents the absolute exposure weight of that pixel, and I_0 and n are parameters controlling the shape of the curve; in the embodiment of the invention, I_0 = 0.5 and n = 1.
Thresholding the pixels in the weight distribution map and retaining moderately exposed pixels comprises:
to exclude extraneous pixels and enhance the effect of moderately exposed pixels, the threshold is set as follows:
α_i(x, y) = α_i(x, y),  if 0.1 ≤ I_i(x, y) ≤ 0.9
α_i(x, y) = 0,          otherwise
where I_i(x, y) is the normalized gray value of the pixel at (x, y) in the i-th exposure image of the input exposure sequence;
Reserving moderate exposure pixels through threshold processing, wherein the moderate exposure pixels are pixels with gray values between 0.1 and 0.9; removing underexposed pixels and overexposed pixels, wherein the underexposed pixels are pixels with the gray value smaller than 0.1, and the overexposed pixels are pixels with the gray value larger than 0.9;
and normalizing the weight distribution graph to obtain the weight distribution graph with proper exposure.
If a plurality of pixels do not belong to the moderate exposure area in the whole input exposure sequence when the pixels in the weight distribution map are subjected to threshold processing, the information of the pixels can be lost in the result after the threshold processing. In this case, the sum of the pixel weights is 0. Therefore, the pixels can be found by utilizing the rule and the pixel with the gray value closest to 0.5 is selected to be directly used as the final fusion result.
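The fallback rule can be sketched as follows (toy pixel values; the 0/1 mask stands in for any thresholded weight curve, and the variable names are illustrative):

```python
import numpy as np

# Toy sequence: 3 exposures of a 1x3 image, normalized gray values.
seq = np.array([
    [[0.02, 0.30, 0.95]],
    [[0.05, 0.60, 0.97]],
    [[0.08, 0.80, 0.99]],
])

# Thresholded weights: 1 for moderately exposed pixels, 0 otherwise.
w = ((seq >= 0.1) & (seq <= 0.9)).astype(float)

# Mark positions where no exposure in the whole sequence is moderate,
# i.e. the weight sum across the sequence is 0 ...
bad = w.sum(axis=0) == 0

# ... and at those positions pick, across the sequence, the pixel whose
# gray value is closest to 0.5 as the direct fusion result.
best = np.abs(seq - 0.5).argmin(axis=0)
fallback = np.take_along_axis(seq, best[None], axis=0)[0]

print(bad)
print(fallback)
```

Here the leftmost position is under-exposed in every frame and the rightmost is over-exposed in every frame, so both are marked and filled from the frame closest to mid-gray.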
Thresholding can change an otherwise continuous weight distribution into discrete weight blocks, which therefore need to be smoothed by a multi-resolution fusion tool. The embodiment of the invention adopts the standard image Laplacian pyramid in the fusion process; the Laplacian pyramid can be used to seamlessly fuse images. The smoothing steps comprise:
carrying out fuzzy filtering on each exposure image in the input exposure sequence by using a Gaussian low-pass filtering template with the size of 5 multiplied by 5;
the 5 × 5 gaussian low-pass filtering template is:
(1/256) ×
[ 1  4  6  4  1 ]
[ 4 16 24 16  4 ]
[ 6 24 36 24  6 ]
[ 4 16 24 16  4 ]
[ 1  4  6  4  1 ]
1/2 down-sampling the exposure image;
repeating the fuzzy filtering and down-sampling processes until a Gaussian image pyramid with the size reduced by half in sequence is obtained;
respectively up-sampling the layers of the Gaussian image pyramid, and subtracting the up-sampled l-th layer from the (l-1)-th layer to obtain a detail image, which is taken as the (l-1)-th layer of the Laplacian pyramid. For the same image, the top-level images of the Gaussian pyramid and the Laplacian pyramid are the same. A Gaussian pyramid of each weight image can be constructed from the normalized weight distribution map controlled by the absolute exposure weight; the Laplacian pyramid is computed from each input image.
The fusion process algorithm is as follows:

L{R}^l = Σ_{k=1}^{N} G{W_k}^l · L{I_k}^l

where N is the number of exposure images in the input exposure sequence, L{R}^l is the l-th layer of the Laplacian pyramid of the fusion result, G{W_k}^l is the l-th layer of the Gaussian pyramid of the normalized weight distribution map of the k-th exposure image in the input exposure sequence, and L{I_k}^l is the l-th layer of the Laplacian pyramid of the k-th image in the input exposure sequence;
G{W_k}^l and L{I_k}^l are calculated separately for each exposure image in the input exposure sequence; the corresponding layers of the two pyramids of the same exposure image are multiplied element-wise, and the products are accumulated over the whole N-image exposure sequence to obtain the l-th layer of the Laplacian pyramid of the fusion result; the fusion result R is then recovered by the inverse of the Laplacian pyramid construction to obtain the final fusion result.
To improve the detail enhancement effect, in the fusion process the detail image represented by the Laplacian pyramid is added to the original image with a coefficient of 0.25; if the coefficient is too low, the enhancement effect is weak; if it is too high, the image is distorted and loses its natural appearance.
By changing the number of pyramid layers, it can be found that some information may be lost if the number of pyramid layers is too large, and obvious halo artifacts may be caused by incomplete smoothing if the number of pyramid layers is too small. Through contrast experiments, the embodiment of the invention adopts 6 layers of Laplacian pyramids, and achieves relative balance between information retention and artifact elimination.
Please refer to fig. 2, which is a comparison graph of the result of the static scene image fusion method according to the embodiment of the present invention in the outdoor scene compared with the prior art.
The leftmost column of fig. 2 is an input exposure sequence, and the four images on the right side are the image fusion results of the three prior arts and the present scheme, respectively.
Fig. 2a shows a low complexity detail-preserving algorithm based on the pixel domain in the prior art. FIG. 2b is a block blending method in the prior art, in which pixel blocks with the same size and the maximum grayscale entropy are selected from an input sequence, and the best pixel blocks at different positions are blended to form a result image. Fig. 2c is a multi-scale image enhancement technique in the prior art, which improves the information retention of bright and dark areas by using a smoothed weight distribution gaussian pyramid.
FIG. 2d shows the fusion result of the embodiment of the present invention.
Although the result in fig. 2a retains all details, the overall picture effect is dark due to the interference of underexposed and overexposed pixel weights; FIG. 2b and FIG. 2c show similar results, with good visual perception, but losing detail information of the tower, wherein the result of FIG. 2b shows a slightly worse appearance in the left side of the brighter cloud; the result of fig. 2d of the present invention, however, can preserve more details such as the tower and the brighter cloud portion while ensuring good color saturation.
The method of the invention can fully cope with overexposure and underexposure interference in static-scene image fusion while reducing the impact on detail retention to a minimum.
Compared with the prior art, the key of the invention lies in the absolute exposure weight algorithm, how the absolute exposure weight is thresholded and normalized, and how pixels whose weight sum is 0 are found and their direct fusion result selected. The invention provides an efficient multi-exposure fusion algorithm that constructs a suitable weight distribution map from the multi-exposure sequence, giving good pixels sufficient weight while eliminating useless or abrupt pixel weights. A weight distribution map is obtained from the absolute exposure weight, and after thresholding and correction the image Laplacian pyramid guides the fusion process to obtain the final fusion result. The static-scene fusion algorithm of the invention retains the information of most objects and presents good picture quality. It achieves a relative balance between information retention and artifact elimination; its thresholding of the static areas of the exposure images is more accurate and reliable than the prior art, and it is more scientific and efficient than the prior-art practice of manually selecting a reference image. For the multi-resolution fusion tool, the mature Laplacian pyramid fusion technique is adopted, with a six-level Laplacian pyramid preferred after comparative experiments, which improves the detail enhancement effect while avoiding image distortion and similar problems.
The present invention is not limited to the above-described embodiments, and various modifications and variations of the present invention are included in the scope of the claims and the equivalent technology of the present invention if they do not depart from the spirit and scope of the present invention.

Claims (8)

1. A detail-preserving multi-exposure image fusion algorithm in a static scene comprises the following steps:
acquiring a plurality of exposure images in a static scene;
inputting the exposure images into an exposure sequence according to the increasing order of the exposure;
calculating absolute exposure weight of the exposure image according to the exposure parameter of each pixel of the exposure image in the input exposure sequence to obtain a weight distribution map;
carrying out threshold processing on pixels in the weight distribution map, and reserving pixels with proper exposure;
and smoothing the weight distribution graph after threshold processing by using a multi-resolution fusion tool.
2. The detail-preserving multi-exposure image fusion algorithm in the static scene according to claim 1, characterized in that: the absolute exposure weight is calculated from the modified butterworth curve:
α_i(x, y) = 1 / (1 + ((I_i(x, y) - I_0) / I_0)^(2n))
where I_i(x, y) represents the normalized gray value of the pixel at (x, y) in the i-th exposure image of the input exposure sequence, α_i(x, y) represents the absolute exposure weight of that pixel, and I_0 and n are parameters controlling the shape of the curve; take I_0 = 0.5 and n = 1.
3. The detail-preserving multi-exposure image fusion algorithm in the static scene according to claim 1, wherein the thresholding is performed on the pixels in the weight distribution map, and the step of retaining the moderate exposure pixels comprises:
the threshold is set as follows:
α_i(x, y) = α_i(x, y),  if 0.1 ≤ I_i(x, y) ≤ 0.9
α_i(x, y) = 0,          otherwise
where I_i(x, y) is the normalized gray value of the pixel at (x, y) in the i-th exposure image of the input exposure sequence;
reserving moderate exposure pixels through threshold processing, wherein the moderate exposure pixels are pixels with gray values between 0.1 and 0.9; removing underexposed pixels and overexposed pixels, wherein the underexposed pixels are pixels with the gray value smaller than 0.1, and the overexposed pixels are pixels with the gray value larger than 0.9;
and normalizing the weight distribution graph to obtain the weight distribution graph with proper exposure.
4. The detail-preserving multi-exposure image fusion algorithm in the static scene according to claim 1, characterized in that: if a plurality of pixels do not belong to the moderate exposure area in the whole input exposure sequence when the pixels in the weight distribution graph are subjected to threshold processing, the sum of the exposure weights of the plurality of pixels is made to be 0, the pixels with the weight sum of 0 are marked, and the pixels with the gray value closest to 0.5 are selected as the image fusion result.
5. The detail-preserving multi-exposure image fusion algorithm in the static scene according to claim 1, characterized in that: the step of smoothing the weight distribution map after threshold processing by the multi-resolution fusion tool comprises the following steps:
carrying out fuzzy filtering on each exposure image in the input exposure sequence by using a Gaussian low-pass filtering template with the size of 5 multiplied by 5;
the 5 × 5 gaussian low-pass filtering template is:
(1/256) ×
[ 1  4  6  4  1 ]
[ 4 16 24 16  4 ]
[ 6 24 36 24  6 ]
[ 4 16 24 16  4 ]
[ 1  4  6  4  1 ]
1/2 downsampling the exposure image;
repeating the blur filtering and downsampling to obtain a Gaussian image pyramid whose levels are successively halved in size;
respectively up-sampling the layers of the Gaussian image pyramid, and subtracting the up-sampled l-th layer from the (l-1)-th layer to obtain a detail image, which is taken as the (l-1)-th layer of the Laplacian pyramid;
the fusion process being:

L{R}_l = Σ_{k=1}^{N} G{W}_l^k · L{I}_l^k

where N is the number of exposure images in the input exposure sequence, L{R}_l is level l of the Laplacian pyramid of the fusion result, G{W}_l^k is level l of the Gaussian pyramid of the normalized weight map of the k-th exposure image in the input exposure sequence, and L{I}_l^k is level l of the Laplacian pyramid of the k-th image in the input exposure sequence;
G{W}_l^k and L{I}_l^k are computed separately for each exposure image in the input exposure sequence; corresponding levels of the two pyramids of the same exposure image are multiplied, and the products are accumulated over all N images of the exposure sequence to obtain level l of the Laplacian pyramid of the fusion result; the fusion result R is then recovered by the inverse of the Laplacian pyramid construction.
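The pyramid construction and weighted merge of claim 5 can be sketched as below. The binomial 5 × 5 kernel, the use of `scipy.ndimage.convolve` with reflective borders, and all function names are assumptions of this illustration, not statements of the patented implementation.

```python
import numpy as np
from scipy import ndimage

# 5x5 Gaussian template: outer product of the binomial filter [1 4 6 4 1]/16
KERNEL = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0

def _down(img):
    # blur with the 5x5 template, then drop every other row and column
    return ndimage.convolve(img, KERNEL, mode='reflect')[::2, ::2]

def _up(img, shape):
    # insert zeros, then blur with 4x the template to interpolate
    out = np.zeros(shape)
    out[::2, ::2] = img
    return ndimage.convolve(out, 4 * KERNEL, mode='reflect')

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(_down(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    gauss = gaussian_pyramid(img, levels)
    # detail at level l = Gaussian level l minus upsampled level l+1
    pyr = [gauss[l] - _up(gauss[l + 1], gauss[l].shape)
           for l in range(levels - 1)]
    pyr.append(gauss[-1])            # coarsest level kept as-is
    return pyr

def fuse(images, weights, levels):
    """L{R}_l = sum_k G{W^k}_l * L{I^k}_l, then collapse the pyramid."""
    fused = None
    for img, w in zip(images, weights):
        gw = gaussian_pyramid(w, levels)
        li = laplacian_pyramid(img, levels)
        layers = [gw[l] * li[l] for l in range(levels)]
        fused = layers if fused is None else [f + x for f, x in zip(fused, layers)]
    # recover R by the inverse of the Laplacian pyramid construction
    result = fused[-1]
    for l in range(levels - 2, -1, -1):
        result = fused[l] + _up(result, fused[l].shape)
    return result
```

With a single image and an all-ones weight map, `fuse` reduces to building and then collapsing a Laplacian pyramid, which reproduces the input — a quick sanity check on the construction.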
6. The detail-preserving multi-exposure image fusion algorithm in a static scene according to claim 5, characterized in that during fusion the detail images represented by the Laplacian pyramid are added back to the original image scaled by a factor of 0.25; to balance information preservation against artifact removal, a 6-level Laplacian pyramid is employed.
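A minimal single-level stand-in for the 0.25× detail boost of claim 6, isolating the detail layer with a Gaussian blur instead of the full 6-level Laplacian pyramid; the `sigma` value and the clipping range are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage

def add_detail(img, alpha=0.25):
    """Add back high-frequency detail scaled by alpha (claim 6's 0.25 factor).

    Single-level approximation of the per-level Laplacian boost: the
    detail layer is the image minus its Gaussian-blurred version.
    """
    blurred = ndimage.gaussian_filter(img, sigma=1.0)  # low-pass estimate
    detail = img - blurred                             # high-frequency residue
    return np.clip(img + alpha * detail, 0.0, 1.0)     # keep the [0, 1] range
```

A flat image has no detail layer, so it passes through unchanged, while edges in a varying image are sharpened by the alpha-scaled residue.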
7. A readable storage medium having a control program stored thereon, characterized in that: the control program, when executed by a processor, implements the detail-preserving multi-exposure image fusion algorithm in a static scene according to any one of claims 1 to 6.
8. A computer control system comprising a memory, a processor, and a control program stored in the memory and executable by the processor, characterized in that: the processor, when executing the control program, implements the detail-preserving multi-exposure image fusion algorithm in a static scene according to any one of claims 1 to 6.
CN202011064707.0A 2020-09-30 2020-09-30 Detail-preserving multi-exposure image fusion algorithm in static scene Pending CN112258434A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011064707.0A CN112258434A (en) 2020-09-30 2020-09-30 Detail-preserving multi-exposure image fusion algorithm in static scene


Publications (1)

Publication Number Publication Date
CN112258434A true CN112258434A (en) 2021-01-22

Family

ID=74233543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011064707.0A Pending CN112258434A (en) 2020-09-30 2020-09-30 Detail-preserving multi-exposure image fusion algorithm in static scene

Country Status (1)

Country Link
CN (1) CN112258434A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113092489A (en) * 2021-05-20 2021-07-09 鲸朵(上海)智能科技有限公司 System and method for detecting appearance defects of battery

Citations (4)

Publication number Priority date Publication date Assignee Title
US20160191158A1 (en) * 2012-12-27 2016-06-30 Panasonic Intellectual Property Corporation Of America Transmitting method, transmitting apparatus, and program
CN107845128A (en) * 2017-11-03 2018-03-27 安康学院 A kind of more exposure high-dynamics image method for reconstructing of multiple dimensioned details fusion
CN109754377A (en) * 2018-12-29 2019-05-14 重庆邮电大学 A kind of more exposure image fusion methods
US20200265567A1 (en) * 2019-02-18 2020-08-20 Samsung Electronics Co., Ltd. Techniques for convolutional neural network-based multi-exposure fusion of multiple image frames and for deblurring multiple image frames


Non-Patent Citations (2)

Title
HAYAT NAILA 等: "Ghost-free multi exposure image fusion technique using dense SIFT descriptor and guided filter", 《JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION》, 31 December 2019 (2019-12-31), pages 295 - 308 *
QIAO Naosheng et al.: "Image denoising by fusing generalized total variation and Butterworth high-pass filtering", Computer Engineering and Applications, vol. 50, no. 19, 5 May 2014 (2014-05-05), pages 20 - 22 *


Similar Documents

Publication Publication Date Title
US11055827B2 (en) Image processing apparatus and method
Galdran Image dehazing by artificial multiple-exposure image fusion
US10666873B2 (en) Exposure-related intensity transformation
US11361459B2 (en) Method, device and non-transitory computer storage medium for processing image
CN110599433B (en) Double-exposure image fusion method based on dynamic scene
US20170365046A1 (en) Algorithm and device for image processing
Gallo et al. Artifact-free high dynamic range imaging
CN112785534A (en) Ghost-removing multi-exposure image fusion method in dynamic scene
Gao et al. Single image dehazing via self-constructing image fusion
Mondal et al. Image dehazing by joint estimation of transmittance and airlight using bi-directional consistency loss minimized FCN
CN113039576A (en) Image enhancement system and method
Steffens et al. Cnn based image restoration: Adjusting ill-exposed srgb images in post-processing
CN115953321A (en) Low-illumination image enhancement method based on zero-time learning
Bhukhanwala et al. Automated global enhancement of digitized photographs
CN110580696A (en) Multi-exposure image fast fusion method for detail preservation
CN115063331A (en) No-ghost multi-exposure image fusion algorithm based on multi-scale block LBP operator
CN112258434A (en) Detail-preserving multi-exposure image fusion algorithm in static scene
CN117391987A (en) Dim light image processing method based on multi-stage joint enhancement mechanism
CN114998173B (en) Space environment high dynamic range imaging method based on local area brightness adjustment
CN111105350A (en) Real-time video splicing method based on self homography transformation under large parallax scene
CN114862698B (en) Channel-guided real overexposure image correction method and device
Wang et al. An exposure fusion approach without ghost for dynamic scenes
US11625886B2 (en) Storage medium storing program, training method of machine learning model, and image generating apparatus
CN114339064A (en) Bayesian optimization exposure control method based on entropy weight image gradient
Gopika et al. Visibility enhancement of hazy image using depth estimation concept

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination