CN112381724A - Image wide dynamic enhancement method based on multi-exposure fusion framework - Google Patents

Image wide dynamic enhancement method based on multi-exposure fusion framework

Info

Publication number
CN112381724A
CN112381724A
Authority
CN
China
Prior art keywords
image
over
tolight
todark
brightness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011109462.9A
Other languages
Chinese (zh)
Inventor
魏紫薇
徐亮
午建军
蒋鑫
闫川洋子
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XiAn Institute of Optics and Precision Mechanics of CAS
Original Assignee
XiAn Institute of Optics and Precision Mechanics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by XiAn Institute of Optics and Precision Mechanics of CAS
Priority to CN202011109462.9A
Publication of CN112381724A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/70
    • G06T 5/90
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image enhancement method, and provides an image wide dynamic enhancement method based on a multi-exposure fusion framework. It addresses the technical problem in image enhancement, and wide dynamic enhancement in particular, that when over-dark and over-bright conditions exist in an image at the same time, existing methods cannot raise the brightness of the over-dark areas while also lowering the brightness of the over-bright areas.

Description

Image wide dynamic enhancement method based on multi-exposure fusion framework
Technical Field
The invention relates to an image enhancement method, in particular to an image wide dynamic enhancement method based on a multi-exposure fusion framework.
Background
In many outdoor scenes, the camera cannot guarantee that all pixels are well exposed because of its limited dynamic range, so image enhancement technology is widely applied in image processing; it generally gives the input image a better visual effect and makes it better suited to a specific algorithm.
Wide dynamic enhancement in particular, as an enhancement technique, can both reveal the information of low-illumination areas in an image and attenuate excessively high brightness. Li M., Liu J., Yang W., et al., in "Structure-Revealing Low-Light Image Enhancement via Robust Retinex Model" (IEEE Transactions on Image Processing), propose a low-illumination image contrast enhancement method that can raise the brightness of over-dark areas while maintaining the contrast of originally well-exposed areas. However, when over-bright areas are also present in the image, the brightness of those areas is increased as well after applying the method, which can over-enhance them. The difficulty of wide dynamic image enhancement therefore lies in how to improve image quality when over-dark and over-bright conditions coexist in the image, ensuring that the brightness of the over-dark areas is raised while the brightness of the over-bright areas is lowered; the existing method and similar methods in the literature have difficulty distinguishing the precise extents of the over-dark and over-bright areas and meeting this requirement of wide dynamic range enhancement.
Disclosure of Invention
The invention provides an image wide dynamic enhancement method based on a multi-exposure fusion framework, aiming to solve the technical problem that, in image enhancement technology and wide dynamic enhancement in particular, when over-dark and over-bright conditions exist in an image at the same time, the brightness of the over-bright areas cannot be lowered while the brightness of the over-dark areas is raised.
To achieve the above purpose, the invention provides the following technical solution:
An image wide dynamic enhancement method based on a multi-exposure fusion framework, characterized by comprising the following steps:
S1, weight matrix estimation in image fusion
S1.1, determining the luminance components of the over-dark and over-bright regions respectively
The luminance component L_tolight(x) of the over-dark region is:
L_tolight(x) = max_{c∈{R,G,B}} P_c(x)
where P_c(x) denotes the gray values of the image pixels in channel c, and x denotes a pixel;
The luminance component L_todark(x) of the over-bright region is:
L_todark(x) = min_{c∈{R,G,B}} P_c(x)
S1.2, obtaining the window weight matrix
The window weight matrix M_d(x) is calculated as:
M_d(x) = 1 / (|Σ_{y∈w(x)} ∇_d L(y)| + ε), d ∈ {h, v}
where w(x) denotes a local window centered on pixel point x, h denotes the horizontal direction, v denotes the vertical direction, ε is a denominator compensation constant, and L(y) is the pixel value of the luminance-component image at pixel point y;
S1.3, obtaining the optimized scene illumination
Substitute the luminance component L_tolight(x) of the over-dark region and the luminance component L_todark(x) of the over-bright region in turn for L(x) in the following formula, and record the corresponding results T(x) as the optimized scene illumination T_tolight(x) of the over-dark region and the optimized scene illumination T_todark(x) of the over-bright region, respectively:
argmin_T ||T − L||_2^2 + λ||M ∘ ∇T||_1
where T(x) is the scene illumination at pixel point x, T_tolight(x) is the scene illumination at pixel point x in the over-dark region, T_todark(x) is the scene illumination at pixel point x in the over-bright region, M is a weight matrix, λ is a coefficient, and ∘ denotes element-wise multiplication;
S1.4, solving the scene illumination T
Solve the following linear function to obtain the scene illumination T:
(I + Σ_{d∈{h,v}} D_d^T Diag(m_d ⊘ (|D_d l| + ε)) D_d) t = l
where I is the identity matrix; m_d is the vectorization result of M_d(x); l and t are the vectorization results of L and T, respectively; ⊘ denotes element-wise division; Diag constructs a diagonal matrix from a vector; and D_d is the Toeplitz matrix obtained from the discrete gradient operator with forward differences;
S1.5, solving the weight matrices
Substitute the optimized scene illumination T_tolight(x) of the over-dark region and the optimized scene illumination T_todark(x) of the over-bright region in turn for T in the following formula; the corresponding W are the over-dark region weight matrix W_tolight and the over-bright region weight matrix W_todark, respectively:
W = T^μ
where μ is the enhancement degree, taking a value of 1/4 to 1/2;
S2, brightness transformation function and optimal exposure ratio of the image
S2.1, determining the brightness transformation function g
According to the Beta-Gamma-corrected camera response model, the brightness transformation function is obtained as:
g(P, k) = βP^γ = e^(b(1−k^a)) P^(k^a)
where P is the input image, a and b are respectively the first and second fixed parameters of the Beta-Gamma-corrected camera response model, and β and γ are respectively the first and second model parameters calculated from the first fixed parameter a, the second fixed parameter b and the exposure ratio k;
S2.2, determining the optimal exposure ratios
S2.2.1, remove part of the pixels in the image to obtain a dark image, and extract the low-illumination pixels Q_tolight from it:
Q_tolight = {P(x) | T_tolight(x) < 0.5}
where P(x) denotes the input image;
S2.2.2, remove part of the pixels in the image to obtain a bright image, and extract the high-illumination pixels Q_todark from it:
Q_todark = {P(x) | T_todark(x) > 0.7}
S2.2.3, determining the low-illumination pixel brightness component B_tolight and the high-illumination pixel brightness component B_todark
The brightness component B is set to:
B = (Q_r ∘ Q_g ∘ Q_b)^(1/3)
where Q_r, Q_g and Q_b respectively denote the three color channels of the image; substituting the three color channels corresponding to the low-illumination pixels Q_tolight and to the high-illumination pixels Q_todark in turn yields the low-illumination pixel brightness component B_tolight and the high-illumination pixel brightness component B_todark;
S2.2.4, the image entropy H(B) is set to:
H(B) = −Σ_{i=1}^{N} p_i log2 p_i
where p_i is the i-th bin of the histogram of the brightness component B, and N denotes the value range of the horizontal axis of that histogram;
S2.2.5, calculating the optimal exposure ratio by the following formula:
k̂ = argmax_k H(g(B, k))
Substitute the low-illumination pixel brightness component B_tolight and the high-illumination pixel brightness component B_todark in turn for P in the transformation function g(P, k) obtained in step S2.1; maximizing H(B) from step S2.2.4 yields the optimal exposure ratio k_1 corresponding to the low-illumination pixels Q_tolight and the optimal exposure ratio k_2 corresponding to the high-illumination pixels Q_todark;
S3, image enhancement processing
S3.1, obtaining the image P with the brightness of the image over-dark area improved through the following formula1
Figure BDA0002728098800000044
Wherein c is an index of three color channels;
S3.2, obtain the image P_2 with the brightness of the over-bright region of the image lowered through the following formula:
P_2^c = (1 − W_todark) ∘ P^c + W_todark ∘ g(P^c, k_2)
S3.3, obtain the final enhanced result image R through a formula combining P_1 and P_2:
[the equation image combining P_1 and P_2 into R is not recoverable from the source]
Further, in step S1.4, the linear function is solved to obtain the scene illumination T by using a multi-resolution preconditioned conjugate gradient solver of O(N) complexity for the optimization.
Further, step S1.3 specifically comprises first down-sampling the image, then substituting the luminance component L_tolight(x) of the over-dark region and the luminance component L_todark(x) of the over-bright region in turn for L(x) in the optimization formula of step S1.3 to obtain the corresponding optimized scene illumination, recorded respectively as T_tolight(x) and T_todark(x), and finally restoring the scene illumination to the original size through up-sampling.
Further, in step S1.5, the value of μ is 1/2.
Further, in step S1.2, the size of the local window w(x) is 5.
Further, in step S2.2.5, the optimal exposure ratio k̂ is calculated by a one-dimensional minimizer.
Further, in step S2.2.5, the image is resized to 50 × 50 pixels before performing the one-dimensional minimization calculation.
Further, in step S2.1, the value of the first fixed parameter a is-0.3293, and the value of the second fixed parameter b is 1.1258.
Compared with the prior art, the invention has the following beneficial effects:
1. The image wide dynamic enhancement method based on a multi-exposure fusion framework designs the weight matrices for image fusion with an illumination estimation technique and synthesizes multi-exposure images with a camera response model. By searching for the optimal exposure ratios, it raises the brightness of the over-dark areas and lowers the brightness of the over-bright areas, and it finally fuses the input image with the multi-exposure images according to the weight matrices to obtain the final enhanced result image. Addressing the over-enhancement of some image areas, or the insufficient contrast after enhancement, that occurs when over-dark and over-bright conditions exist in an image at the same time, the method designs separate weight matrices and exposure ratios for the two conditions. It raises the brightness of the over-dark areas and lowers the brightness of the over-bright areas while keeping the contrast of originally well-exposed areas, achieves a better enhancement effect, and is verified to reduce the degree of image brightness distortion compared with existing methods, which facilitates subsequent image processing algorithms.
2. When solving the scene illumination, a multi-resolution preconditioned conjugate gradient solver of O(N) complexity is adopted, making the algorithm more efficient.
3. When solving the optimized scene illumination, the image is first down-sampled and, after the solution, restored to the original size through up-sampling; the enhanced image obtained with down-sampling shows no visual difference, while the computational efficiency is greatly improved.
4. When the optimal exposure ratio is calculated through the one-dimensional minimizer, the image is resized to 50 × 50 pixels, which effectively improves the computational efficiency.
Detailed Description
The technical solutions of the present invention will be described clearly and completely below with reference to embodiments of the present invention; it should be understood that the described embodiments are not intended to limit the present invention.
The invention provides an image wide dynamic enhancement method based on a multi-exposure fusion framework, which mainly comprises four processes: weight matrix estimation in image fusion, determination of the brightness transformation function, determination of the image exposure ratios, and multi-exposure image fusion.
(1) Weight matrix estimation in image fusion
Multi-exposure fusion fuses images of different exposure degrees to achieve image enhancement. To ensure that the contrast of the over-dark and over-bright areas is enhanced while the contrast of originally well-exposed areas is still kept, the weights of the differently exposed images in the fusion framework must be designed. In general, the weight matrix is positively correlated with the scene illumination, so it can be computed by estimating the scene illumination. The image luminance component values are calculated separately for the over-dark and over-bright cases, the luminance components are adopted as the initial estimates of the scene illumination, and the scene illumination is finally corrected by solving an optimization problem.
Pixels in well exposed areas are assigned larger weight values and pixels in too dark and too bright areas are assigned smaller weight values. Since high-illumination areas are more likely to be well-exposed areas, they should be assigned a larger weight value to maintain their contrast. The weight matrix W is defined as follows:
W = T^μ (1)
where T is the scene illumination and μ represents the degree of enhancement. To retain well-exposed areas after image enhancement, μ can range from 1/4 to 1/2 and is set here to 1/2.
The scene illumination T is estimated by an optimization method; since the luminance component can be used as an estimate of the scene illumination, it serves as the initial estimate. The luminance-component calculation differs between the two cases, raising the brightness of the over-dark areas and lowering the brightness of the over-bright areas, but the subsequent optimization of the scene illumination map is the same.
When the brightness of the over-dark area is to be raised, the luminance component L_tolight(x) of the over-dark area is defined as:
L_tolight(x) = max_{c∈{R,G,B}} P_c(x) (2)
When the brightness of the over-bright area is to be lowered, the luminance component L_todark(x) of the over-bright area is:
L_todark(x) = min_{c∈{R,G,B}} P_c(x) (3)
where x is a pixel point and P_c(x) is the gray value of the image at pixel x in channel c. The ideal illumination should remain locally consistent in regions with similar structure; that is, the scene illumination T should preserve the dominant structure of the image and remove texture edges. The estimate of T is corrected by solving the following optimization problem, as shown in the following equation:
argmin_T ||T − L||_2^2 + λ||M ∘ ∇T||_1 (4)
where ||·||_2 and ||·||_1 are the l2 and l1 norms respectively, the first-derivative filter ∇ contains the horizontal component ∇_h and the vertical component ∇_v, ∘ denotes element-wise multiplication, M is a weight matrix, and λ is a coefficient, preferably taken as λ = 1. The first term in the above equation minimizes the error between the initial map L and the corrected map T, and the second term is a smoothing term.
The design of the weight matrix M is important for correcting the scene illumination. Compared with complex textures inside a window, the edges in a local window have similar directional gradients; therefore the weight of a window containing a major edge should be smaller than that of a window containing only texture. The window weight matrix M_d(x) is finally designed as:
M_d(x) = 1 / (|Σ_{y∈w(x)} ∇_d L(y)| + ε), d ∈ {h, v} (5)
where |·| is the absolute-value operator, w(x) denotes a local window centered on pixel x, ε is a denominator compensation constant, a very small constant avoiding division by zero, preferably ε = 0.001, the window size of w(x) is 5, L(y) is the pixel value of the luminance-component image at pixel point y, h denotes the horizontal direction, and v the vertical direction.
To reduce the computational complexity, equation (4) is approximated as:
Σ_x ((T(x) − L(x))^2 + λ Σ_{d∈{h,v}} M_d(x)(∇_d T(x))^2 / (|∇_d L(x)| + ε)) (6)
Since λ = 1, it can be hidden in equation (6), and equation (6) involves only quadratic terms. Let m_d, l and t represent the vectorization results of M_d, L and T, respectively; then T can be obtained directly by solving the following linear function:
(I + Σ_{d∈{h,v}} D_d^T Diag(m_d ⊘ (|D_d l| + ε)) D_d) t = l (7)
where ⊘ denotes element-by-element division, I is the identity matrix, the operator Diag constructs a diagonal matrix from a vector, and D_d is the Toeplitz matrix obtained from the discrete gradient operator with forward differences.
Since the above scene illumination optimization is time-consuming, a multi-resolution preconditioned conjugate gradient solver of O(N) complexity can be used as the preferred way to solve the optimization problem, making the algorithm more efficient. In addition, to accelerate the algorithm further, the input image is first down-sampled, T is solved, and the scene illumination is then restored to the original size through up-sampling; the enhanced image obtained with down-sampling shows no visual difference, while the computational efficiency is greatly improved.
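A sketch of the linear solve in equation (7), under the assumption that SciPy's sparse machinery is acceptable; a direct sparse solver stands in here for the multi-resolution preconditioned conjugate gradient solver preferred above, and the zero-padded last row/column of the difference operators is this sketch's boundary choice.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def refine_illumination(L, M_h, M_v, eps=1e-3):
    """Solve (I + sum_d D_d^T Diag(m_d / (|D_d l| + eps)) D_d) t = l (eq. 7)
    for the refined illumination T, with lambda = 1 absorbed."""
    h, w = L.shape
    n = h * w
    l = L.flatten()                     # row-major vectorization of L

    def grad_op(axis):
        # sparse forward-difference operator; the last row of the 1-D stencil
        # keeps only -1, i.e. a zero-padded boundary (sketch choice)
        if axis == 1:   # horizontal differences act within image rows
            D = -sp.eye(w, format='csr') + sp.eye(w, k=1, format='csr')
            return sp.kron(sp.eye(h), D, format='csr')
        D = -sp.eye(h, format='csr') + sp.eye(h, k=1, format='csr')
        return sp.kron(D, sp.eye(w), format='csr')

    A = sp.eye(n, format='csr')
    for D_d, M_d in ((grad_op(1), M_h), (grad_op(0), M_v)):
        m_tilde = M_d.flatten() / (np.abs(D_d @ l) + eps)   # element-wise division
        A = A + D_d.T @ sp.diags(m_tilde) @ D_d

    t = spsolve(A.tocsc(), l)
    return t.reshape(h, w)
```

T_tolight, for example, would be refine_illumination(L_tolight, window_weight(L_tolight, 1), window_weight(L_tolight, 0)), optionally wrapped in the down-/up-sampling described above.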
(2) Luminance transformation function and image exposure determination
Since photographs taken at different exposures are highly correlated, a camera response model is used to describe the correlation between the images accurately. A brightness transformation function expression can be obtained from the camera response model, and the mapping relation between two images of different exposure degrees can be calculated through the brightness transformation function. To enhance the contrast of both the over-dark and the over-bright areas, the optimal exposure ratios need to be found, after which the re-exposed images are calculated through the brightness transformation function.
Determining a luminance transformation function
The correlation between images taken at different exposures is accurately described using a camera response model, so that a series of images can be generated from the input image. The mapping function between two images differing only in exposure is called the brightness transformation function (BTF); given an exposure ratio k_i and a brightness transformation function g, the input image P can be mapped to the i-th image in the exposure set:
P_i = g(P, k_i) (8)
Based on the Beta-Gamma-corrected camera response model, the brightness transformation function is
g(P, k) = βP^γ = e^(b(1−k^a)) P^(k^a) (9)
where β and γ are two model parameters that can be calculated from the camera parameters a, b and the exposure ratio k, and a, b are parameters of the Beta-Gamma-corrected camera response model, normally fixed values. Assuming that the camera information is unknown, fixed camera parameters suitable for most cameras can be used, namely a = −0.3293, b = 1.1258.
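Equation (9) with these fixed parameters translates directly into code; a minimal sketch follows, in which the clipping to [0, 1] is a convenience added here rather than part of the model.

```python
import numpy as np

def btf(P, k, a=-0.3293, b=1.1258):
    """Beta-Gamma brightness transformation function of eq. (9):
    g(P, k) = exp(b * (1 - k**a)) * P ** (k**a),
    mapping an image P in [0, 1] to its virtual exposure at ratio k."""
    gamma = k ** a
    beta = np.exp(b * (1.0 - gamma))
    return np.clip(beta * P ** gamma, 0.0, 1.0)   # clip added for display only
```

Because a < 0, a ratio k > 1 gives γ < 1 and β > 1, so btf(P, 2.0) brightens the image while btf(P, 0.5) darkens it.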
Determining the exposure ratios
To ensure that the contrast of both the over-dark and the over-bright areas is enhanced, the optimal exposure ratios are found and used to raise the brightness of the over-dark areas of the original image and to lower the brightness of its over-bright areas, respectively.
First, the well-exposed and over-bright pixels are removed from the image to obtain an overall darker image, and the low-illumination pixels Q_tolight are extracted according to:
Q_tolight = {P(x) | T_tolight(x) < 0.5} (10)
where Q_tolight contains only the darker pixels and P(x) is the original input image.
Second, the well-exposed and over-dark pixels are removed from the image to obtain an overall over-bright image, and the high-illumination pixels Q_todark are extracted according to:
Q_todark = {P(x) | T_todark(x) > 0.7} (11)
where Q_todark contains only the over-bright pixels.
The brightness of images at different exposures varies greatly, but the colors are substantially the same, so only the brightness component is considered when estimating the optimal exposure ratio. The brightness component B is defined as the geometric mean of the three color channels:
B = (Q_r ∘ Q_g ∘ Q_b)^(1/3) (12)
where Q_r, Q_g and Q_b represent the red, green and blue channels of the input image. The geometric mean is used rather than an arithmetic or weighted arithmetic mean because the brightness transformation function has the same model parameters β and γ over the three color channels, as shown in equation (13):
(g(Q_r, k) ∘ g(Q_g, k) ∘ g(Q_b, k))^(1/3) = β(Q_r ∘ Q_g ∘ Q_b)^(γ/3) = g(B, k) (13)
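A short sketch of the geometric-mean brightness component of equation (12); the small floor on the channel product is an added safeguard for fully black pixels, not part of the patent.

```python
import numpy as np

def brightness(Q):
    """Brightness component B of eq. (12): the geometric mean of the
    three colour channels of an RGB image Q in [0, 1]."""
    Qr, Qg, Qb = Q[..., 0], Q[..., 1], Q[..., 2]
    return np.cbrt(np.maximum(Qr * Qg * Qb, 1e-12))   # floor avoids the 0-product corner case
```

Since β and γ are shared across channels, brightness(btf(Q, k)) equals btf(brightness(Q), k) up to rounding, which is the property that equation (13) states.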
well exposed images have higher visibility than images that are too dark or too bright and can provide more information, and therefore, optimal exposure
Figure BDA0002728098800000093
The most information should be provided. The information quantity is measured by using the image entropy, and the image entropy is defined as 1
·
Figure BDA0002728098800000094
Wherein p isiIs the ith of the histogram of B, the histogram pair
Figure BDA0002728098800000095
Is counted, and N is the range of values on the horizontal axis of the histogram, typically set to 256.
The optimal exposure for enhancing the brightness is computed by maximizing the image entropy:
k̂ = argmax_k H(g(B, k)) (15)
The optimal exposure ratio k̂ can be solved by a one-dimensional minimizer, and to improve computational efficiency the input image can further be resized to 50 × 50 when optimizing k.
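The entropy-maximizing search of equations (14) and (15) reduces to a bounded one-dimensional optimization. The sketch below uses scipy.optimize.minimize_scalar together with a crude strided subsampling to roughly 50 × 50 in place of a proper resize; the search brackets are assumptions of this sketch, and btf refers to the earlier sketch of equation (9).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def image_entropy(B, n_bins=256):
    """Image entropy of eq. (14): H(B) = -sum_i p_i * log2(p_i), with p_i
    taken from an n_bins-bin histogram of the brightness component B."""
    hist, _ = np.histogram(B, bins=n_bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]                       # empty bins contribute nothing
    return float(-(p * np.log2(p)).sum())

def optimal_exposure(B, k_lo, k_hi):
    """Optimal exposure ratio of eq. (15): argmax_k H(g(B, k)), found with a
    bounded 1-D minimizer on a coarsely subsampled brightness image.
    The bracket (k_lo, k_hi) is an ASSUMPTION: use k > 1 when brightening,
    k < 1 when darkening."""
    sy = max(1, B.shape[0] // 50)      # crude stand-in for a 50 x 50 resize
    sx = max(1, B.shape[1] // 50)
    B_small = B[::sy, ::sx]
    res = minimize_scalar(lambda k: -image_entropy(btf(B_small, k)),
                          bounds=(k_lo, k_hi), method='bounded')
    return res.x
```

For example, k1 = optimal_exposure(B_tolight, 1.0, 8.0) searches brightening ratios for the dark image, while k2 = optimal_exposure(B_todark, 0.1, 1.0) searches darkening ratios for the bright one.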
(3) Multiple exposure image fusion
To obtain an image with good overall exposure, images of multiple exposures are fused. In the embodiment of the invention, the fusion framework contains only two images in order to reduce the computational complexity; the differently exposed images are weighted and fused according to the weight matrices estimated in the first step to obtain the final enhanced result image.
For the case in which both over-dark and over-bright phenomena exist in the image, raising the exposure can improve some over-dark areas but does not improve the imaging of the over-bright areas, and it may over-expose the well-exposed areas. To obtain an overall well-exposed image, we can fuse the following images:
R^c = Σ_{i=1}^{N} W_i ∘ P_i^c (16)
where N is the number of images, P_i is the i-th image in the exposure set, W_i is the weight map of the i-th image, c is the index of the three color channels, and R is the enhancement result image. Well-exposed pixels receive larger weights, over-dark and over-bright pixels receive smaller weights, and the weights are normalized so that Σ_{i=1}^{N} W_i = 1.
In the invention, only two images are fused to reduce the computational complexity. With the weight matrix W, the optimal exposure ratio k̂ and the brightness transformation function g obtained in the previous two steps, the fusion framework for raising the brightness of the over-dark area of the image is obtained as:
P_1^c = W_tolight ∘ P^c + (1 − W_tolight) ∘ g(P^c, k_1) (17)
where P_1 is the image with the brightness of the over-dark area raised, P is the original input image, c is the index of the three color channels, and k_1 is the optimal exposure ratio obtained from the low-illumination pixels Q_tolight.
The fusion framework for lowering the brightness of the over-bright area of the image is:
P_2^c = (1 − W_todark) ∘ P^c + W_todark ∘ g(P^c, k_2) (18)
where k_2 is the optimal exposure ratio obtained from the high-illumination pixels Q_todark.
the overall fusion framework is as follows:
Figure BDA0002728098800000106
wherein, R is the enhanced result image finally obtained by the invention.
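Finally, a sketch tying the two branches together. The brighten branch follows equation (17) as reconstructed above; the orientation of the darken branch in equation (18) and, in particular, the plain average used for the final combination are assumptions of this sketch, since the patent's overall rule did not survive extraction. btf is the earlier sketch of equation (9).

```python
import numpy as np

def enhance(P, T_tolight, T_todark, k1, k2, mu=0.5):
    """Two-branch multi-exposure fusion with W = T^mu per eq. (1).
    P is the input RGB image in [0, 1]; T_tolight and T_todark are the
    refined illumination maps; k1 > 1 brightens, k2 < 1 darkens."""
    W1 = (T_tolight ** mu)[..., None]   # ~1 on well-lit pixels: keep original
    W2 = (T_todark ** mu)[..., None]    # ~1 on over-bright pixels (min-channel reading)
    P1 = W1 * P + (1.0 - W1) * btf(P, k1)   # eq. (17): raise dark-area brightness
    P2 = (1.0 - W2) * P + W2 * btf(P, k2)   # eq. (18) as assumed here: lower bright areas
    # ASSUMPTION: the patent's overall combination is not recoverable from this
    # extraction; a plain average of the two branches is a stand-in only.
    return np.clip(0.5 * (P1 + P2), 0.0, 1.0)
```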
The above description is only an embodiment of the present invention and is not intended to limit the protection scope of the present invention; all equivalent structural changes made using the content of this specification, applied directly or indirectly in any related technical field, are likewise included in the protection scope of the present invention.

Claims (8)

1. An image wide dynamic enhancement method based on a multi-exposure fusion frame is characterized by comprising the following steps:
S1, weight matrix estimation in image fusion
S1.1, determining the luminance components of the over-dark and over-bright regions respectively
The luminance component L_tolight(x) of the over-dark region is:
L_tolight(x) = max_{c∈{R,G,B}} P_c(x)
where P_c(x) denotes the gray values of the image pixels in channel c, x denotes a pixel, and R, G and B respectively denote the three color channels;
The luminance component L_todark(x) of the over-bright region is:
L_todark(x) = min_{c∈{R,G,B}} P_c(x)
S1.2, obtaining the window weight matrix
The window weight matrix M_d(x) is calculated as:
M_d(x) = 1 / (|Σ_{y∈w(x)} ∇_d L(y)| + ε), d ∈ {h, v}
where w(x) denotes a local window centered on pixel point x, h denotes the horizontal direction, v denotes the vertical direction, ε is a denominator compensation constant, and L(y) is the pixel value of the luminance-component image at pixel point y;
S1.3, obtaining the optimized scene illumination
Substitute the luminance component L_tolight(x) of the over-dark region and the luminance component L_todark(x) of the over-bright region in turn for L(x) in the following formula, and record the corresponding results T(x) as T_tolight(x) and T_todark(x):
argmin_T ||T − L||_2^2 + λ||M ∘ ∇T||_1
where T(x) is the scene illumination at pixel point x, T_tolight(x) is the scene illumination at pixel point x in the over-dark region, T_todark(x) is the scene illumination at pixel point x in the over-bright region, M is a weight matrix, λ is a coefficient, and ∘ denotes element-wise multiplication;
S1.4, solving the scene illumination T
Solve the following linear function to obtain the scene illumination T:
(I + Σ_{d∈{h,v}} D_d^T Diag(m_d ⊘ (|D_d l| + ε)) D_d) t = l
where I is the identity matrix; m_d is the vectorization result of M_d(x); l and t are the vectorization results of L and T, respectively; ⊘ denotes element-wise division; Diag constructs a diagonal matrix from a vector; and D_d is the Toeplitz matrix obtained from the discrete gradient operator with forward differences;
S1.5, solving the weight matrices
Substitute the optimized scene illumination T_tolight(x) of the over-dark region and the optimized scene illumination T_todark(x) of the over-bright region in turn for T in the following formula; the corresponding W are the over-dark region weight matrix W_tolight and the over-bright region weight matrix W_todark, respectively:
W = T^μ
where μ is the enhancement degree, taking a value of 1/4 to 1/2;
S2, brightness transformation function and optimal exposure ratio of the image
S2.1, determining the brightness transformation function g
According to the Beta-Gamma-corrected camera response model, the brightness transformation function is obtained as:
g(P, k) = βP^γ = e^(b(1−k^a)) P^(k^a)
where P is the input image, a and b are respectively the first and second fixed parameters of the Beta-Gamma-corrected camera response model, and β and γ are respectively the first and second model parameters calculated from the first fixed parameter a, the second fixed parameter b and the exposure ratio k;
S2.2, determining the optimal exposure ratios
S2.2.1, remove part of the pixels in the image to obtain a dark image, and extract the low-illumination pixels Q_tolight from it:
Q_tolight = {P(x) | T_tolight(x) < 0.5}
where P(x) denotes the input image;
S2.2.2, remove part of the pixels in the image to obtain a bright image, and extract the high-illumination pixels Q_todark from it:
Q_todark = {P(x) | T_todark(x) > 0.7}
S2.2.3, determining the low-illumination pixel brightness component B_tolight and the high-illumination pixel brightness component B_todark
The brightness component B is set to:
B = (Q_r ∘ Q_g ∘ Q_b)^(1/3)
where Q_r, Q_g and Q_b respectively denote the three color channels of the image; substituting the three color channels corresponding to the low-illumination pixels Q_tolight and to the high-illumination pixels Q_todark in turn yields the low-illumination pixel brightness component B_tolight and the high-illumination pixel brightness component B_todark;
S2.2.4, the image entropy H(B) is set to:
H(B) = −Σ_{i=1}^{N} p_i log2 p_i
where p_i is the i-th bin of the histogram of the brightness component B, and N denotes the value range of the horizontal axis of that histogram;
S2.2.5, calculating the optimal exposure ratio by the following formula:
k̂ = argmax_k H(g(B, k))
Substitute the low-illumination pixel brightness component B_tolight and the high-illumination pixel brightness component B_todark in turn for P in the transformation function g(P, k) obtained in step S2.1; maximizing H(B) from step S2.2.4 yields the optimal exposure ratio k_1 corresponding to the low-illumination pixels Q_tolight and the optimal exposure ratio k_2 corresponding to the high-illumination pixels Q_todark;
S3, image enhancement processing
S3.1, obtaining the image P with the brightness of the image over-dark area improved through the following formula1
Figure FDA0002728098790000035
Wherein c is an index of three color channels;
S3.2, obtain the image P_2 with the brightness of the over-bright region of the image lowered through the following formula:
P_2^c = (1 − W_todark) ∘ P^c + W_todark ∘ g(P^c, k_2)
S3.3, obtain the final enhanced result image R through a formula combining P_1 and P_2:
[the equation image combining P_1 and P_2 into R is not recoverable from the source]
2. The image wide dynamic enhancement method based on the multi-exposure fusion framework as claimed in claim 1, characterized in that: in step S1.4, the linear function is solved to obtain the scene illumination T by using a multi-resolution preconditioned conjugate gradient solver of O(N) complexity for the optimization.
3. The method of claim 1, wherein the method for image wide dynamic enhancement based on the multi-exposure fusion framework is characterized in that: step S1.3 specifically comprises first down-sampling the image, then substituting the luminance component L_tolight(x) of the over-dark region and the luminance component L_todark(x) of the over-bright region in turn for L(x) in the optimization formula of step S1.3 to obtain the corresponding optimized scene illumination, recorded respectively as the optimized scene illumination T_tolight(x) of the over-dark region and the optimized scene illumination T_todark(x) of the over-bright region, and finally restoring the scene illumination to the original size through up-sampling.
4. The method for image wide dynamic enhancement based on multi-exposure fusion framework as claimed in claim 1, 2 or 3, characterized in that: in step S1.5, the value of μ is 1/2.
5. The method for image wide dynamic enhancement based on multi-exposure fusion framework as claimed in claim 4, characterized in that: in step S1.2, the size of the local window w(x) is 5.
6. The method for image wide dynamic enhancement based on multi-exposure fusion framework as claimed in claim 5, characterized in that: in step S2.2.5, the optimal exposure ratio k̂ is calculated by a one-dimensional minimizer.
7. The method for image wide dynamic enhancement based on multi-exposure fusion framework as claimed in claim 6, characterized in that: in step S2.2.5, the image is resized to 50 × 50 pixels for the one-dimensional minimizer calculation.
8. The method for image wide dynamic enhancement based on multi-exposure fusion framework as claimed in claim 7, characterized in that: in step S2.1, the value of the first fixed parameter a is-0.3293, and the value of the second fixed parameter b is 1.1258.
CN202011109462.9A 2020-10-16 2020-10-16 Image wide dynamic enhancement method based on multi-exposure fusion framework Pending CN112381724A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011109462.9A CN112381724A (en) 2020-10-16 2020-10-16 Image wide dynamic enhancement method based on multi-exposure fusion framework

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011109462.9A CN112381724A (en) 2020-10-16 2020-10-16 Image wide dynamic enhancement method based on multi-exposure fusion framework

Publications (1)

Publication Number Publication Date
CN112381724A true CN112381724A (en) 2021-02-19

Family

ID=74579917

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011109462.9A Pending CN112381724A (en) 2020-10-16 2020-10-16 Image wide dynamic enhancement method based on multi-exposure fusion framework

Country Status (1)

Country Link
CN (1) CN112381724A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256516A (en) * 2021-05-18 2021-08-13 中国科学院长春光学精密机械与物理研究所 Image enhancement method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107833184A (en) * 2017-10-12 2018-03-23 北京大学深圳研究生院 A kind of image enchancing method for merging framework again based on more exposure generations
CN110827225A (en) * 2019-11-13 2020-02-21 山东科技大学 Non-uniform illumination underwater image enhancement method based on double exposure frame

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107833184A (en) * 2017-10-12 2018-03-23 北京大学深圳研究生院 A kind of image enchancing method for merging framework again based on more exposure generations
CN110827225A (en) * 2019-11-13 2020-02-21 山东科技大学 Non-uniform illumination underwater image enhancement method based on double exposure frame

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHENQIANG YING et al.: "A New Image Contrast Enhancement Algorithm Using Exposure Fusion Framework", International Conference on Computer Analysis of Images and Patterns *
SIMA Ziling et al.: "Low-illumination image enhancement method based on simulated multi-exposure fusion", Journal of Computer Applications *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256516A (en) * 2021-05-18 2021-08-13 中国科学院长春光学精密机械与物理研究所 Image enhancement method

Similar Documents

Publication Publication Date Title
CN112419181B (en) Method for enhancing detail of wide dynamic infrared image
EP1924966B1 (en) Adaptive exposure control
CN109754377B (en) Multi-exposure image fusion method
KR101026577B1 (en) A system and process for generating high dynamic range video
WO2019071981A1 (en) Image enhancement method based on multi-exposure generation and re-fusion frame
CN106897981A (en) A kind of enhancement method of low-illumination image based on guiding filtering
CN112330531B (en) Image processing method, image processing device, electronic equipment and storage medium
CN112734650B (en) Virtual multi-exposure fusion based uneven illumination image enhancement method
JP4021261B2 (en) Image processing device
CN113129391B (en) Multi-exposure fusion method based on multi-exposure image feature distribution weight
CN115731146B (en) Multi-exposure image fusion method based on color gradient histogram feature optical flow estimation
CN113222866B (en) Gray scale image enhancement method, computer readable medium and computer system
CN112785534A (en) Ghost-removing multi-exposure image fusion method in dynamic scene
CN115883755A (en) Multi-exposure image fusion method under multi-type scene
CN116188339A (en) Retinex and image fusion-based scotopic vision image enhancement method
CN110009574B (en) Method for reversely generating high dynamic range image from low dynamic range image
CN112381724A (en) Image wide dynamic enhancement method based on multi-exposure fusion framework
Han et al. Automatic illumination and color compensation using mean shift and sigma filter
Huang et al. An end-to-end dehazing network with transitional convolution layer
JP4664938B2 (en) Image processing apparatus, image processing method, and program
JP4147155B2 (en) Image processing apparatus and method
JP3731741B2 (en) Color moving image processing method and processing apparatus
CN111640068A (en) Unsupervised automatic correction method for image exposure
JP4769332B2 (en) Image processing apparatus, image processing method, and program
CN114240767A (en) Image wide dynamic range processing method and device based on exposure fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210219)