CN110211077B - Multi-exposure image fusion method based on high-order singular value decomposition - Google Patents

Multi-exposure image fusion method based on high-order singular value decomposition

Info

Publication number
CN110211077B
Authority
CN
China
Prior art keywords
image
brightness
exposure
fusion
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910396691.4A
Other languages
Chinese (zh)
Other versions
CN110211077A (en
Inventor
Li Li
Luo Ting
Xu Haiyong
Wu Shengcong
He Zhouyan
Zhang Junjun
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University Shangyu Science and Engineering Research Institute Co Ltd
Original Assignee
Hangzhou Dianzi University Shangyu Science and Engineering Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University Shangyu Science and Engineering Research Institute Co Ltd
Priority to CN201910396691.4A
Publication of CN110211077A
Application granted
Publication of CN110211077B
Active legal status (current)
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10024 — Color image
    • G06T2207/20 — Special algorithmic details
    • G06T2207/20212 — Image combination
    • G06T2207/20221 — Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multi-exposure image fusion method based on high-order singular value decomposition. The method divides the brightness channel image of each exposure image into overlapping brightness blocks, obtains the kernel tensor and the third-mode factor matrix of the brightness blocks by high-order singular value decomposition, and from these derives the characteristic coefficients and activity level measures of the brightness blocks; a fused brightness block is then obtained from the first-mode factor matrix, the second-mode factor matrix, the characteristic coefficients and the activity level measures, and the resulting brightness channel image is linearly transformed to give the fused brightness channel image. The fused first chrominance channel image is obtained by calculating a fusion coefficient for each pixel point in the first chrominance channel images of the exposure images, and the fused second chrominance channel image is obtained in the same way from the second chrominance channel images. The fused image is then obtained from the fused images of the three channels. The advantage is that better detail texture and rich color information can be obtained.

Description

Multi-exposure image fusion method based on high-order singular value decomposition
Technical Field
The invention relates to an image fusion technology, in particular to a multi-exposure image fusion method based on high-order singular value decomposition.
Background
The process of combining information from two or more images of the same scene into one more informative image is called image fusion. Multi-Exposure image Fusion (MEF) is one of the classic applications of image fusion. Because of the dynamic range limitations of digital cameras, natural scenes typically have a greater dynamic range than the images a digital camera can capture. High Dynamic Range (HDR) imaging techniques estimate a Camera Response Function (CRF) from a set of Low Dynamic Range (LDR) images and then reconstruct a high dynamic range image using the inverse of the camera response function. Since most standard displays currently in use are low dynamic range, once the high dynamic range image has been acquired a tone mapping step is required to compress its dynamic range for display; however, this process is computationally expensive, and the quality of the high dynamic range image depends on the accuracy with which the camera response function is estimated. Multi-exposure image fusion is therefore an effective and convenient alternative to complex high dynamic range imaging techniques.
Multi-exposure image fusion fuses a series of differently exposed images into a single high-quality low dynamic range image without camera response function recovery or tone mapping. In A. Goshtasby, "Fusion of multi-exposure images," Image and Vision Computing, vol. 23, pp. 611-618, 2005, a block-level fusion method is used, in which the image is divided into uniform blocks and the best image block is selected for fusion by a minimum average method; however, the contrast and saturation of the fused image are poor. In B. Gu, W. Li, J. Wong, M. Zhu, and M. Wang, "Gradient field multi-exposure images fusion for high dynamic range image visualization," J. Vis. Commun. Image Represent., vol. 23, no. 4, pp. 604-610, May 2012, it is proposed to modify the gradient field by an iterative method using two averaging filters and multi-scale nonlinear compression, obtaining the result by solving the Poisson equation and then linearly stretching it to the common display range; this method, however, is prone to artifacts. In S. Raman and S. Chaudhuri, "Bilateral filter based compositing for variable exposure photography," in Proc. Eurographics, 2009, pp. 1-4, an effective scene compositing method using an edge-preserving filter, the bilateral filter, is proposed; because no constraint is placed on global brightness uniformity, the colors of the fused image are easily distorted and the overall tone of the image is dark. In K. Ma, H. Li, Z. Wang, and D. Meng, "Robust Multi-Exposure Image Fusion: A Structural Patch Decomposition Approach," IEEE Trans. Image Process., vol. 26, no. 5, pp. 2519-2532, May 2017, a structural patch decomposition method is used to fuse multi-exposure images, but this method does not easily recover texture information and its artifact removal is unsatisfactory.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a multi-exposure image fusion method based on high-order singular value decomposition which processes brightness and chrominance separately in the multi-exposure image fusion process and can obtain better detail texture and rich color information.
The technical scheme adopted by the invention for solving the technical problems is as follows: a multi-exposure image fusion method based on high-order singular value decomposition is characterized by comprising the following steps:
step 1: select D differently exposed images, each of width M and height N; then convert each exposure image from the RGB color space to the YCbCr color space to obtain the brightness channel image, the first chrominance channel image and the second chrominance channel image of each exposure image, and denote the brightness channel image, the first chrominance channel image and the second chrominance channel image of the d-th exposure image as Y_d, Cb_d and Cr_d; wherein D is a positive integer, D > 1, d is a positive integer with initial value 1, and 1 ≤ d ≤ D;
step 2: obtain the brightness channel image after fusing the brightness channel images of all the exposure images, the specific process being as follows:
step 2_1: slide a window of width w and height h over the brightness channel image of each exposure image with a step of r pixel points, dividing the brightness channel image of each exposure image into L brightness blocks, and denote the i-th brightness block in Y_d as B_{d,i}; record the number of times each pixel point in the brightness channel image of each exposure image is overlapped during the brightness block division; then stack the brightness blocks at the same position in the brightness channel images of all the exposure images into a tensor of size h × w × D, obtaining L tensors in total, each tensor corresponding to D brightness blocks, and denote the tensor composed of the i-th brightness blocks in the brightness channel images of all the exposure images as A_i; wherein L = (⌊(M − w)/r⌋ + 1) × (⌊(N − h)/r⌋ + 1), r is a positive integer, 1 ≤ r ≤ min(w, h), min() is the minimum function, i is a positive integer with initial value 1, and 1 ≤ i ≤ L;
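As an illustration of step 2_1, the following Python sketch (all function and variable names here are ours, not the patent's) divides the D brightness channels into overlapping blocks, records the per-pixel overlap counts, and stacks co-located blocks into the L tensors A_i; the embodiment values w = h = 11 and r = 2 are used as defaults:

```python
import numpy as np

def split_into_tensors(Ys, w=11, h=11, r=2):
    """Step 2_1 sketch: cut each of the D brightness channels into
    overlapping h x w blocks and stack co-located blocks into
    h x w x D tensors; also count how often each pixel is covered."""
    N, M = Ys[0].shape                        # height N, width M
    count = np.zeros((N, M))                  # overlap count per pixel
    tensors, positions = [], []
    for y0 in range(0, N - h + 1, r):
        for x0 in range(0, M - w + 1, r):
            count[y0:y0 + h, x0:x0 + w] += 1
            tensors.append(np.stack([Y[y0:y0 + h, x0:x0 + w] for Y in Ys],
                                    axis=2))  # one tensor A_i per position
            positions.append((y0, x0))
    return tensors, positions, count          # len(tensors) == L
```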
step 2_2: perform high-order singular value decomposition on each tensor; for A_i, the high-order singular value decomposition gives A_i = S_i ×_1 U_i ×_2 V_i ×_3 W_i; wherein S_i denotes the kernel tensor of A_i, U_i denotes the first-mode factor matrix of A_i, V_i denotes the second-mode factor matrix of A_i, W_i denotes the third-mode factor matrix of A_i, and the symbols "×_1", "×_2" and "×_3" denote the first-mode, second-mode and third-mode products of a tensor;
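A minimal HOSVD sketch for step 2_2: each factor matrix is taken as the left singular vectors of the corresponding mode unfolding, and the kernel tensor is obtained by mode products with the transposed factors (this is the standard HOSVD construction; the helper names are ours):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: the mode-n fibers of T become the columns.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(Tmat, mode, shape):
    # Inverse of unfold for a target tensor of the given shape.
    rest = [s for ax, s in enumerate(shape) if ax != mode]
    return np.moveaxis(Tmat.reshape([shape[mode]] + rest), 0, mode)

def mode_product(T, M, mode):
    # n-mode product T x_n M: M acts on the mode-n fibers of T.
    shape = list(T.shape)
    shape[mode] = M.shape[0]
    return fold(M @ unfold(T, mode), mode, shape)

def hosvd(A):
    """HOSVD of a 3rd-order tensor: A = S x_1 U x_2 V x_3 W."""
    U, V, W = [np.linalg.svd(unfold(A, n), full_matrices=False)[0]
               for n in range(3)]
    S = mode_product(mode_product(mode_product(A, U.T, 0), V.T, 1), W.T, 2)
    return S, U, V, W
```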
step 2_3: obtain the characteristic coefficient of each brightness block corresponding to each tensor, and denote the characteristic coefficient of the d-th brightness block corresponding to A_i as C_{d,i}, C_{d,i} = S_i ×_3 W_i(d,:); wherein the d-th brightness block corresponding to A_i is B_{d,i}, W_i(d,:) denotes the d-th row of W_i, and C_{d,i} is the coefficient matrix, with respect to the kernel tensor, of the d-th brightness block corresponding to A_i, i.e. of B_{d,i};
step 2_4: calculate the activity level measure of each brightness block corresponding to each tensor, and denote the activity level measure of the d-th brightness block corresponding to A_i as θ_{d,i}, θ_{d,i} = Σ_{m=1}^{h} Σ_{n=1}^{w} |C_{d,i}(m, n)|; wherein m is a positive integer with initial value 1, 1 ≤ m ≤ h, n is a positive integer with initial value 1, 1 ≤ n ≤ w, the symbol "| |" is the absolute value operator, and C_{d,i}(m, n) denotes the value of C_{d,i} at subscript (m, n);
step 2_5: obtain the fusion coefficient matrix of each tensor, and denote the fusion coefficient matrix of A_i as E_i, E_i = ( Σ_{d=1}^{D} (θ_{d,i})^k · C_{d,i} ) / ( Σ_{d=1}^{D} (θ_{d,i})^k ); wherein k is a weight index, k ∈ (0, 1];
step 2_6: calculate the fused brightness block corresponding to each tensor, and denote the fused brightness block corresponding to A_i as F_i, F_i = U_i × E_i × (V_i)^T; wherein (V_i)^T is the transpose of V_i;
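Continuing the sketch above, the following function (our naming) carries out steps 2_3 to 2_6 for one tensor A_i: it extracts the characteristic coefficient C_{d,i} of each block from the kernel tensor and the rows of W_i, measures activity as the L1 norm of the coefficients, forms the activity-weighted fusion coefficient matrix E_i with weight index k, and reconstructs the fused block F_i = U_i E_i (V_i)^T:

```python
def fuse_tensor(A, k=0.5):
    """Steps 2_3-2_6 sketch: fuse the D blocks stacked in A (h x w x D)."""
    S, U, V, W = hosvd(A)
    D = A.shape[2]
    # Step 2_3: characteristic coefficient C_d = S x_3 W(d, :).
    C = np.stack([np.tensordot(S, W[d], axes=([2], [0])) for d in range(D)],
                 axis=2)
    # Step 2_4: activity level measure, the L1 norm of each C_d.
    theta = np.abs(C).sum(axis=(0, 1))
    # Step 2_5: activity-weighted fusion coefficient matrix E.
    wgt = theta ** k
    E = (C * wgt).sum(axis=2) / (wgt.sum() + 1e-12)
    # Step 2_6: fused brightness block F = U E V^T.
    return U @ E @ V.T
```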
step 2_7: from the L fused brightness blocks, obtain the overlapped brightness channel image formed by the L fused brightness blocks, denoted Y_out, and denote the pixel value of the pixel point with coordinate position (x, y) in Y_out as Y_out(x, y); then divide the superposed pixel value of each pixel point in Y_out by the number of times that pixel point is overlapped to obtain a brightness channel image, denoted Y_avg, and denote the pixel value of the pixel point with coordinate position (x, y) in Y_avg as Y_avg(x, y); wherein Y_out and Y_avg both have width M and height N, 1 ≤ x ≤ M, and 1 ≤ y ≤ N;
step 2_8: apply a linear transformation optimization to Y_avg to obtain the brightness channel image after fusing the brightness channel images of all the exposure images, denoted Y_F, and denote the pixel value of the pixel point with coordinate position (x, y) in Y_F as Y_F(x, y), Y_F(x, y) = 255 × (Y_avg(x, y) − Y_min) / (Y_max − Y_min); wherein Y_min denotes the minimum pixel value in Y_avg and Y_max denotes the maximum pixel value in Y_avg;
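Putting steps 2_1 to 2_8 together, again as a sketch built on the helpers above (the names Y_avg and Y_F follow the notation used here; the 255 scale assumes 8-bit channels): fused blocks are overlap-added, each pixel is divided by its overlap count, and the result is linearly stretched to the full brightness range:

```python
def fuse_luminance(Ys, w=11, h=11, r=2, k=0.5):
    """Steps 2_1-2_8 sketch: fuse D brightness channels (float, 0..255)."""
    N, M = Ys[0].shape
    tensors, positions, count = split_into_tensors(Ys, w, h, r)
    acc = np.zeros((N, M))
    for (y0, x0), A in zip(positions, tensors):
        acc[y0:y0 + h, x0:x0 + w] += fuse_tensor(A, k)  # overlap-add F_i
    Y_avg = acc / np.maximum(count, 1)  # step 2_7: average the overlaps
    # Step 2_8: linear stretch so the fused channel spans the full range.
    return 255.0 * (Y_avg - Y_avg.min()) / (Y_avg.max() - Y_avg.min())
```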
step 3: obtain the first chrominance channel image after fusing the first chrominance channel images of all the exposure images, the specific process being:
step 3_1: calculate the fusion coefficient of each pixel point in the first chrominance channel image of each exposure image, and denote the fusion coefficient of the pixel point with coordinate position (x, y) in Cb_d as ω_d(x, y), ω_d(x, y) = |Cb_d(x, y) − 128|; wherein 1 ≤ x ≤ M, 1 ≤ y ≤ N, the symbol "| |" is the absolute value operator, and Cb_d(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in Cb_d;
step 3_2: calculate the first chrominance channel image after fusing the first chrominance channel images of all the exposure images, denoted Cb_F, and denote the pixel value of the pixel point with coordinate position (x, y) in Cb_F as Cb_F(x, y), Cb_F(x, y) = ( Σ_{d=1}^{D} ω_d(x, y) · Cb_d(x, y) ) / ( Σ_{d=1}^{D} ω_d(x, y) );
And 4, step 4: acquiring a second chrominance channel image after the fusion of the second chrominance channel images of all the exposure images, wherein the specific process comprises the following steps:
step 4_ 1: calculating the fusion coefficient of each pixel point in the second color channel image of each exposure image, and converting Cr into CrdThe fusion coefficient of the pixel point with the middle coordinate position (x, y) is recorded as
Figure GDA00028267639700000416
Figure GDA00028267639700000417
Wherein, Crd(x, y) represents CrdThe middle coordinate position is the pixel value of the pixel point of (x, y);
step 4_ 2: calculating the second chroma channel image after the fusion of the second chroma channel images of all the exposure images, and recording as
Figure GDA0002826763970000051
Figure GDA0002826763970000052
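Steps 3 and 4 apply the same 128-centred weighting to Cb and Cr, so one sketch covers both (our naming; the fallback to 128 when every exposure is exactly neutral at a pixel is our assumption, since the formula leaves the zero-denominator case implicit):

```python
def fuse_chroma(Cs):
    """Steps 3-4 sketch: fuse D chrominance channels (Cb or Cr)."""
    C = np.stack([np.asarray(c, dtype=float) for c in Cs], axis=0)
    wgt = np.abs(C - 128.0)               # fusion coefficient per pixel
    den = wgt.sum(axis=0)
    out = (wgt * C).sum(axis=0) / np.where(den > 0, den, 1.0)
    return np.where(den > 0, out, 128.0)  # all-neutral pixels stay at 128
```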
And 5: will be provided with
Figure GDA0002826763970000053
And converting the formed images of the YCbCr color space from the YCbCr color space to the RGB color space to obtain a fusion image of the multi-exposure images.
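A sketch of step 5 and of the whole pipeline. The patent does not state which YCbCr variant is used, so the full-range BT.601 (JPEG) inverse transform below is an assumption, as is the final clipping to 8 bits:

```python
def ycbcr_to_rgb(Y, Cb, Cr):
    # Full-range BT.601 inverse transform (assumed convention).
    R = Y + 1.402 * (Cr - 128.0)
    G = Y - 0.344136 * (Cb - 128.0) - 0.714136 * (Cr - 128.0)
    B = Y + 1.772 * (Cb - 128.0)
    return np.clip(np.stack([R, G, B], axis=2), 0, 255).astype(np.uint8)

def fuse_exposures(Ys, Cbs, Crs):
    """End-to-end sketch: step 2 (Y), steps 3-4 (Cb, Cr), step 5 (to RGB)."""
    return ycbcr_to_rgb(fuse_luminance(Ys), fuse_chroma(Cbs),
                        fuse_chroma(Crs))
```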
In step 2_1, w = h = 11 and r = 2 are taken.
In step 2_5, k = 0.5 is taken.
Compared with the prior art, the invention has the advantages that:
1) The method converts the RGB image into a YCbCr image and performs fusion separately on the brightness channel and the chrominance channels, thereby preventing changes in the brightness information from affecting the chrominance information, so that the fused color image retains better texture detail and color information.
2) The method uses High-Order Singular Value Decomposition (HOSVD) in the fusion of the brightness channels; HOSVD is an efficient data decomposition technique that preserves the structural information of the data well.
Drawings
FIG. 1 is a block diagram of an overall implementation of the method of the present invention;
FIG. 2a is a fused image obtained by fusing the multi-exposure image sequence "LightHouse" using the method of the present invention;
FIG. 2b is a fused image obtained by fusing the multi-exposure image sequence "LightHouse" using the gsaverage method;
FIG. 2c is a fused image obtained by fusing the multi-exposure image sequence "LightHouse" using the Gu12 method;
FIG. 2d is a fused image obtained by fusing the multi-exposure image sequence "LightHouse" using the Li12 method;
FIG. 2e is a fused image obtained by fusing the multi-exposure image sequence "LightHouse" using the Li13 method;
FIG. 2f is a fused image obtained by fusing the multi-exposure image sequence "LightHouse" using the lsaverage method;
FIG. 2g is a fused image obtained by fusing the multi-exposure image sequence "LightHouse" using the Raman09 method;
FIG. 2h is a fused image obtained by fusing the multi-exposure image sequence "LightHouse" using the Vonikakis11 method;
FIG. 3a is a fused image obtained by fusing the multi-exposure image sequence "Madison" using the method of the present invention;
FIG. 3b is a fused image obtained by fusing the multi-exposure image sequence "Madison" using the gsaverage method;
FIG. 3c is a fused image obtained by fusing the multi-exposure image sequence "Madison" using the Gu12 method;
FIG. 3d is a fused image obtained by fusing the multi-exposure image sequence "Madison" using the Li12 method;
FIG. 3e is a fused image obtained by fusing the multi-exposure image sequence "Madison" using the Li13 method;
FIG. 3f is a fused image obtained by fusing the multi-exposure image sequence "Madison" using the lsaverage method;
FIG. 3g is a fused image obtained by fusing the multi-exposure image sequence "Madison" using the Raman09 method;
FIG. 3h is a fused image obtained by fusing the multi-exposure image sequence "Madison" using the Vonikakis11 method;
FIG. 4 shows the influence of different values of the weight index k on Q^{AB/F} when the size of the sliding window, i.e. the size of the brightness block, is set to 11 × 11 and the step size of the sliding window is set to 2;
FIG. 5 shows the influence of different brightness block sizes on Q^{AB/F} when the weight index is 0.5 and the step size of the sliding window is set to 2;
FIG. 6 shows the influence of the step size of the sliding window on Q^{AB/F} when the weight index is 0.5 and the size of the brightness block is set to 11 × 11.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and embodiments.
The invention provides a multi-exposure image fusion method based on high-order singular value decomposition, the overall implementation block diagram of which is shown in figure 1, and the method comprises the following steps:
Step 1: select D differently exposed images, each of width M and height N; then convert each exposure image from the RGB color space to the YCbCr color space to obtain the brightness channel (Y) image, the first chrominance channel (Cb) image and the second chrominance channel (Cr) image of each exposure image, and correspondingly denote the brightness channel image, the first chrominance channel image and the second chrominance channel image of the d-th exposure image as Y_d, Cb_d and Cr_d; wherein D is a positive integer, D > 1, e.g. D = 100, d is a positive integer with initial value 1, and 1 ≤ d ≤ D.
Step 2: obtain the brightness channel image after fusing the brightness channel images of all the exposure images, the specific process being as follows:
Step 2_1: slide a window of width w and height h over the brightness channel image of each exposure image with a step of r pixel points, dividing the brightness channel image of each exposure image into L brightness blocks, and denote the i-th brightness block in Y_d as B_{d,i}; overlapping occurs during the division of the brightness blocks, and different pixel points may be overlapped different numbers of times, so the number of times each pixel point in the brightness channel image of each exposure image is overlapped during the brightness block division is recorded; then stack the brightness blocks at the same position in the brightness channel images of all the exposure images into a tensor of size h × w × D, obtaining L tensors in total, each tensor corresponding to D brightness blocks, and denote the tensor composed of the i-th brightness blocks in the brightness channel images of all the exposure images as A_i; wherein L = (⌊(M − w)/r⌋ + 1) × (⌊(N − h)/r⌋ + 1); the window is generally taken square, w = h, and in this embodiment w = h = 11; r is a positive integer, 1 ≤ r ≤ min(w, h), min() is the minimum function, and in this embodiment r = 2; i is a positive integer with initial value 1, and 1 ≤ i ≤ L.
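As a concrete check of the formula for L, take for illustration the 512 × 340 "LightHouse" sequence of Table 1 together with the embodiment values w = h = 11 and r = 2:

L = (⌊(512 − 11)/2⌋ + 1) × (⌊(340 − 11)/2⌋ + 1) = 251 × 165 = 41415,

so each brightness channel is divided into 41415 overlapping brightness blocks.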
Step 2_2: perform high-order singular value decomposition on each tensor; for A_i, the high-order singular value decomposition gives A_i = S_i ×_1 U_i ×_2 V_i ×_3 W_i; wherein S_i denotes the kernel tensor of A_i, U_i denotes the first-mode factor matrix of A_i, V_i denotes the second-mode factor matrix of A_i, W_i denotes the third-mode factor matrix of A_i, and the symbols "×_1", "×_2" and "×_3" denote the first-mode, second-mode and third-mode products of a tensor.
Step 2_3: obtain the characteristic coefficient of each brightness block corresponding to each tensor, and denote the characteristic coefficient of the d-th brightness block corresponding to A_i as C_{d,i}, C_{d,i} = S_i ×_3 W_i(d,:); wherein the d-th brightness block corresponding to A_i is B_{d,i}, and W_i(d,:) denotes the d-th row of W_i; that is, the d-th brightness block corresponding to A_i, i.e. B_{d,i}, can be expressed as B_{d,i} = U_i × C_{d,i} × (V_i)^T, where (V_i)^T is the transpose of V_i.
Step 2_4: calculate the activity level measure of each brightness block corresponding to each tensor, and denote the activity level measure of the d-th brightness block corresponding to A_i as θ_{d,i}, θ_{d,i} = Σ_{m=1}^{h} Σ_{n=1}^{w} |C_{d,i}(m, n)|; wherein m is a positive integer with initial value 1, 1 ≤ m ≤ h, n is a positive integer with initial value 1, 1 ≤ n ≤ w, the symbol "| |" is the absolute value operator, and C_{d,i}(m, n) denotes the value of C_{d,i} at subscript (m, n).
Step 2_5: obtain the fusion coefficient matrix of each tensor, and denote the fusion coefficient matrix of A_i as E_i, E_i = ( Σ_{d=1}^{D} (θ_{d,i})^k · C_{d,i} ) / ( Σ_{d=1}^{D} (θ_{d,i})^k ); wherein k is a weight index, k ∈ (0, 1]; in this embodiment k = 0.5.
Step 2_6: calculate the fused brightness block corresponding to each tensor, and denote the fused brightness block corresponding to A_i as F_i, F_i = U_i × E_i × (V_i)^T; wherein (V_i)^T is the transpose of V_i.
Step 2_7: from the L fused brightness blocks, obtain the overlapped brightness channel image formed by the L fused brightness blocks, denoted Y_out, and denote the pixel value of the pixel point with coordinate position (x, y) in Y_out as Y_out(x, y); then divide the superposed pixel value of each pixel point in Y_out by the number of times that pixel point is overlapped to obtain a brightness channel image, denoted Y_avg, and denote the pixel value of the pixel point with coordinate position (x, y) in Y_avg as Y_avg(x, y); for example, if the pixel point with coordinate position (x, y) in Y_out is overlapped 3 times, i.e. it belongs to 3 brightness blocks, and the three superposed pixel values are 50, 40 and 80, then Y_avg(x, y) is (50 + 40 + 80) divided by 3; wherein Y_out and Y_avg both have width M and height N, 1 ≤ x ≤ M, and 1 ≤ y ≤ N.
Step 2_8: to ensure that the fused brightness occupies the entire range of the brightness channel and thus yields a higher-contrast image, apply a linear transformation optimization to Y_avg to obtain the brightness channel image after fusing the brightness channel images of all the exposure images, denoted Y_F, and denote the pixel value of the pixel point with coordinate position (x, y) in Y_F as Y_F(x, y), Y_F(x, y) = 255 × (Y_avg(x, y) − Y_min) / (Y_max − Y_min); wherein Y_min denotes the minimum pixel value in Y_avg and Y_max denotes the maximum pixel value in Y_avg.
Step 3: obtain the first chrominance channel image after fusing the first chrominance channel images of all the exposure images, the specific process being:
Step 3_1: calculate the fusion coefficient of each pixel point in the first chrominance channel image of each exposure image, and denote the fusion coefficient of the pixel point with coordinate position (x, y) in Cb_d as ω_d(x, y), ω_d(x, y) = |Cb_d(x, y) − 128|; wherein 1 ≤ x ≤ M, 1 ≤ y ≤ N, the symbol "| |" is the absolute value operator, and Cb_d(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in Cb_d; the closer the pixel value of a pixel point in Cb_d is to 128, the less color information that pixel point carries, so the fusion coefficient of each pixel point in Cb_d is determined by the absolute value of the difference between its pixel value and 128.
Step 3_2: calculate the first chrominance channel image after fusing the first chrominance channel images of all the exposure images, denoted Cb_F, and denote the pixel value of the pixel point with coordinate position (x, y) in Cb_F as Cb_F(x, y), Cb_F(x, y) = ( Σ_{d=1}^{D} ω_d(x, y) · Cb_d(x, y) ) / ( Σ_{d=1}^{D} ω_d(x, y) ).
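As a numerical illustration with hypothetical values, suppose D = 3 and a pixel has Cb values 120, 140 and 128 in the three exposures. The fusion coefficients of step 3_1 are |120 − 128| = 8, |140 − 128| = 12 and |128 − 128| = 0, so step 3_2 gives Cb_F(x, y) = (8 × 120 + 12 × 140) / (8 + 12) = 2640 / 20 = 132; the neutral sample at 128 contributes nothing.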
Step 4: obtain the second chrominance channel image after fusing the second chrominance channel images of all the exposure images, the specific process being:
Step 4_1: calculate the fusion coefficient of each pixel point in the second chrominance channel image of each exposure image, and denote the fusion coefficient of the pixel point with coordinate position (x, y) in Cr_d as ν_d(x, y), ν_d(x, y) = |Cr_d(x, y) − 128|; wherein Cr_d(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in Cr_d; the closer the pixel value of a pixel point in Cr_d is to 128, the less color information that pixel point carries, so the fusion coefficient of each pixel point in Cr_d is determined by the absolute value of the difference between its pixel value and 128.
Step 4_2: calculate the second chrominance channel image after fusing the second chrominance channel images of all the exposure images, denoted Cr_F, and denote the pixel value of the pixel point with coordinate position (x, y) in Cr_F as Cr_F(x, y), Cr_F(x, y) = ( Σ_{d=1}^{D} ν_d(x, y) · Cr_d(x, y) ) / ( Σ_{d=1}^{D} ν_d(x, y) ).
Step 5: convert the YCbCr color space image formed by Y_F, Cb_F and Cr_F from the YCbCr color space to the RGB color space to obtain the fused image of the multi-exposure images.
To further illustrate the feasibility and effectiveness of the method of the present invention, the following experiments were conducted.
Ten multi-exposure image sequences of different scenes with high contrast and rich detail are used: Balloons, BelgiumHouse, Cadik, Candle, Cave, House, Kluki, Lamp, LightHouse and Madison. Table 1 lists the information for each multi-exposure image sequence, including its name, spatial resolution and number of exposure images.
TABLE 1 Multi-exposure image sequences

Multi-exposure image sequence    Size of exposure images (width × height × number)
Balloons       512 × 339 × 9
BelgiumHouse   512 × 384 × 9
Cadik          512 × 384 × 15
Candle         512 × 364 × 3
Cave           512 × 384 × 3
House          512 × 340 × 4
Kluki          512 × 341 × 3
Lamp           512 × 342 × 6
LightHouse     512 × 340 × 3
Madison        512 × 384 × 30
Seven classical multi-exposure image fusion algorithms are selected for comparison with the method of the present invention in order to verify its feasibility and effectiveness. The seven algorithms are: the global averaging method, abbreviated as gsaverage; the method proposed in B. Gu, W. Li, J. Wong, M. Zhu, and M. Wang, "Gradient field multi-exposure images fusion for high dynamic range image visualization," J. Vis. Commun. Image Represent., vol. 23, no. 4, pp. 604-610, May 2012, abbreviated as Gu12; the method proposed in Z. G. Li, J. H. Zheng, and S. Rahardja, "Detail-enhanced exposure fusion," IEEE Trans. Image Process., vol. 21, no. 11, pp. 4672-4676, 2012, abbreviated as Li12; the method abbreviated as Li13; the local averaging method, abbreviated as lsaverage; the method proposed in S. Raman and S. Chaudhuri, "Bilateral filter based compositing for variable exposure photography," in Proc. Eurographics, 2009, pp. 1-4, abbreviated as Raman09; and the method proposed in V. Vonikakis, O. Bouzos, and I. Andreadis, "Multi-exposure image fusion based on illumination estimation," in Proc. SIPA 2011, pp. 135-142, Heraklion, Crete, Greece, abbreviated as Vonikakis11.
1) Subjective evaluation
The method of the present invention and the gsaverage, Gu12, Li12, Li13, lsaverage, Raman09 and Vonikakis11 methods are used to fuse the multi-exposure image sequence "LightHouse"; the fused images obtained by the eight methods are shown in FIGS. 2a to 2h. From FIG. 2a it can be seen that the fused image obtained by the method of the present invention has good contrast and color information. From FIGS. 2b and 2g it can be seen that the gsaverage and Raman09 methods give lower contrast in the sky region, and the stone region is too dark to show much detail texture. From FIG. 2c it can be seen that the color of the fused image obtained by the Gu12 method is obviously distorted: the whole image is grayish and completely different from the actual colors. From FIG. 2d it can be seen that the Li12 method shows good color and contrast in the sky region, but the color of the stone region is distorted. From FIG. 2e it can be seen that the Li13 method achieves good global contrast, but produces halo artifacts around the house. From FIG. 2f it can be seen that the fused image obtained by the lsaverage method is the worst, with severe distortion of detail texture over the whole image. From FIG. 2h it can be seen that the Vonikakis11 method retains good texture details in lighter areas, but the texture details in the darker areas of the stone are lost.
The method of the present invention and the gsaverage, Gu12, Li12, Li13, lsaverage, Raman09 and Vonikakis11 methods are used to fuse the multi-exposure image sequence "Madison"; the fused images obtained by the eight methods are shown in FIGS. 3a to 3h. As can be seen from FIG. 3a, the fused image obtained by the method of the present invention has good global contrast, and the portrait pillar retains rich texture details. From FIGS. 3g and 3h it can be seen that the fused images obtained by the Raman09 and Vonikakis11 methods are darker in overall tone and do not effectively show the texture of dark areas. From FIG. 3b it can be seen that the gsaverage method does not show the portrait clearly, and the pillars and windows are darker in tone. From FIG. 3c it can be seen that the Gu12 method, although it shows the contour texture well, suffers severe color distortion, the fused image being generally grayish. From FIG. 3d it can be seen that the Li12 method retains good brightness information, but some regions are too bright to show fine texture. From FIG. 3f it can be seen that the lsaverage method still suffers severe texture distortion. From FIG. 3e it can be seen that the Li13 method maintains good brightness, but the global contrast is not high enough.
2) Objective evaluation
The multi-exposure image sequences "Balloons", "BelgiumHouse", "Cadik", "Candle", "Cave", "House", "Kluki", "Lamp", "LightHouse" and "Madison" are each fused using the method of the present invention and the gsaverage, Gu12, Li12, Li13, lsaverage, Raman09 and Vonikakis11 methods.
Q^{AB/F}, set forth in C. S. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electron. Lett., vol. 36, no. 4, pp. 308-309, Feb. 2000, is used here as the objective quality evaluation index. Q^{AB/F} is an objective evaluation index widely used to evaluate the quality of fused images, mainly by analyzing the edge information of the fused image; a larger Q^{AB/F} value indicates better fused image quality. Table 2 shows the Q^{AB/F} values of the fused images obtained using the different fusion methods, with the two largest values in each group shown in bold. As can be seen from Table 2, the method of the present invention performs similarly to the Li13 method and is significantly superior to the other methods.
TABLE 2 Q^{AB/F} values of the fused images obtained using the different fusion methods
The influence of the weight index k, the size of the brightness block (i.e. the size of the sliding window) and the step size of the sliding window is analyzed below.
1) Influence of the weighting index k
In the method of the present invention, k is set to 0.5 during the acquisition of the fused brightness channel image. FIG. 4 shows the influence of different values of k on Q^{AB/F} when the size of the brightness block is set to 11 × 11 and the step size of the sliding window is set to 2; in FIG. 4 the abscissa is the value of k and the ordinate is the average Q^{AB/F} over the ten sets of fused images. As can be seen from FIG. 4, as k increases the Q^{AB/F} value first rises and then falls, and Q^{AB/F} is maximal at k = 0.5.
2) Influence of the size of the luminance block, i.e. the size of the sliding window
In the method of the present invention, the size of the sliding window is set to 11 × 11 during the acquisition of the fused brightness channel image. FIG. 5 shows the influence of different brightness block sizes on Q^{AB/F} when the weight index is 0.5 and the step size of the sliding window is 2; in FIG. 5 the abscissa is the size of the brightness block and the ordinate is the average Q^{AB/F} over the ten sets of fused images. As can be seen from FIG. 5, the Q^{AB/F} value increases with the size of the brightness block: in the range from 3 to 8 pixels the Q^{AB/F} value changes considerably and the curve is relatively steep, while in the range from 8 to 12 pixels the change is small and the curve is relatively flat.
3) Effect of step size of sliding Window
In the method of the present invention, the step size of the sliding window is set to 2 during the acquisition of the fused brightness channel image. FIG. 6 shows the influence of the step size of the sliding window on Q^{AB/F} when the weight index is 0.5 and the size of the brightness block, i.e. the size of the sliding window, is 11 × 11; in FIG. 6 the abscissa is the step size of the sliding window and the ordinate is the average Q^{AB/F} over the ten sets of fused images. As can be seen from FIG. 6, the Q^{AB/F} values obtained with step sizes of 1 and 2 are essentially the same, and as the step size of the sliding window increases the Q^{AB/F} value generally tends to decrease.

Claims (3)

1. A multi-exposure image fusion method based on high-order singular value decomposition is characterized by comprising the following steps:
step 1: select D differently exposed images, each of width M and height N; then convert each exposure image from the RGB color space to the YCbCr color space to obtain the brightness channel image, the first chrominance channel image and the second chrominance channel image of each exposure image, and denote the brightness channel image, the first chrominance channel image and the second chrominance channel image of the d-th exposure image as Y_d, Cb_d and Cr_d; wherein D is a positive integer, D > 1, d is a positive integer with initial value 1, and 1 ≤ d ≤ D;
step 2: obtain the brightness channel image after fusing the brightness channel images of all the exposure images, the specific process being as follows:
step 2_1: slide a window of width w and height h over the brightness channel image of each exposure image with a step of r pixel points, dividing the brightness channel image of each exposure image into L brightness blocks, and denote the i-th brightness block in Y_d as B_{d,i}; record the number of times each pixel point in the brightness channel image of each exposure image is overlapped during the brightness block division; then stack the brightness blocks at the same position in the brightness channel images of all the exposure images into a tensor of size h × w × D, obtaining L tensors in total, each tensor corresponding to D brightness blocks, and denote the tensor composed of the i-th brightness blocks in the brightness channel images of all the exposure images as A_i; wherein L = (⌊(M − w)/r⌋ + 1) × (⌊(N − h)/r⌋ + 1), r is a positive integer, 1 ≤ r ≤ min(w, h), min() is the minimum function, i is a positive integer with initial value 1, and 1 ≤ i ≤ L;
step 2_2: perform high-order singular value decomposition on each tensor; for A_i, the high-order singular value decomposition gives A_i = S_i ×_1 U_i ×_2 V_i ×_3 W_i; wherein S_i denotes the kernel tensor of A_i, U_i denotes the first-mode factor matrix of A_i, V_i denotes the second-mode factor matrix of A_i, W_i denotes the third-mode factor matrix of A_i, and the symbols "×_1", "×_2" and "×_3" denote the first-mode, second-mode and third-mode products of a tensor;
step 2_3: obtain the characteristic coefficient of each brightness block corresponding to each tensor, and denote the characteristic coefficient of the d-th brightness block corresponding to A_i as C_{d,i}, C_{d,i} = S_i ×_3 W_i(d,:); wherein the d-th brightness block corresponding to A_i is B_{d,i}, W_i(d,:) denotes the d-th row of W_i, and C_{d,i} is the coefficient matrix, with respect to the kernel tensor, of the d-th brightness block corresponding to A_i, i.e. of B_{d,i};
step 2_4: calculate the activity level measure of each brightness block corresponding to each tensor, and denote the activity level measure of the d-th brightness block corresponding to A_i as θ_{d,i}, θ_{d,i} = Σ_{m=1}^{h} Σ_{n=1}^{w} |C_{d,i}(m, n)|; wherein m is a positive integer with initial value 1, 1 ≤ m ≤ h, n is a positive integer with initial value 1, 1 ≤ n ≤ w, the symbol "| |" is the absolute value operator, and C_{d,i}(m, n) denotes the value of C_{d,i} at subscript (m, n);
step 2_5: obtain the fusion coefficient matrix of each tensor, and denote the fusion coefficient matrix of A_i as E_i, E_i = ( Σ_{d=1}^{D} (θ_{d,i})^k · C_{d,i} ) / ( Σ_{d=1}^{D} (θ_{d,i})^k ); wherein k is a weight index, k ∈ (0, 1];
step 2_6: calculate the fused brightness block corresponding to each tensor, and denote the fused brightness block corresponding to A_i as F_i, F_i = U_i × E_i × (V_i)^T; wherein (V_i)^T is the transpose of V_i;
step 2_7: from the L fused brightness blocks, obtain the overlapped brightness channel image formed by the L fused brightness blocks, denoted Y_out, and denote the pixel value of the pixel point with coordinate position (x, y) in Y_out as Y_out(x, y); then divide the superposed pixel value of each pixel point in Y_out by the number of times that pixel point is overlapped to obtain a brightness channel image, denoted Y_avg, and denote the pixel value of the pixel point with coordinate position (x, y) in Y_avg as Y_avg(x, y); wherein Y_out and Y_avg both have width M and height N, 1 ≤ x ≤ M, and 1 ≤ y ≤ N;
step 2_8: apply a linear transformation optimization to Y_avg to obtain the brightness channel image after fusing the brightness channel images of all the exposure images, denoted Y_F, and denote the pixel value of the pixel point with coordinate position (x, y) in Y_F as Y_F(x, y), Y_F(x, y) = 255 × (Y_avg(x, y) − Y_min) / (Y_max − Y_min); wherein Y_min denotes the minimum pixel value in Y_avg and Y_max denotes the maximum pixel value in Y_avg;
step 3: obtain the first chrominance channel image after fusing the first chrominance channel images of all the exposure images, the specific process being:
step 3_1: calculate the fusion coefficient of each pixel point in the first chrominance channel image of each exposure image, and denote the fusion coefficient of the pixel point with coordinate position (x, y) in Cb_d as ω_d(x, y), ω_d(x, y) = |Cb_d(x, y) − 128|; wherein 1 ≤ x ≤ M, 1 ≤ y ≤ N, the symbol "| |" is the absolute value operator, and Cb_d(x, y) denotes the pixel value of the pixel point with coordinate position (x, y) in Cb_d;
step 3_2: calculate the first chrominance channel image after fusing the first chrominance channel images of all the exposure images, denoted Cb_F, and denote the pixel value of the pixel point with coordinate position (x, y) in Cb_F as Cb_F(x, y), Cb_F(x, y) = ( Σ_{d=1}^{D} ω_d(x, y) · Cb_d(x, y) ) / ( Σ_{d=1}^{D} ω_d(x, y) );
And 4, step 4: acquiring a second chrominance channel image after the fusion of the second chrominance channel images of all the exposure images, wherein the specific process comprises the following steps:
step 4_ 1: calculating the fusion coefficient of each pixel point in the second color channel image of each exposure image, and converting Cr into CrdThe fusion coefficient of the pixel point with the middle coordinate position (x, y) is recorded as
Figure FDA0002826763960000035
Figure FDA0002826763960000036
Wherein, Crd(x, y) represents CrdThe middle coordinate position is the pixel value of the pixel point of (x, y);
step 4_ 2: calculating the second chroma channel image after the fusion of the second chroma channel images of all the exposure images, and recording as
Figure FDA0002826763960000037
Figure FDA0002826763960000038
step 5: convert the YCbCr color space image formed by Y_F, Cb_F and Cr_F from the YCbCr color space to the RGB color space to obtain the fused image of the multi-exposure images.
2. The multi-exposure image fusion method based on high-order singular value decomposition as claimed in claim 1, wherein in step 2_1, w = h = 11 and r = 2 are taken.
3. The multi-exposure image fusion method based on high-order singular value decomposition according to claim 1 or 2, wherein in step 2_5, k = 0.5 is taken.
CN201910396691.4A 2019-05-13 2019-05-13 Multi-exposure image fusion method based on high-order singular value decomposition Active CN110211077B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910396691.4A CN110211077B (en) 2019-05-13 2019-05-13 Multi-exposure image fusion method based on high-order singular value decomposition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910396691.4A CN110211077B (en) 2019-05-13 2019-05-13 Multi-exposure image fusion method based on high-order singular value decomposition

Publications (2)

Publication Number Publication Date
CN110211077A CN110211077A (en) 2019-09-06
CN110211077B (en) 2021-03-09

Family

ID=67787093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910396691.4A Active CN110211077B (en) 2019-05-13 2019-05-13 Multi-exposure image fusion method based on high-order singular value decomposition

Country Status (1)

Country Link
CN (1) CN110211077B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111105383B (en) * 2019-11-12 2023-04-21 杭州电子科技大学 Three-color vision-oriented image fusion color enhancement method
CN112562020B (en) * 2020-12-23 2024-06-07 绍兴图信物联科技有限公司 TIFF image and half-tone image format conversion method based on least square method
CN112837254B (en) * 2021-02-25 2024-06-11 普联技术有限公司 Image fusion method and device, terminal equipment and storage medium
CN116452437B (en) * 2023-03-20 2023-11-14 荣耀终端有限公司 High dynamic range image processing method and electronic equipment
CN117391985B (en) * 2023-12-11 2024-02-20 安徽数分智能科技有限公司 Multi-source data information fusion processing method and system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109636767A (en) * 2018-11-30 2019-04-16 深圳市华星光电半导体显示技术有限公司 More exposure image fusion methods

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104881854B (en) * 2015-05-20 2017-10-31 天津大学 High dynamic range images fusion method based on gradient and monochrome information
GB2544786A (en) * 2015-11-27 2017-05-31 Univ Of East Anglia Method and system for generating an output image from a plurality of corresponding input image channels
US10186023B2 (en) * 2016-01-25 2019-01-22 Qualcomm Incorporated Unified multi-image fusion approach
CN106373105B (en) * 2016-09-12 2020-03-24 广东顺德中山大学卡内基梅隆大学国际联合研究院 Multi-exposure image artifact removing fusion method based on low-rank matrix recovery
CN106875352B (en) * 2017-01-17 2019-08-30 北京大学深圳研究生院 A kind of enhancement method of low-illumination image
CN107370910B (en) * 2017-08-04 2019-09-24 西安邮电大学 Minimum surround based on optimal exposure exposes set acquisition methods

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109636767A (en) * 2018-11-30 2019-04-16 深圳市华星光电半导体显示技术有限公司 More exposure image fusion methods

Also Published As

Publication number Publication date
CN110211077A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN110211077B (en) Multi-exposure image fusion method based on high-order singular value decomposition
TWI704524B (en) Method and device for image polishing
CN102970549B (en) Image processing method and image processing device
RU2400815C2 (en) Method of enhancing digital image quality
Lee et al. A space-variant luminance map based color image enhancement
CN113129391B (en) Multi-exposure fusion method based on multi-exposure image feature distribution weight
CN106780417A (en) A kind of Enhancement Method and system of uneven illumination image
CN105809643B (en) A kind of image enchancing method based on adaptive block channel extrusion
CN110335221B (en) Multi-exposure image fusion method based on unsupervised learning
CN103295206B (en) A kind of twilight image Enhancement Method and device based on Retinex
CN108280836B (en) Image processing method and device
CN111260580A (en) Image denoising method based on image pyramid, computer device and computer readable storage medium
CN110009574B (en) Method for reversely generating high dynamic range image from low dynamic range image
CN112102166B (en) Combined super-resolution, color gamut expansion and inverse tone mapping method and equipment
CN113706393A (en) Video enhancement method, device, equipment and storage medium
CN115809966A (en) Low-illumination image enhancement method and system
CN112435184A (en) Haze sky image identification method based on Retinex and quaternion
Zhang et al. Multi-scale-based joint super-resolution and inverse tone-mapping with data synthesis for UHD HDR video
CN110545414B (en) Image sharpening method
Huang et al. Quaternion screened Poisson equation for low-light image enhancement
CN116630198A (en) Multi-scale fusion underwater image enhancement method combining self-adaptive gamma correction
CN116563133A (en) Low-illumination color image enhancement method based on simulated exposure and multi-scale fusion
Siddiqui et al. Hierarchical color correction for camera cell phone images
CN113256533B (en) Self-adaptive low-illumination image enhancement method and system based on MSRCR
CN105741248A (en) Method for removing hazed and degraded part in image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information
Inventor after: Li Li, Wu Shengcong, Luo Ting, Xu Haiyong, He Zhouyan, Zhang Junjun
Inventor before: Li Li, Luo Ting, Xu Haiyong, Wu Shengcong, He Zhouyan, Zhang Junjun