CN110211077A - Multi-exposure image fusion method based on high-order singular value decomposition - Google Patents

Multi-exposure image fusion method based on high-order singular value decomposition

Info

Publication number
CN110211077A
Authority
CN
China
Prior art keywords
image
exposure
brightness
fusion
channel image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910396691.4A
Other languages
Chinese (zh)
Other versions
CN110211077B (en)
Inventor
Li Li (李黎)
Luo Ting (骆挺)
Xu Haiyong (徐海勇)
Wu Shengcong (吴圣聪)
He Zhouyan (何周燕)
Zhang Junjun (张君君)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou University Of Electronic Science And Technology Shangyu Institute Of Science And Engineering Co Ltd
Original Assignee
Hangzhou University Of Electronic Science And Technology Shangyu Institute Of Science And Engineering Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou University Of Electronic Science And Technology Shangyu Institute Of Science And Engineering Co Ltd filed Critical Hangzhou University Of Electronic Science And Technology Shangyu Institute Of Science And Engineering Co Ltd
Priority to CN201910396691.4A priority Critical patent/CN110211077B/en
Publication of CN110211077A publication Critical patent/CN110211077A/en
Application granted granted Critical
Publication of CN110211077B publication Critical patent/CN110211077B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multi-exposure image fusion method based on high-order singular value decomposition. The luminance channel image of each exposure image is divided into overlapping luminance blocks, and high-order singular value decomposition is used to obtain the core tensor and third-mode factor matrix of each luminance block, from which the characteristic coefficients and activity level measures of the luminance blocks are obtained; fused luminance blocks are then obtained from the first- and second-mode factor matrices of the luminance blocks together with the characteristic coefficients and activity level measures, and a linear transformation is applied to the resulting luminance channel image to obtain the fused luminance channel image. The fused first chrominance channel image is obtained by calculating the fusion coefficients of the pixels in the first chrominance channel images of the exposure images, and the fused second chrominance channel image is obtained by calculating the fusion coefficients of the pixels in the second chrominance channel images. The fused image is obtained from the images of the three fused channels. The advantage is that good detail texture and rich color information are obtained.

Description

Multi-exposure image fusion method based on high-order singular value decomposition
Technical Field
The invention relates to an image fusion technology, in particular to a multi-exposure image fusion method based on high-order singular value decomposition.
Background
The process of combining information from two or more images of the same scene into one more informative image is called image fusion. Multi-exposure image fusion (MEF) is one of the classic applications of image fusion. Because of the dynamic range limitations of digital cameras, images of natural scenes typically have a greater dynamic range than images captured by such cameras. High dynamic range (HDR) imaging techniques estimate a camera response function (CRF) from a plurality of low dynamic range (LDR) images and then reconstruct a high dynamic range image using the inverse of the camera response function. Since most standard displays in current use are low dynamic range, after the high dynamic range image is acquired a tone mapping process is required to compress its dynamic range for display; however, the computational complexity of this process is high, and the quality of the high dynamic range image depends on the accuracy with which the camera response function is computed. Therefore, multi-exposure image fusion is an effective and convenient alternative to complex high dynamic range imaging techniques.
Multi-exposure image fusion fuses a series of differently exposed images to obtain a high-quality low dynamic range image without the need for camera response function recovery or tone mapping. In A. Goshtasby, "Fusion of multi-exposure images," Image and Vision Computing, vol. 23, pp. 611-618, 2005, a block-level fusion method is used in which the image is divided into uniform blocks and the best image block is selected for fusion; however, the contrast and saturation of the fused image are poor. In B. Gu, W. Li, J. Wong, M. Zhu, and M. Wang, "Gradient field multi-exposure images fusion for high dynamic range image visualization," J. Vis. Commun. Image Represent., vol. 23, no. 4, pp. 604-610, May 2012, it is proposed to modify the gradient field by an iterative method using two averaging filters and multi-scale nonlinear compression, obtaining the result by solving the Poisson equation and then linearly stretching it to the common range; however, this method is prone to artifacts. In S. Raman and S. Chaudhuri, "Bilateral filter based compositing for variable exposure photography," in Proc. Eurographics, 2009, pp. 1-4, an effective scene synthesis method using an edge-preserving filter, namely the bilateral filter, is proposed; because no constraint is imposed on global brightness uniformity, the color of the image fused by this method is easily distorted and the hue of the image is dark as a whole. In K. Ma, H. Li, Z. Wang, and D. Meng, "Robust multi-exposure image fusion: A structural patch decomposition approach," IEEE Trans. Image Process., vol. 26, no. 5, pp. 2519-2532, May 2017, it is proposed to fuse multi-exposure images using a structural patch decomposition method, but this method does not easily obtain texture information and its artifact removal is not satisfactory.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a multi-exposure image fusion method based on high-order singular value decomposition that processes luminance and chrominance separately during multi-exposure image fusion and can obtain good detail texture and rich color information.
The technical scheme adopted by the invention for solving the technical problems is as follows: a multi-exposure image fusion method based on high-order singular value decomposition is characterized by comprising the following steps:
Step 1: select D differently exposed images, each of width M and height N; then convert each exposure image from the RGB color space to the YCbCr color space to obtain the luminance channel image, first chrominance channel image, and second chrominance channel image of each exposure image, and denote the luminance channel image, first chrominance channel image, and second chrominance channel image of the d-th exposure image as Y_d, Cb_d, Cr_d respectively; where D is a positive integer, D > 1, d is a positive integer with initial value 1, and 1 ≤ d ≤ D;
Step 2: acquire the luminance channel image obtained by fusing the luminance channel images of all the exposure images, the specific process being as follows:
Step 2_1: slide a window of width w and height h over the luminance channel image of each exposure image with a step length of r pixels, dividing the luminance channel image of each exposure image into L luminance blocks, and denote the i-th luminance block in Y_d as B_{d,i}; record the number of times each pixel in the luminance channel image of each exposure image is overlapped during the block division; then stack the luminance blocks at the same position in the luminance channel images of all the exposure images into a tensor of size w × h × D, giving L tensors in total, each tensor corresponding to D luminance blocks, and denote the tensor composed of the i-th luminance blocks in the luminance channel images of all the exposure images as A_i; where w and h are positive integers, r is a positive integer with 1 ≤ r ≤ min(w, h), min() is the minimum function, i is a positive integer with initial value 1, and 1 ≤ i ≤ L;
Step 2_2: perform high-order singular value decomposition on each tensor; for A_i, the high-order singular value decomposition gives A_i = S_i ×_1 U_i ×_2 V_i ×_3 W_i; where S_i denotes the core tensor of A_i, U_i denotes the first-mode factor matrix of A_i, V_i denotes the second-mode factor matrix of A_i, W_i denotes the third-mode factor matrix of A_i, and the symbols "×_1", "×_2", and "×_3" denote the first-, second-, and third-mode products of a tensor;
Step 2_3: obtain the characteristic coefficient of each luminance block corresponding to each tensor; denote the characteristic coefficient of the d-th luminance block corresponding to A_i as C_i^d, where C_i^d is the d-th frontal slice of S_i ×_3 W_i, i.e. the core coefficient matrix of the d-th luminance block B_{d,i} corresponding to A_i;
Step 2_4: calculate the activity level measure of each luminance block corresponding to each tensor; denote the activity level measure of the d-th luminance block corresponding to A_i as η_i^d, η_i^d = Σ_{m=1}^{w} Σ_{n=1}^{h} |C_i^d(m, n)|; where m is a positive integer with initial value 1, 1 ≤ m ≤ w, n is a positive integer with initial value 1, 1 ≤ n ≤ h, the symbol "| |" is the absolute value symbol, and C_i^d(m, n) represents the value of C_i^d at subscript (m, n);
Step 2_5: obtain the fusion coefficient matrix of each tensor; denote the fusion coefficient matrix of A_i as E_i, E_i = Σ_{d=1}^{D} ((η_i^d)^k / Σ_{j=1}^{D} (η_i^j)^k) × C_i^d; where k is a weight index and k ∈ (0, 1];
Step 2_6: calculate the fused luminance block corresponding to each tensor; denote the fused luminance block corresponding to A_i as F_i, F_i = U_i × E_i × (V_i)^T; where (V_i)^T is the transpose of V_i;
Step 2_7: from the obtained L fused luminance blocks, assemble the overlapped luminance channel image, denoted Y_out, and denote the pixel value of the pixel at coordinate (x, y) in Y_out as Y_out(x, y); then divide the pixel value of each overlapped pixel in Y_out by the number of times that pixel was overlapped to obtain a luminance channel image, denoted Ȳ_out, and denote the pixel value of the pixel at coordinate (x, y) in Ȳ_out as Ȳ_out(x, y); where Y_out and Ȳ_out both have width M and height N, 1 ≤ x ≤ M, and 1 ≤ y ≤ N;
Step 2_8: apply a linear transformation optimization to Ȳ_out to obtain the luminance channel image after fusing the luminance channel images of all the exposure images, denoted Y_F, and denote the pixel value of the pixel at coordinate (x, y) in Y_F as Y_F(x, y), Y_F(x, y) = (Ȳ_out(x, y) − Y_min) / (Y_max − Y_min) × 255; where Y_min represents the minimum pixel value in Ȳ_out and Y_max represents the maximum pixel value in Ȳ_out;
and step 3: acquiring a first chrominance channel image obtained by fusing the first chrominance channel images of all the exposure images, wherein the specific process comprises the following steps:
Step 3_1: calculate the fusion coefficient of each pixel in the first chrominance channel image of each exposure image; denote the fusion coefficient of the pixel at coordinate (x, y) in Cb_d as w_d^{Cb}(x, y), w_d^{Cb}(x, y) = |Cb_d(x, y) − 128|; where 1 ≤ x ≤ M, 1 ≤ y ≤ N, the symbol "| |" is the absolute value symbol, and Cb_d(x, y) denotes the pixel value of the pixel at coordinate (x, y) in Cb_d;
Step 3_2: calculate the first chrominance channel image after fusing the first chrominance channel images of all the exposure images, denoted Cb_F, whose pixel value at coordinate (x, y) is Cb_F(x, y) = Σ_{d=1}^{D} (w_d^{Cb}(x, y) × Cb_d(x, y)) / Σ_{d=1}^{D} w_d^{Cb}(x, y);
Step 4: acquire the second chrominance channel image obtained by fusing the second chrominance channel images of all the exposure images, the specific process being as follows:
Step 4_1: calculate the fusion coefficient of each pixel in the second chrominance channel image of each exposure image; denote the fusion coefficient of the pixel at coordinate (x, y) in Cr_d as w_d^{Cr}(x, y), w_d^{Cr}(x, y) = |Cr_d(x, y) − 128|; where Cr_d(x, y) denotes the pixel value of the pixel at coordinate (x, y) in Cr_d;
Step 4_2: calculate the second chrominance channel image after fusing the second chrominance channel images of all the exposure images, denoted Cr_F, whose pixel value at coordinate (x, y) is Cr_F(x, y) = Σ_{d=1}^{D} (w_d^{Cr}(x, y) × Cr_d(x, y)) / Σ_{d=1}^{D} w_d^{Cr}(x, y);
Step 5: convert the image formed by Y_F, Cb_F, and Cr_F in the YCbCr color space from the YCbCr color space to the RGB color space to obtain the fused multi-exposure image.
In step 2_1, the window size is taken as w = h = 11 and r = 2.
In step 2_5, k = 0.5.
Compared with the prior art, the invention has the advantages that:
1) The method converts the RGB image into a YCbCr image and performs fusion separately on the luminance channel and the chrominance channels, which prevents changes in the luminance information from affecting the chrominance information, so the fused color image retains good texture detail and color information.
2) The method uses high-order singular value decomposition (HOSVD) in the fusion of the luminance channel; HOSVD is an efficient data decomposition technique that preserves the structural information of the data well.
Drawings
FIG. 1 is a block diagram of an overall implementation of the method of the present invention;
FIG. 2a is a fused image obtained by fusing a multi-exposure image sequence "LightHouse" using the method of the present invention;
fig. 2b is a fused image obtained by fusing a multi-exposure image sequence "LightHouse" by using a gsaverage method;
fig. 2c is a fused image obtained by fusing the multi-exposure image sequence "LightHouse" using the Gu12 method;
fig. 2d is a fused image obtained by fusing the multi-exposure image sequence "LightHouse" by the Li12 method;
fig. 2e is a fused image obtained by fusing the multi-exposure image sequence "LightHouse" by the Li13 method;
fig. 2f is a fused image obtained by fusing a multi-exposure image sequence "LightHouse" by using the lsaveage method;
fig. 2g is a fused image obtained by fusing a multi-exposure image sequence "LightHouse" using a Raman09 method;
fig. 2h is a fused image obtained by fusing the multi-exposure image sequence "LightHouse" using the Vonikakis11 method;
FIG. 3a is a fused image obtained by fusing a multi-exposure image sequence "Madison" using the method of the present invention;
FIG. 3b is a fused image obtained by fusing a multiple-exposure image sequence "Madison" using the gsaverage method;
FIG. 3c is a fused image obtained by fusing a sequence of multi-exposure images "Madison" using the Gu12 method;
FIG. 3d is a fused image obtained by fusing a multi-exposure image sequence "Madison" using the Li12 method;
FIG. 3e is a fused image obtained by fusing a multi-exposure image sequence "Madison" using the Li13 method;
fig. 3f is a fused image obtained by fusing a multi-exposure image sequence "Madison" by using the lsaverage method;
FIG. 3g is a fused image obtained by fusing a multi-exposure image sequence "Madison" using a Raman09 method;
FIG. 3h is a fused image obtained by fusing the multi-exposure image sequence "Madison" using the Vonikakis11 method;
FIG. 4 shows the influence of different values of k on Q^{AB/F} when the size of the sliding window, i.e. the size of the luminance block, is set to 11 × 11 and the step length of the sliding window is set to 2;
FIG. 5 shows the influence of different luminance block sizes on Q^{AB/F} when the weight index is 0.5 and the step length of the sliding window is 2;
FIG. 6 shows the influence of the step length of the sliding window on Q^{AB/F} when the weight index is 0.5 and the size of the luminance block is set to 11 × 11.
Detailed Description
The invention is described in further detail below with reference to the drawings and embodiments.
The invention provides a multi-exposure image fusion method based on high-order singular value decomposition, the overall implementation block diagram of which is shown in FIG. 1; the method comprises the following steps:
Step 1: select D differently exposed images, each of width M and height N; then convert each exposure image from the RGB color space to the YCbCr color space to obtain the luminance channel (Y) image, first chrominance channel (Cb) image, and second chrominance channel (Cr) image of each exposure image, and denote the luminance channel image, first chrominance channel image, and second chrominance channel image of the d-th exposure image as Y_d, Cb_d, Cr_d respectively; where D is a positive integer, D > 1 (for example, D = 100), d is a positive integer with initial value 1, and 1 ≤ d ≤ D.
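As a concrete illustration (not part of the patented method itself), step 1 can be sketched in Python using OpenCV for the color-space conversion; note that OpenCV returns the channels in Y, Cr, Cb order, and the paths argument is hypothetical:

```python
# Sketch of step 1: load D differently exposed images and split each
# into its Y, Cb, Cr channels. Assumes OpenCV is available.
import cv2
import numpy as np

def load_ycbcr_channels(paths):
    """Return lists of Y_d, Cb_d, Cr_d arrays, one per exposure image."""
    Y, Cb, Cr = [], [], []
    for p in paths:
        bgr = cv2.imread(p)                                   # OpenCV reads BGR
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float64)
        Y.append(ycrcb[:, :, 0])                              # luminance Y_d
        Cr.append(ycrcb[:, :, 1])                             # OpenCV order: Y, Cr, Cb
        Cb.append(ycrcb[:, :, 2])
    return Y, Cb, Cr
```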
Step 2: acquire the luminance channel image obtained by fusing the luminance channel images of all the exposure images, the specific process being as follows:
Step 2_1: slide a window of width w and height h over the luminance channel image of each exposure image with a step length of r pixels, dividing the luminance channel image of each exposure image into L luminance blocks, and denote the i-th luminance block in Y_d as B_{d,i}; overlapping occurs during the block division and different pixels are overlapped different numbers of times, so record the number of times each pixel in the luminance channel image of each exposure image is overlapped during the block division; then stack the luminance blocks at the same position in the luminance channel images of all the exposure images into a tensor of size w × h × D, giving L tensors in total, each tensor corresponding to D luminance blocks, and denote the tensor composed of the i-th luminance blocks in the luminance channel images of all the exposure images as A_i; where w and h are positive integers, a square window is generally taken, and in this embodiment w = h = 11; r is a positive integer with 1 ≤ r ≤ min(w, h), min() is the minimum function, and in this embodiment r = 2; i is a positive integer with initial value 1 and 1 ≤ i ≤ L.
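Assuming the square w = h = 11 window of this embodiment, the block division of step 2_1 can be sketched as follows; for simplicity the sketch ignores border pixels that no window position reaches when (M − w) or (N − h) is not a multiple of r:

```python
# Sketch of step 2_1: extract overlapping luminance blocks with stride r,
# record per-pixel overlap counts, and stack co-located blocks of all D
# exposures into w x h x D tensors A_i.
import numpy as np

def divide_into_tensors(Y_list, w=11, h=11, r=2):
    M, N = Y_list[0].shape
    counts = np.zeros((M, N))
    tensors, positions = [], []
    for x in range(0, M - w + 1, r):
        for y in range(0, N - h + 1, r):
            A_i = np.stack([Yd[x:x+w, y:y+h] for Yd in Y_list], axis=2)
            tensors.append(A_i)                # tensor of size w x h x D
            positions.append((x, y))
            counts[x:x+w, y:y+h] += 1          # overlap times of each pixel
    return tensors, positions, counts
```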
Step 2_2: perform high-order singular value decomposition on each tensor; for A_i, the high-order singular value decomposition gives A_i = S_i ×_1 U_i ×_2 V_i ×_3 W_i; where S_i denotes the core tensor of A_i, U_i denotes the first-mode factor matrix of A_i, V_i denotes the second-mode factor matrix of A_i, W_i denotes the third-mode factor matrix of A_i, and the symbols "×_1", "×_2", and "×_3" denote the first-, second-, and third-mode products of a tensor.
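The decomposition can be computed from the left singular vectors of the three mode unfoldings of A_i; a minimal sketch:

```python
# Sketch of step 2_2 (HOSVD): U_i, V_i, W_i are the left singular vectors
# of the mode-1, mode-2, mode-3 unfoldings of A_i, and the core tensor is
# S_i = A_i x1 U_i^T x2 V_i^T x3 W_i^T, so A_i = S_i x1 U_i x2 V_i x3 W_i.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    # mode-k product: contract the columns of M with axis `mode` of T
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd(A):
    U = np.linalg.svd(unfold(A, 0))[0]         # first-mode factor matrix U_i
    V = np.linalg.svd(unfold(A, 1))[0]         # second-mode factor matrix V_i
    W = np.linalg.svd(unfold(A, 2))[0]         # third-mode factor matrix W_i
    S = mode_mult(mode_mult(mode_mult(A, U.T, 0), V.T, 1), W.T, 2)
    return S, U, V, W
```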
Step 2_3: obtain the characteristic coefficient of each luminance block corresponding to each tensor; denote the characteristic coefficient of the d-th luminance block corresponding to A_i as C_i^d, where C_i^d is the d-th frontal slice of S_i ×_3 W_i, i.e. the core coefficient matrix of the d-th luminance block B_{d,i} corresponding to A_i, so that B_{d,i} can be expressed as B_{d,i} = U_i × C_i^d × (V_i)^T; where (V_i)^T is the transpose of V_i.
Step 2_4: calculate the activity level measure of each luminance block corresponding to each tensor; denote the activity level measure of the d-th luminance block corresponding to A_i as η_i^d, η_i^d = Σ_{m=1}^{w} Σ_{n=1}^{h} |C_i^d(m, n)|; where m is a positive integer with initial value 1, 1 ≤ m ≤ w, n is a positive integer with initial value 1, 1 ≤ n ≤ h, the symbol "| |" is the absolute value symbol, and C_i^d(m, n) represents the value of C_i^d at subscript (m, n).
Step 2_5: obtain the fusion coefficient matrix of each tensor; denote the fusion coefficient matrix of A_i as E_i, E_i = Σ_{d=1}^{D} ((η_i^d)^k / Σ_{j=1}^{D} (η_i^j)^k) × C_i^d; where k is a weight index and k ∈ (0, 1]; in this embodiment, k = 0.5.
Step 2_6: calculate the fused luminance block corresponding to each tensor; denote the fused luminance block corresponding to A_i as F_i, F_i = U_i × E_i × (V_i)^T; where (V_i)^T is the transpose of V_i.
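Steps 2_3 to 2_6 for a single tensor A_i can be sketched together. The formula images did not survive the conversion of this text, so the weighting below is one plausible reading: each characteristic coefficient C_i^d is weighted by its activity level raised to the weight index k, normalized over the D exposures (unfold, mode_mult, and hosvd are reused from the preceding sketch):

```python
# Sketch of steps 2_3 - 2_6: characteristic coefficients C_i^d are the
# frontal slices of S_i x3 W_i (so B_{d,i} = U_i C_i^d V_i^T), activity
# is the sum of absolute coefficients, and E_i is their weighted sum.
import numpy as np

def fuse_block(A, k=0.5):
    S, U, V, W = hosvd(A)                          # step 2_2
    C = mode_mult(S, W, 2)                         # C[:, :, d] is C_i^d (step 2_3)
    eta = np.abs(C).sum(axis=(0, 1))               # activity levels (step 2_4)
    weights = eta**k / ((eta**k).sum() + 1e-12)    # normalized, k in (0, 1]
    E = (C * weights).sum(axis=2)                  # fusion matrix E_i (step 2_5)
    return U @ E @ V.T                             # F_i = U_i E_i (V_i)^T (step 2_6)
```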
Step 2_7: from the obtained L fused luminance blocks, assemble the overlapped luminance channel image, denoted Y_out, and denote the pixel value of the pixel at coordinate (x, y) in Y_out as Y_out(x, y); then divide the pixel value of each overlapped pixel in Y_out by the number of times that pixel was overlapped to obtain a luminance channel image, denoted Ȳ_out, and denote the pixel value of the pixel at coordinate (x, y) in Ȳ_out as Ȳ_out(x, y); for example, if the pixel at coordinate (x, y) in Y_out was overlapped 3 times, i.e. it belongs to 3 luminance blocks whose pixel values there are 50, 40, and 80, then Ȳ_out(x, y) = (50 + 40 + 80) / 3 ≈ 56.7; where Y_out and Ȳ_out both have width M and height N, 1 ≤ x ≤ M, and 1 ≤ y ≤ N.
Step 2_8: to ensure that Ȳ_out occupies the entire range of the luminance channel and thus obtain a higher-contrast image, apply a linear transformation optimization to Ȳ_out to obtain the luminance channel image after fusing the luminance channel images of all the exposure images, denoted Y_F, and denote the pixel value of the pixel at coordinate (x, y) in Y_F as Y_F(x, y), Y_F(x, y) = (Ȳ_out(x, y) − Y_min) / (Y_max − Y_min) × 255; where Y_min represents the minimum pixel value in Ȳ_out and Y_max represents the maximum pixel value in Ȳ_out.
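Steps 2_7 and 2_8 can be sketched as follows, assuming 8-bit luminance so that the linear stretch maps Ȳ_out onto [0, 255]:

```python
# Sketch of steps 2_7 and 2_8: accumulate fused blocks into Y_out, divide
# by the recorded overlap counts, then linearly stretch to the full range.
import numpy as np

def assemble_luminance(fused_blocks, positions, counts, shape, w=11, h=11):
    Y_out = np.zeros(shape)
    for F, (x, y) in zip(fused_blocks, positions):
        Y_out[x:x+w, y:y+h] += F               # sum of overlapping fused blocks
    Y_avg = Y_out / np.maximum(counts, 1)      # divide by overlap times (step 2_7)
    Ymin, Ymax = Y_avg.min(), Y_avg.max()
    return (Y_avg - Ymin) / (Ymax - Ymin) * 255.0   # linear stretch (step 2_8)
```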
Step 3: acquire the first chrominance channel image obtained by fusing the first chrominance channel images of all the exposure images, the specific process being as follows:
Step 3_1: calculate the fusion coefficient of each pixel in the first chrominance channel image of each exposure image; denote the fusion coefficient of the pixel at coordinate (x, y) in Cb_d as w_d^{Cb}(x, y), w_d^{Cb}(x, y) = |Cb_d(x, y) − 128|; where 1 ≤ x ≤ M, 1 ≤ y ≤ N, the symbol "| |" is the absolute value symbol, and Cb_d(x, y) denotes the pixel value of the pixel at coordinate (x, y) in Cb_d; the closer the pixel value of a pixel in Cb_d is to 128, the less color information that pixel carries, and therefore the fusion coefficient of each pixel in Cb_d is determined by the absolute value of the difference between its pixel value and 128.
Step 3_2: calculate the first chrominance channel image after fusing the first chrominance channel images of all the exposure images, denoted Cb_F, whose pixel value at coordinate (x, y) is Cb_F(x, y) = Σ_{d=1}^{D} (w_d^{Cb}(x, y) × Cb_d(x, y)) / Σ_{d=1}^{D} w_d^{Cb}(x, y).
Step 4: acquire the second chrominance channel image obtained by fusing the second chrominance channel images of all the exposure images, the specific process being as follows:
Step 4_1: calculate the fusion coefficient of each pixel in the second chrominance channel image of each exposure image; denote the fusion coefficient of the pixel at coordinate (x, y) in Cr_d as w_d^{Cr}(x, y), w_d^{Cr}(x, y) = |Cr_d(x, y) − 128|; where Cr_d(x, y) denotes the pixel value of the pixel at coordinate (x, y) in Cr_d; the closer the pixel value of a pixel in Cr_d is to 128, the less color information that pixel carries, and therefore the fusion coefficient of each pixel in Cr_d is determined by the absolute value of the difference between its pixel value and 128.
Step 4_2: calculate the second chrominance channel image after fusing the second chrominance channel images of all the exposure images, denoted Cr_F, whose pixel value at coordinate (x, y) is Cr_F(x, y) = Σ_{d=1}^{D} (w_d^{Cr}(x, y) × Cr_d(x, y)) / Σ_{d=1}^{D} w_d^{Cr}(x, y).
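Steps 3 and 4 apply the same per-pixel rule to Cb and Cr, so a single sketch covers both chrominance channels; the normalization and the fallback to the neutral value 128 where every exposure is neutral are assumptions, since the coefficient formula images are missing from this text:

```python
# Sketch of steps 3 and 4: per-pixel weights |value - 128|, normalized
# weighted average across the D exposures, neutral fallback of 128.
import numpy as np

def fuse_chroma(channel_list):
    stack = np.stack(channel_list, axis=2)          # shape (M, N, D)
    w = np.abs(stack - 128.0)                       # fusion coefficients
    wsum = w.sum(axis=2)
    fused = (w * stack).sum(axis=2) / np.maximum(wsum, 1e-12)
    fused[wsum == 0] = 128.0                        # all exposures neutral here
    return fused
```

With these pieces, the whole pipeline amounts to fuse_block over every tensor from divide_into_tensors, assemble_luminance for Y, fuse_chroma over the Cb and Cr lists, and a final YCbCr-to-RGB conversion.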
Step 5: convert the image formed by Y_F, Cb_F, and Cr_F in the YCbCr color space from the YCbCr color space to the RGB color space to obtain the fused multi-exposure image.
To further illustrate the feasibility and effectiveness of the method of the present invention, the following experiments were conducted.
Ten multi-exposure image sequences of different scenes with high contrast and rich detail are used: Balloons, BelgiumHouse, Cadik, Candle, Cave, House, Kluki, Lamp, LightHouse, and Madison. Table 1 shows the information for each multi-exposure image sequence, including its name, spatial resolution, and number of exposure images.
TABLE 1 Multi-Exposure image sequences
Multi-exposure image sequence    Size (width × height × number of exposure images)
Balloons 512×339×9
BelgiumHouse 512×384×9
Cadik 512×384×15
Candle 512×364×3
Cave 512×384×3
House 512×340×4
Kluki 512×341×3
Lamp 512×342×6
LightHouse 512×340×3
Madison 512×384×30
Seven classical multi-exposure image fusion algorithms are selected for comparison with the method of the present invention to verify its feasibility and effectiveness. The seven algorithms are: global averaging, abbreviated gsaverage; the method proposed in B. Gu, W. Li, J. Wong, M. Zhu, and M. Wang, "Gradient field multi-exposure images fusion for high dynamic range image visualization," J. Vis. Commun. Image Represent., vol. 23, no. 4, pp. 604-610, May 2012, abbreviated Gu12; the method proposed in Z. G. Li, J. H. Zheng, and S. Rahardja, "Detail-enhanced exposure fusion," IEEE Trans. Image Process., vol. 21, no. 11, pp. 4672-4676, 2012, abbreviated Li12; the method abbreviated Li13; local averaging, abbreviated lsaverage; the method proposed in S. Raman and S. Chaudhuri, "Bilateral filter based compositing for variable exposure photography," in Proc. Eurographics, 2009, pp. 1-4, abbreviated Raman09; and the method proposed in V. Vonikakis, O. Bouzos, and I. Andreadis, "Multi-exposure image fusion based on illumination estimation," in Proc. Signal and Image Processing Applications, Heraklion, Crete, Greece, 2011, pp. 135-142, abbreviated Vonikakis11.
1) Subjective evaluation
The method of the present invention, the gsaverage method, the Gu12 method, the Li12 method, the Li13 method, the lsaverage method, the Raman09 method, and the Vonikakis11 method were each used to fuse the multi-exposure image sequence "LightHouse"; the fused images obtained by the eight methods are shown correspondingly in FIGS. 2a to 2h. From FIG. 2a it can be seen that the fused image obtained by the method of the present invention has good contrast and color information. From FIGS. 2b and 2g it can be seen that the gsaverage method and the Raman09 method give lower contrast in the sky, and the stone region is darker and cannot display much detailed texture. From FIG. 2c it can be seen that the color of the whole fused image obtained by the Gu12 method is obviously distorted: the fused image is grayish, completely different from the actual colors. From FIG. 2d it can be seen that the Li12 method shows good color and contrast in the sky, but the color of the stone region is distorted. From FIG. 2e it can be seen that the Li13 method achieves good global contrast, but it produces halo artifacts around the house. From FIG. 2f it can be seen that the fused image obtained by the lsaverage method is the worst, with severe distortion of detail texture throughout. From FIG. 2h it can be seen that the Vonikakis11 method retains good texture details in lighter areas, but the texture details in the darker areas of the stone are lost.
The method of the present invention, the gsaverage method, the Gu12 method, the Li12 method, the Li13 method, the lsaverage method, the Raman09 method, and the Vonikakis11 method were each used to fuse the multi-exposure image sequence "Madison"; the fused images obtained by the eight methods are shown correspondingly in FIGS. 3a to 3h. As can be seen from FIG. 3a, the fused image obtained by the method of the present invention has good global contrast, and the portrait pillar retains rich texture details. From FIGS. 3g and 3h it can be seen that the fused images obtained by the Raman09 method and the Vonikakis11 method are darker in overall hue and do not effectively show the texture of dark areas. From FIG. 3b it can be seen that the gsaverage method does not show the portrait clearly, and the pillars and windows are darker in tint. From FIG. 3c it can be seen that the Gu12 method, although it shows the contour texture well, suffers severe color distortion, the fused image being generally grayish. From FIG. 3d it can be seen that the Li12 method retains good luminance information, but some regions are too bright to show fine texture. From FIG. 3f it can be seen that the lsaverage method still suffers severe texture distortion. From FIG. 3e it can be seen that the Li13 method maintains good brightness, but the global contrast is not high enough.
2) Objective evaluation
The multi-exposure image sequences "Balloons", "BelgiumHouse", "Cadik", "Candle", "Cave", "House", "Kluki", "Lamp", "LightHouse", and "Madison" were each fused using the method of the present invention, the gsaverage method, the Gu12 method, the Li12 method, the Li13 method, the lsaverage method, the Raman09 method, and the Vonikakis11 method.
The Q^{AB/F} metric, set forth in C. S. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electron. Lett., vol. 36, no. 4, pp. 308-309, Feb. 2000, is used here as the objective quality evaluation index. Q^{AB/F} is an objective evaluation index widely used to evaluate the quality of fused images; it mainly analyzes the edge information of the fused image, and a larger Q^{AB/F} value represents better fused image quality. Table 2 shows the Q^{AB/F} values of the fused images obtained using the different fusion methods, with the largest two values in each group shown in bold. As can be seen from Table 2, the method of the present invention performs similarly to the Li13 method and is significantly superior to the other methods.
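For reference, the Q^{AB/F} computation can be sketched compactly as follows; the sigmoid constants are those published by Xydeas and Petrovic, while the clipping of the orientation term and the direct extension from two inputs to D inputs are simplifications of this sketch rather than part of the original definition:

```python
# Simplified sketch of Q^{AB/F}: per-pixel edge strength/orientation
# preservation between each source and the fused image, weighted by the
# source edge strength.
import numpy as np
from scipy.ndimage import sobel

def _edges(img):
    gx = sobel(img.astype(np.float64), axis=1)
    gy = sobel(img.astype(np.float64), axis=0)
    return np.hypot(gx, gy), np.arctan(gy / (gx + 1e-12))

def _preservation(gS, aS, gF, aF):
    G = np.where(gS > gF, gF / (gS + 1e-12),          # relative edge strength
                 np.where(gF > 1e-12, gS / (gF + 1e-12), 1.0))
    A = np.clip(1.0 - np.abs(aS - aF) / (np.pi / 2), 0.0, 1.0)
    Qg = 0.9994 / (1.0 + np.exp(-15.0 * (G - 0.5)))   # strength preservation
    Qa = 0.9879 / (1.0 + np.exp(-22.0 * (A - 0.8)))   # orientation preservation
    return Qg * Qa

def q_abf(sources, fused):
    gF, aF = _edges(fused)
    num = den = 0.0
    for S in sources:
        gS, aS = _edges(S)
        num += (_preservation(gS, aS, gF, aF) * gS).sum()
        den += gS.sum()
    return num / max(den, 1e-12)
```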
TABLE 2 Q^{AB/F} values of the fused images obtained using the different fusion methods
The influence of the weight index k, the size of the luminance block (i.e. the size of the sliding window), and the step length of the sliding window is analyzed below.
1) Influence of the weight index k
In the method of the present invention, k is set to 0.5 during the acquisition of the luminance channel fusion image. FIG. 4 shows the influence of different values of k on Q^{AB/F} when the size of the luminance block is set to 11 × 11 and the step length of the sliding window is set to 2; in FIG. 4 the abscissa represents the value of k and the ordinate represents the average Q^{AB/F} value over the ten sets of fused images. As can be seen from FIG. 4, as the value of k increases, the value of Q^{AB/F} first increases and then decreases, and Q^{AB/F} reaches its maximum when k = 0.5.
2) Influence of the size of the luminance block, i.e. the size of the sliding window
In the method of the present invention, the size of the sliding window is set to 11 × 11 during the acquisition of the luminance channel fusion image. FIG. 5 shows the influence of different luminance block sizes on Q^{AB/F} when the weight index is 0.5 and the step length of the sliding window is 2; in FIG. 5 the abscissa represents the size of the luminance block and the ordinate represents the average Q^{AB/F} value over the ten sets of fused images. As can be seen from FIG. 5, the value of Q^{AB/F} increases with the size of the luminance block: as the block size grows from 3 to 8 pixels the value of Q^{AB/F} changes considerably and the curve is relatively steep, while from 8 to 12 pixels the change is small and the curve is relatively flat.
3) Influence of the step length of the sliding window
In the method of the present invention, the step length of the sliding window is set to 2 during the acquisition of the luminance channel fusion image. FIG. 6 shows the influence of the step length of the sliding window on Q^{AB/F} when the weight index is 0.5 and the size of the luminance block, i.e. the size of the sliding window, is 11 × 11; the abscissa of FIG. 6 represents the step length of the sliding window and the ordinate represents the average Q^{AB/F} value over the ten sets of fused images. As can be seen from FIG. 6, the Q^{AB/F} values obtained with step lengths of 1 and 2 are essentially the same, and as the step length of the sliding window increases further, the value of Q^{AB/F} generally tends to decrease.

Claims (3)

1. A multi-exposure image fusion method based on high-order singular value decomposition is characterized by comprising the following steps:
Step 1: select D differently exposed images, each of width M and height N; then convert each exposure image from the RGB color space to the YCbCr color space to obtain the luminance channel image, first chrominance channel image, and second chrominance channel image of each exposure image, and denote the luminance channel image, first chrominance channel image, and second chrominance channel image of the d-th exposure image as Y_d, Cb_d, Cr_d respectively; where D is a positive integer, D > 1, d is a positive integer with initial value 1, and 1 ≤ d ≤ D;
Step 2: acquire the luminance channel image obtained by fusing the luminance channel images of all the exposure images, the specific process being as follows:
Step 2_1: slide a window of width w and height h over the luminance channel image of each exposure image with a step length of r pixels, dividing the luminance channel image of each exposure image into L luminance blocks, and denote the i-th luminance block in Y_d as B_{d,i}; record the number of times each pixel in the luminance channel image of each exposure image is overlapped during the block division; then stack the luminance blocks at the same position in the luminance channel images of all the exposure images into a tensor of size w × h × D, giving L tensors in total, each tensor corresponding to D luminance blocks, and denote the tensor composed of the i-th luminance blocks in the luminance channel images of all the exposure images as A_i; where w and h are positive integers, r is a positive integer with 1 ≤ r ≤ min(w, h), min() is the minimum function, i is a positive integer with initial value 1, and 1 ≤ i ≤ L;
Step 2_2: perform high-order singular value decomposition on each tensor; for A_i, the high-order singular value decomposition gives A_i = S_i ×_1 U_i ×_2 V_i ×_3 W_i; where S_i denotes the core tensor of A_i, U_i denotes the first-mode factor matrix of A_i, V_i denotes the second-mode factor matrix of A_i, W_i denotes the third-mode factor matrix of A_i, and the symbols "×_1", "×_2", and "×_3" denote the first-, second-, and third-mode products of a tensor;
Step 2_3: obtain the characteristic coefficient of each luminance block corresponding to each tensor; denote the characteristic coefficient of the d-th luminance block corresponding to A_i as C_i^d, where C_i^d is the d-th frontal slice of S_i ×_3 W_i, i.e. the core coefficient matrix of the d-th luminance block B_{d,i} corresponding to A_i;
Step 2_4: calculate the activity level measure of each luminance block corresponding to each tensor; denote the activity level measure of the d-th luminance block corresponding to A_i as η_i^d, η_i^d = Σ_{m=1}^{w} Σ_{n=1}^{h} |C_i^d(m, n)|; where m is a positive integer with initial value 1, 1 ≤ m ≤ w, n is a positive integer with initial value 1, 1 ≤ n ≤ h, the symbol "| |" is the absolute value symbol, and C_i^d(m, n) represents the value of C_i^d at subscript (m, n);
Step 2_5: obtain the fusion coefficient matrix of each tensor; denote the fusion coefficient matrix of A_i as E_i, E_i = Σ_{d=1}^{D} ((η_i^d)^k / Σ_{j=1}^{D} (η_i^j)^k) × C_i^d; where k is a weight index and k ∈ (0, 1];
Step 2_6: calculate the fused luminance block corresponding to each tensor; denote the fused luminance block corresponding to A_i as F_i, F_i = U_i × E_i × (V_i)^T; where (V_i)^T is the transpose of V_i;
Step 2_7: from the obtained L fused luminance blocks, assemble the overlapped luminance channel image, denoted Y_out, and denote the pixel value of the pixel at coordinate (x, y) in Y_out as Y_out(x, y); then divide the pixel value of each overlapped pixel in Y_out by the number of times that pixel was overlapped to obtain a luminance channel image, denoted Ȳ_out, and denote the pixel value of the pixel at coordinate (x, y) in Ȳ_out as Ȳ_out(x, y); where Y_out and Ȳ_out both have width M and height N, 1 ≤ x ≤ M, and 1 ≤ y ≤ N;
Step 2_8: apply a linear transformation optimization to Ȳ_out to obtain the luminance channel image after fusing the luminance channel images of all the exposure images, denoted Y_F, and denote the pixel value of the pixel at coordinate (x, y) in Y_F as Y_F(x, y), Y_F(x, y) = (Ȳ_out(x, y) − Y_min) / (Y_max − Y_min) × 255; where Y_min represents the minimum pixel value in Ȳ_out and Y_max represents the maximum pixel value in Ȳ_out;
and step 3: acquiring a first chrominance channel image obtained by fusing the first chrominance channel images of all the exposure images, wherein the specific process comprises the following steps:
Step 3_1: calculate the fusion coefficient of each pixel in the first chrominance channel image of each exposure image; denote the fusion coefficient of the pixel at coordinate (x, y) in Cb_d as w_d^{Cb}(x, y), w_d^{Cb}(x, y) = |Cb_d(x, y) − 128|; where 1 ≤ x ≤ M, 1 ≤ y ≤ N, the symbol "| |" is the absolute value symbol, and Cb_d(x, y) denotes the pixel value of the pixel at coordinate (x, y) in Cb_d;
Step 3_2: calculate the first chrominance channel image after fusing the first chrominance channel images of all the exposure images, denoted Cb_F, whose pixel value at coordinate (x, y) is Cb_F(x, y) = Σ_{d=1}^{D} (w_d^{Cb}(x, y) × Cb_d(x, y)) / Σ_{d=1}^{D} w_d^{Cb}(x, y);
Step 4: acquire the second chrominance channel image obtained by fusing the second chrominance channel images of all the exposure images, the specific process being as follows:
Step 4_1: calculate the fusion coefficient of each pixel in the second chrominance channel image of each exposure image; denote the fusion coefficient of the pixel at coordinate (x, y) in Cr_d as w_d^{Cr}(x, y), w_d^{Cr}(x, y) = |Cr_d(x, y) − 128|; where Cr_d(x, y) denotes the pixel value of the pixel at coordinate (x, y) in Cr_d;
Step 4_2: calculate the second chrominance channel image after fusing the second chrominance channel images of all the exposure images, denoted Cr_F, whose pixel value at coordinate (x, y) is Cr_F(x, y) = Σ_{d=1}^{D} (w_d^{Cr}(x, y) × Cr_d(x, y)) / Σ_{d=1}^{D} w_d^{Cr}(x, y);
Step 5: convert the image formed by Y_F, Cb_F, and Cr_F in the YCbCr color space from the YCbCr color space to the RGB color space to obtain the fused multi-exposure image.
2. The multi-exposure image fusion method based on high-order singular value decomposition according to claim 1, wherein in step 2_1 the window size is taken as w = h = 11 and r = 2.
3. The multi-exposure image fusion method based on high-order singular value decomposition according to claim 1 or 2, wherein in step 2_5 k is taken as 0.5.
CN201910396691.4A 2019-05-13 2019-05-13 Multi-exposure image fusion method based on high-order singular value decomposition Active CN110211077B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910396691.4A CN110211077B (en) 2019-05-13 2019-05-13 Multi-exposure image fusion method based on high-order singular value decomposition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910396691.4A CN110211077B (en) 2019-05-13 2019-05-13 Multi-exposure image fusion method based on high-order singular value decomposition

Publications (2)

Publication Number Publication Date
CN110211077A true CN110211077A (en) 2019-09-06
CN110211077B CN110211077B (en) 2021-03-09

Family

ID=67787093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910396691.4A Active CN110211077B (en) 2019-05-13 2019-05-13 Multi-exposure image fusion method based on high-order singular value decomposition

Country Status (1)

Country Link
CN (1) CN110211077B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111105383A (en) * 2019-11-12 2020-05-05 杭州电子科技大学 Image fusion color enhancement method for three-color vision
CN112562020A (en) * 2020-12-23 2021-03-26 绍兴图信物联科技有限公司 TIFF image and halftone image format conversion method based on least square method
CN112837254A (en) * 2021-02-25 2021-05-25 普联技术有限公司 Image fusion method and device, terminal equipment and storage medium
CN116452437A (en) * 2023-03-20 2023-07-18 荣耀终端有限公司 High dynamic range image processing method and electronic equipment
CN117391985A (en) * 2023-12-11 2024-01-12 安徽数分智能科技有限公司 Multi-source data information fusion processing method and system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104881854A (en) * 2015-05-20 2015-09-02 天津大学 High-dynamic-range image fusion method based on gradient and brightness information
CN106373105A (en) * 2016-09-12 2017-02-01 广东顺德中山大学卡内基梅隆大学国际联合研究院 Multi-exposure image deghosting integration method based on low-rank matrix recovery
CN106875352A (en) * 2017-01-17 2017-06-20 北京大学深圳研究生院 A kind of enhancement method of low-illumination image
US20170213330A1 (en) * 2016-01-25 2017-07-27 Qualcomm Incorporated Unified multi-image fusion approach
CN107370910A (en) * 2017-08-04 2017-11-21 西安邮电大学 Minimum surround based on optimal exposure exposes set acquisition methods
CN109074637A (en) * 2015-11-27 2018-12-21 斯佩特罗埃奇有限公司 For generating the method and system of output image from multiple corresponding input picture channels
CN109636767A (en) * 2018-11-30 2019-04-16 深圳市华星光电半导体显示技术有限公司 More exposure image fusion methods

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104881854A (en) * 2015-05-20 2015-09-02 天津大学 High-dynamic-range image fusion method based on gradient and brightness information
CN109074637A (en) * 2015-11-27 2018-12-21 斯佩特罗埃奇有限公司 For generating the method and system of output image from multiple corresponding input picture channels
US20170213330A1 (en) * 2016-01-25 2017-07-27 Qualcomm Incorporated Unified multi-image fusion approach
CN106373105A (en) * 2016-09-12 2017-02-01 广东顺德中山大学卡内基梅隆大学国际联合研究院 Multi-exposure image deghosting integration method based on low-rank matrix recovery
CN106875352A (en) * 2017-01-17 2017-06-20 北京大学深圳研究生院 A kind of enhancement method of low-illumination image
CN107370910A (en) * 2017-08-04 2017-11-21 西安邮电大学 Minimum surround based on optimal exposure exposes set acquisition methods
CN109636767A (en) * 2018-11-30 2019-04-16 深圳市华星光电半导体显示技术有限公司 More exposure image fusion methods

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
HUANG, Y.B. et al.: "The Speech multi features fusion perceptual hash algorithm based on tensor decomposition", IOP Conference Series: Materials Science and Engineering *
K. MA et al.: "Robust Multi-Exposure Image Fusion: A Structural Patch Decomposition Approach", IEEE Trans. Image Process. *
YANG, T.T. et al.: "Multi exposure image fusion algorithm based on YCbCr space", IOP Conference Series: Materials Science and Engineering *
QI Yubin et al.: "Multi-exposure image fusion based on tensor decomposition and convolutional sparse representation" (基于张量分解和卷积稀疏表示的多曝光图像融合), Opto-Electronic Engineering (光电工程) *
ZHU Xiongyong et al.: "HDR image deghosting fusion method based on coherency sensitive hashing block matching" (基于一致性敏感哈希块匹配的HDR图像去伪影融合方法), online at HTTPS://KNS.CNKI.NET/KCMS/DETAIL/11.2109.TP.20181113.1304.016.HTML *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111105383A (en) * 2019-11-12 2020-05-05 杭州电子科技大学 Image fusion color enhancement method for three-color vision
CN111105383B (en) * 2019-11-12 2023-04-21 杭州电子科技大学 Three-color vision-oriented image fusion color enhancement method
CN112562020A (en) * 2020-12-23 2021-03-26 绍兴图信物联科技有限公司 TIFF image and halftone image format conversion method based on least square method
CN112562020B (en) * 2020-12-23 2024-06-07 绍兴图信物联科技有限公司 TIFF image and half-tone image format conversion method based on least square method
CN112837254A (en) * 2021-02-25 2021-05-25 普联技术有限公司 Image fusion method and device, terminal equipment and storage medium
CN112837254B (en) * 2021-02-25 2024-06-11 普联技术有限公司 Image fusion method and device, terminal equipment and storage medium
CN116452437A (en) * 2023-03-20 2023-07-18 荣耀终端有限公司 High dynamic range image processing method and electronic equipment
CN116452437B (en) * 2023-03-20 2023-11-14 荣耀终端有限公司 High dynamic range image processing method and electronic equipment
CN117391985A (en) * 2023-12-11 2024-01-12 安徽数分智能科技有限公司 Multi-source data information fusion processing method and system
CN117391985B (en) * 2023-12-11 2024-02-20 安徽数分智能科技有限公司 Multi-source data information fusion processing method and system

Also Published As

Publication number Publication date
CN110211077B (en) 2021-03-09

Similar Documents

Publication Publication Date Title
CN110211077B (en) Multi-exposure image fusion method based on high-order singular value decomposition
CN110599415B (en) Image contrast enhancement implementation method based on local self-adaptive gamma correction
RU2400815C2 (en) Method of enhancing digital image quality
CN112435191B (en) Low-illumination image enhancement method based on fusion of multiple neural network structures
CN106780367B (en) HDR photo style transfer method dictionary-based learning
CN105809643B (en) A kind of image enchancing method based on adaptive block channel extrusion
CN113129391B (en) Multi-exposure fusion method based on multi-exposure image feature distribution weight
CN106780417A (en) A kind of Enhancement Method and system of uneven illumination image
CN110009574B (en) Method for reversely generating high dynamic range image from low dynamic range image
CN113822830B (en) Multi-exposure image fusion method based on depth perception enhancement
CN111260580A (en) Image denoising method based on image pyramid, computer device and computer readable storage medium
CN106454144B (en) A kind of bearing calibration of pair of Google glass image overexposure
CN113706393B (en) Video enhancement method, device, equipment and storage medium
Lv et al. Low-light image enhancement via deep Retinex decomposition and bilateral learning
CN115937024A (en) Multi-frame fusion low-illumination image enhancement method based on Retinex theory
Liu et al. Color enhancement using global parameters and local features learning
CN115809966A (en) Low-illumination image enhancement method and system
CN113256533B (en) Self-adaptive low-illumination image enhancement method and system based on MSRCR
Zhang et al. Multi-scale-based joint super-resolution and inverse tone-mapping with data synthesis for UHD HDR video
CN110545414B (en) Image sharpening method
CN115147311B (en) Image enhancement method based on HSV and AM-RetinexNet
CN116630198A (en) Multi-scale fusion underwater image enhancement method combining self-adaptive gamma correction
CN116563133A (en) Low-illumination color image enhancement method based on simulated exposure and multi-scale fusion
Siddiqui et al. Hierarchical color correction for camera cell phone images
CN113012079B (en) Low-brightness vehicle bottom image enhancement method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Li Li, Wu Shengcong, Luo Ting, Xu Haiyong, He Zhouyan, Zhang Junjun

Inventor before: Li Li, Luo Ting, Xu Haiyong, Wu Shengcong, He Zhouyan, Zhang Junjun

CB03 Change of inventor or designer information