CN115063331B - Multi-scale block LBP operator-based ghost-free multi-exposure image fusion method - Google Patents
Multi-scale block LBP operator-based ghost-free multi-exposure image fusion method
- Publication number: CN115063331B
- Application number: CN202210666439.2A
- Authority: CN (China)
- Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/70—Denoising; Smoothing
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
- G06T7/13—Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06V10/30—Noise filtering
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
- G06T2207/10024—Color image
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
- G06T2207/20048—Transform domain processing
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention provides a ghost-free multi-exposure image fusion method based on the multi-scale block LBP operator, in the technical field of image processing. For multi-exposure image sequences captured in dynamic scenes, multi-scale block LBP operators are used to extract local texture in bright and dark areas and to remove ghosting caused by moving targets. On this basis, a novel brightness self-adaptation method is further provided so that the fused image has better visibility. After the weight map is constructed, the discontinuous and noisy initial weight map is refined with a fast guided filter, and the final fusion adopts pyramid decomposition and reconstruction.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a multi-scale block LBP operator-based ghost-free multi-exposure image fusion method.
Background
At present, multi-exposure image fusion methods fall into two categories: hardware-based and software-based. Hardware-based methods use dedicated high-dynamic-range devices to directly acquire and display real scenes, but these devices tend to be expensive and not universally available. Compared with hardware-based methods, software-based methods are easy to implement, inexpensive, and suitable for common cameras. Existing software-based solutions fall into two main categories: HDR imaging techniques and multi-exposure image fusion (MEF). HDR imaging techniques use multiple low-dynamic-range images at different exposures to estimate the camera response function (CRF) and produce a high-dynamic-range image. The high-dynamic-range image is then compressed and converted into a low-dynamic-range image by tone mapping so that it can be visualized on a common display device. However, HDR imaging has high computational complexity, requires considerable time, and is not suitable for a common camera. The multi-exposure fusion method does not need to construct an HDR image: it extracts the pixels with larger information content, better exposure and higher image quality from the input multi-exposure low-dynamic-range images and then fuses them, and the resulting fused image can be displayed directly on common display equipment without further processing.
Compared with HDR imaging, multi-exposure image fusion has lower computational complexity and higher speed, so it is the first choice for a common camera. However, existing multi-exposure image fusion techniques have several defects: the information in the spatial neighborhood of a pixel is not fully considered, so the texture details of the image, especially the detail information of bright and dark areas, cannot be well preserved, and a halo phenomenon appears at image edges; the fused image cannot retain the characteristic information of the source image sequence, so image colors are distorted; and fused images in dynamic scenes are affected by ghost artifacts.
Disclosure of Invention
(I) Technical problems to be solved
Aiming at the defects of the prior art, the invention provides a ghost-free multi-exposure image fusion method based on the multi-scale block LBP operator. It solves the problems that existing methods do not fully consider the information in the spatial neighborhood of a pixel, cannot well preserve the texture details of images (particularly the detail information of bright and dark areas) and exhibit halos at image edges; that the fused image cannot retain the characteristic information of the source image sequence, so image colors are distorted; and that fused images in dynamic scenes are affected by ghost artifacts.
(II) technical scheme
In order to achieve the above purpose, the invention is realized by the following technical scheme: the multi-scale block LBP operator-based ghost-free multi-exposure image fusion method comprises the following steps:
extracting a contrast weight map W_i^C, a luminance weight map W_i^L and a spatial consistency weight map W_i^S;
Weight map estimation
The different weight maps are combined by pixel-wise multiplication, specifically calculated as follows:
W_i(x,y) = W_i^C(x,y) × W_i^L(x,y) × W_i^S(x,y)
where W_i^C is the contrast weight map, W_i^L is the luminance weight map, W_i^S is the spatial consistency weight map, and W_i is the combined initial weight map;
after the initial weight map is generated, it is normalized so that the sum of the weights at each pixel (x,y) is 1, calculated as follows:
W̄_i(x,y) = (W_i(x,y) + ε) / Σ_{j=1}^{K} (W_j(x,y) + ε)
where ε is a small positive number, K is the number of input images, and W_i is the initial weight map;
Weight map refinement
The initial weight map W_i is used simultaneously as the guide image and the input image, and is refined with a fast guided filter, specifically calculated as follows:
Ŵ_i(x,y) = FGF_{r,ep,s}(W_i(x,y), W_i(x,y))
where Ŵ_i denotes the refined weight map, FGF_{r,ep,s}(I, W) denotes the fast guided filtering operation, and r, ep and s are the parameters of the filter: r is the window radius of the filter, ep is its regularization parameter, and s is the sub-sampling rate; I and W denote the guide image and the image to be filtered, respectively;
the refined weight map Ŵ_i is normalized to obtain the normalized weight map, calculated as follows:
W_i(x,y) = (Ŵ_i(x,y) + ε) / Σ_{j=1}^{K} (Ŵ_j(x,y) + ε)
where ε is a small positive number, W_i(x,y) denotes the normalized weight map, K is the number of input images, and Ŵ_i denotes the refined weight map;
Image fusion
The source images are decomposed into Laplacian pyramids and the final weight maps into Gaussian pyramids, and the Laplacian pyramid of each source image is fused with the pyramid of the corresponding weight map level by level, as follows:
L{F}^l(x,y) = Σ_{i=1}^{K} G{W_i(x,y)}^l × L{I_i(x,y)}^l
where G{W_i(x,y)}^l denotes the Gaussian pyramid of the weight map, L{I_i(x,y)}^l denotes the Laplacian pyramid of the input image, L{F}^l is the fused Laplacian pyramid, and l denotes the pyramid level; finally, L{F}^l is reconstructed to obtain the final fused image.
Preferably, the contrast weight map W_i^C is extracted by the following steps:
the average brightness L(x,y) at pixel (x,y) of the normalized multi-exposure image sequence is calculated as follows:
L(x,y) = (1/K) Σ_{i=1}^{K} L_i(x,y)
where L(x,y) is the average brightness at pixel (x,y) of the normalized multi-exposure image sequence, L_i(x,y) denotes the luminance value of the pixel at position (x,y) of the i-th image in the input sequence, and K is the number of input images;
the average brightness at each pixel (x,y) is used to divide each image into a normally exposed area, a bright area and a dark area, specifically:
B_i(x,y) = Ī_i(x,y) where L(x,y) > 1 − α
D_i(x,y) = Ī_i(x,y) where L(x,y) < α
N_i(x,y) = Ī_i(x,y) where α ≤ L(x,y) ≤ 1 − α
where L(x,y) is the normalized average luminance of the multi-exposure image sequence at pixel (x,y); the average luminance at each pixel (x,y) determines the bright region B_i(x,y), dark region D_i(x,y) and normally exposed region N_i(x,y) of each image in the source sequence; Ī_i is the gray-scale image, α is a luminance threshold, and K is the number of input images;
for the normally exposed region of the source image, the Scharr operator is used to extract texture and edges, and the local contrast at each pixel (x,y) is calculated by convolution as follows:
where G_x and G_y represent the texture change in the horizontal and vertical directions, and N_i(x,y) denotes the normally exposed region of the i-th image in the input sequence;
then, the texture change weight of the normally exposed area is calculated from the convolution results, as follows:
where C_i^N(x,y) denotes the texture change weight map at pixel (x,y) of the normally exposed region of the i-th image in the input sequence, and G_x and G_y represent the texture change in the horizontal and vertical directions, respectively;
texture and edge extraction is performed on the bright and dark areas with the multi-scale block LBP operator, calculated as follows:
S_i(x,y) = MBLBP(IN_i(x,y))
where IN_i(x,y) denotes the bright and dark areas of the input image, MBLBP(·) is the multi-scale block LBP operator, and S_i(x,y) is the coded value at pixel (x,y), i.e. the LBP eigenvalue, which reflects the texture information of the center pixel (x,y) and its neighborhood;
a fast Laplacian filter is applied to S_i(x,y) to enhance its texture detail information while preserving the edge information, calculated as follows:
where C_i^BD(x,y) is the texture change weight map of the bright and dark areas after the texture detail information in S_i(x,y) has been enhanced by the fast Laplacian filter;
the two weight maps are combined to obtain the final contrast weight map, calculated as follows:
where W_i^C is the contrast weight map, C_i^N is the texture change weight map at pixels (x,y) of the normally exposed region of the i-th image in the input sequence, and C_i^BD is the texture change weight map of the bright and dark areas of the input sequence after enhancement by the fast Laplacian filter.
Preferably, the luminance weight map W_i^L is extracted by the following steps:
the luminance weight values of the red, green and blue channels are constructed from a combination of a Gaussian curve and a Cauchy curve, assigning higher luminance weights to pixels in well-exposed areas and lower luminance weights to pixels in the bright and dark areas of the image, calculated as follows:
where R_l denotes the luminance weight value of the red channel, G_l the luminance weight value of the green channel, and B_l the luminance weight value of the blue channel;
the luminance weight map is then extracted, calculated as follows:
where W_i^L denotes the luminance weight map.
Preferably, the adaptive function η(r_l, R) is calculated as follows:
where l_{r,i}(x,y) denotes the luminance value of the pixel in the red channel at position (x,y) of the i-th input image.
Preferably, the adaptive function η(g_l, G) is calculated as follows:
where l_{g,i}(x,y) denotes the luminance value of the pixel in the green channel at position (x,y) of the i-th input image.
Preferably, the adaptive function η(b_l, B) is calculated as follows:
where l_{b,i}(x,y) denotes the luminance value of the pixel in the blue channel at position (x,y) of the i-th input image.
Preferably, the spatial consistency weight map W_i^S is extracted by the following steps:
first, the LBP feature is calculated for each pixel of the source image sequence, as follows:
where T_i^r(x,y), T_i^g(x,y) and T_i^b(x,y) are computed from the pixel values of the i-th input image in the R, G and B channels at pixel (x,y), respectively;
for any two different images I_i(x,y) and I_j(x,y) (i ≠ j) in the sequence, the Euclidean distance between T_i(x,y) and T_j(x,y) at pixel (x,y) is computed to measure their local similarity in the R, G and B channels respectively, as follows:
the local similarity between the i-th and j-th images is then calculated as follows:
D_{i,j}(x,y) = d^r_{i,j}(x,y)² × d^g_{i,j}(x,y)² × d^b_{i,j}(x,y)²
the spatial consistency weight term for images in a moving scene is then constructed as follows, specifically calculated as:
where the standard deviation δ_d controls the influence of the local similarity d_{i,j}(x,y) on the weight;
finally, the weight map is refined by morphological operators to remove the influence of noise:
where s_1 and s_2 are the structuring elements for dilation and erosion, ⊕ is the dilation operation, and ⊖ is the erosion operation.
(III) beneficial effects
Compared with the prior art, the proposed ghost-free multi-exposure image fusion method based on the multi-scale block LBP operator has the following advantages. It fully preserves the texture details of the image and enhances the detail information of bright and dark areas; it retains the characteristics of the source image sequence to the greatest extent without losing the color information of the image; and it can process image sequences shot in dynamic scenes without being affected by ghost artifacts. The method innovatively proposes a partitioned texture-extraction scheme based on multi-scale block LBP for extracting image texture information; aiming at the brightness characteristics of the image, it innovatively proposes a novel brightness-adaptive method that gives the fused image better visibility; and for image sequences in dynamic scenes, it innovatively constructs a spatial consistency weight term based on multi-scale block LBP that effectively removes ghost artifacts from the fused image.
Drawings
FIG. 1 is a process diagram of the method of the present invention;
FIG. 2(a) shows an input multi-exposure image sequence in a static scene; (b) shows an image sequence in a dynamic scene;
FIG. 3 is a graph showing the result of processing an image sequence by the prior art method;
FIG. 4 is a graph showing the result of processing an image sequence according to the present method;
FIG. 5 is a graph showing the result of fusing images by the prior art method;
fig. 6 is a result of fusing images in the present method.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
1. Texture extraction
When fusing images of a static scene without moving objects, two image features should be considered: contrast and brightness. Local contrast is used to preserve important details such as textures and edges. When a multi-resolution method is used to fuse the multi-exposure images, it retains enough detail information in the normally exposed areas. However, because the texture detail of bright and dark areas is dominated by brightness, part of the detail information in those areas is lost. To solve this problem, a partitioned texture-detail extraction method based on multi-scale block LBP is innovatively proposed here, specifically calculated as follows:
where L(x,y) is the average brightness at pixel (x,y) of the normalized multi-exposure image sequence and L_i(x,y) denotes the luminance value of the pixel at position (x,y) of the i-th image in the input sequence. The bright region B_i(x,y), dark region D_i(x,y) and normally exposed region N_i(x,y) of each image in the source sequence are determined by computing the average luminance at each pixel (x,y). Here Ī_i is the gray-scale image, α is a luminance threshold, and K is the number of input images. L(x,y) is calculated as follows:
L(x,y) = (1/K) Σ_{i=1}^{K} L_i(x,y)
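The average-luminance partition described above can be sketched as follows. This is an illustrative sketch only, not the claimed implementation: the threshold value α and the exact piecewise reading of the partition rule (L > 1 − α bright, L < α dark, otherwise normal) are assumptions, and `partition_regions` is a hypothetical helper name.

```python
import numpy as np

def partition_regions(images, alpha=0.25):
    """Partition a normalized exposure stack into bright, dark and normally
    exposed region masks from the average luminance L(x, y).

    `alpha` is the luminance threshold (assumed value); the masks are shared
    across the stack because L(x, y) is averaged over all K images."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    L = stack.mean(axis=0)          # L(x,y) = (1/K) * sum_i L_i(x,y)
    bright = L > 1.0 - alpha        # bright region mask
    dark = L < alpha                # dark region mask
    normal = ~(bright | dark)       # normally exposed region mask
    return bright, dark, normal
```

The three masks partition the image, so every pixel later receives a texture weight from exactly one of the two extraction branches.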
for the normally exposed area of the image, the Scharr operator, which has higher accuracy, is used to extract texture and edges; the local contrast at each pixel is calculated as follows:
where G_x and G_y represent the texture change in the horizontal and vertical directions, respectively. The texture change weight of the normal region is then calculated from the convolution results, as follows:
where the result denotes the texture change weight at pixel (x,y) of the normally exposed region of the i-th image in the input sequence.
The texture and edges of the bright and dark areas of the image are extracted with the multi-scale block LBP operator. This operator is rotation-invariant and gray-invariant, is highly robust to illumination, and extracts the texture detail information of these areas well. It is calculated as follows:
S_i(x,y) = MBLBP(IN_i(x,y))
where IN_i(x,y) denotes the bright and dark areas of the input image and S_i(x,y) is the coded value at pixel (x,y), i.e. the LBP eigenvalue, which reflects the texture information of the center pixel (x,y) and its neighborhood. A fast Laplacian filter is then applied to S_i(x,y) to enhance its texture detail information while preserving the edge information, calculated as follows:
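A minimal multi-scale block LBP sketch follows. The block sizes, neighbor layout and comparison rule used here are assumptions (the patent's exact scales are not reproduced in this text); the sketch only shows the block-level generalization of plain LBP: compare the mean of the center block with the means of its eight neighboring blocks and pack the comparison bits into one code.

```python
import numpy as np

def mb_lbp(img, s=1):
    """MB-LBP sketch at one scale. Each 'block' is a (2s+1)x(2s+1) window;
    the code at (y, x) compares the centre block's mean with the means of
    the 8 neighbouring blocks, weighted 1, 2, ..., 128 like plain LBP.
    Blocks falling outside the image reuse edge pixels. A plain-loop
    sketch, not an optimised implementation."""
    H, W = img.shape
    k = 2 * s + 1                      # block side length at this scale
    pad = np.pad(np.asarray(img, dtype=float), 2 * k, mode='edge')

    def bmean(y, x):                   # mean of block centred at padded (y, x)
        return pad[y - s:y + s + 1, x - s:x + s + 1].mean()

    offsets = [(-k, -k), (-k, 0), (-k, k), (0, k),
               (k, k), (k, 0), (k, -k), (0, -k)]   # 8 neighbour blocks
    code = np.zeros((H, W), dtype=np.uint8)
    for y in range(H):
        for x in range(W):
            cy, cx = y + 2 * k, x + 2 * k
            c = bmean(cy, cx)
            code[y, x] = sum(1 << b for b, (dy, dx) in enumerate(offsets)
                             if bmean(cy + dy, cx + dx) >= c)
    return code
```

Averaging over blocks rather than single pixels is what makes the operator robust to noise in the under- and over-exposed regions it is applied to here.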
The final contrast weight map is then obtained by combining the two texture change weights, as follows:
2. Luminance extraction
When photographing with a conventional camera, some areas of the photograph appear dark (underexposed) and others appear bright (overexposed). Underexposure and overexposure cause serious loss of image information and degrade the visual quality of the image. The following brightness extraction method is therefore innovatively proposed here:
the method utilizes the combined curve of the Gaussian curve and the Cauchy curve to construct the brightness weight values of the red, green and blue channels, and distributes higher brightness weights to pixels in good exposure areas, and distributes lower brightness weights to pixels in bright and dark areas in the image. The calculation is as follows:
because some pixels in the input image sequence may be inherently in bright or dark areas, rather than the image being too bright or too dark due to overexposure or underexposure. Thus, in the method herein, a luminance adaptive function is used to adjust the weights of the pixels in the bright and dark areas of the RGB three-color channel, taking the luminance adaptive function η (rl, R) of the red channel as an example, namely:
l r,i (x, y) represents the luminance value of the pixel in the red channel at the i-th image position (x, y) in the input image. When the adaptive functions η (gl, G) in the green and blue channels are calculated in the same way as the red channel.
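The well-exposedness idea above can be sketched as follows. The patent's curve is a combination of a Gaussian and a Cauchy curve whose exact blend and parameters are not reproduced in this text, so this sketch uses the Gaussian component alone with assumed parameters, and `luminance_weight` is a hypothetical helper name.

```python
import numpy as np

def luminance_weight(channel, mu=0.5, sigma=0.2):
    """Gaussian well-exposedness weight sketch for one color channel:
    pixels near mid-grey (mu) receive weights near 1, while very bright or
    very dark pixels receive small weights. mu and sigma are assumed
    placeholder values."""
    c = np.asarray(channel, dtype=float)
    return np.exp(-((c - mu) ** 2) / (2.0 * sigma ** 2))
```

Applying such a curve per channel and combining the three results is the usual way a per-pixel luminance weight map is formed from RGB data.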
3. Moving object detection
When the input image sequence is captured in a dynamic scene, the influence of moving objects on the fused image must be considered; otherwise the final fused image suffers from ghost artifacts. To address this problem, a method of constructing spatial consistency weights based on multi-scale block LBP (MB-LBP) is innovatively proposed here. First, the LBP feature is calculated for each pixel of the source image sequence, as follows:
where T_i^r(x,y), T_i^g(x,y) and T_i^b(x,y) are computed from the pixel values of the i-th input image in the R, G and B channels at pixel (x,y), respectively. For any two different images I_i(x,y) and I_j(x,y) (i ≠ j) in the sequence, the Euclidean distance between T_i(x,y) and T_j(x,y) at pixel (x,y) is computed to measure their local similarity in the R, G and B channels respectively. The specific method is as follows:
the local similarity between the i-th and j-th images is calculated as follows:
D_{i,j}(x,y) = d^r_{i,j}(x,y)² × d^g_{i,j}(x,y)² × d^b_{i,j}(x,y)²
the spatial consistency weight term for images in a moving scene is then constructed in the following way, specifically calculated as:
where the standard deviation δ_d controls the influence of the local similarity d_{i,j}(x,y) on the weight and is set to 0.05 here. The design idea is as follows: if pixel (x,y) belongs to the motion region of image I_i, then the distance measure D_{i,j}(x,y) between I_i and every I_j (i ≠ j) at (x,y) increases (a larger D means less similar), so the spatial consistency weight W_i(x,y) at that pixel decreases, and with it the weight of image I_i at pixel (x,y).
Finally, the weight map is refined by morphological operators to remove the influence of noise:
where s_1 and s_2 are the structuring elements for dilation and erosion, ⊕ is the dilation operation, and ⊖ is the erosion operation.
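The D_{i,j}-to-weight mapping above can be sketched at one pixel as follows. The exponential decay form is an assumption (the weight formula's image is not reproduced in this text, only that δ_d = 0.05 controls the influence of the distance on the weight), and `spatial_consistency_weight` is a hypothetical helper name.

```python
import numpy as np

def spatial_consistency_weight(d_r, d_g, d_b, delta_d=0.05):
    """Given the per-channel LBP-feature distances between image i and
    another image j at one pixel, combine them as
    D = d_r^2 * d_g^2 * d_b^2 (the D_{i,j} formula above) and map D to a
    weight that decays as the images become less similar, using an assumed
    Gaussian decay with standard deviation delta_d = 0.05."""
    D = (d_r ** 2) * (d_g ** 2) * (d_b ** 2)
    return float(np.exp(-D / (2.0 * delta_d ** 2)))
```

Identical local neighborhoods give weight 1; a moving object raises the distances and drives the weight toward 0, which is exactly the de-ghosting behavior described in the design idea.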
4. Weight map estimation
The previous calculations yield three image features (local contrast, luminance and spatial consistency), and in this step the corresponding weight terms are combined to obtain the initial weight map. So that the proposed method extracts the highest-quality regions from the weight maps, the different weight maps are combined by pixel-wise multiplication, specifically calculated as follows:
W_i(x,y) = W_i^C(x,y) × W_i^L(x,y) × W_i^S(x,y)
After the initial weight map is generated, it must be normalized so that the weights at each pixel (x,y) sum to 1, calculated as follows:
W̄_i(x,y) = (W_i(x,y) + ε) / Σ_{j=1}^{K} (W_j(x,y) + ε)
where ε is a small positive number that avoids the situation where the denominator is zero.
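The combination and normalization steps above can be sketched together. The exact placement of ε in the normalization is an assumption (the text only requires a small positive number guarding the denominator), and `combine_and_normalize` is a hypothetical helper name.

```python
import numpy as np

def combine_and_normalize(w_contrast, w_luminance, w_spatial, eps=1e-12):
    """Combine the contrast, luminance and spatial-consistency weight maps
    by pixel-wise multiplication, then normalise so the K per-image weights
    sum to 1 at every pixel. Each argument is a (K, H, W) stack."""
    w = w_contrast * w_luminance * w_spatial        # initial weight maps
    return (w + eps) / (w + eps).sum(axis=0, keepdims=True)
```

Normalizing per pixel (axis 0 is the image index) is what later lets the Gaussian weight pyramids blend the Laplacian pyramids as a convex combination.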
5. Weight map refinement
The initial weight map generally contains noise and discontinuities, so it must be refined before the final fusion. The specific calculation is as follows:
where Ŵ_i denotes the refined weight map and FGF_{r,ep,s}(I, W) denotes the fast guided filtering operation; r, ep and s are the filter parameters: r is the window radius, ep is the regularization parameter, which controls the smoothness of the filter, and s is the sub-sampling rate; I and W denote the guide image and the image to be filtered, respectively. In this method, the initial weight map is used as both the guide image and the input image. Finally, the weight map is normalized to obtain the final weight map, calculated as follows:
6. Image fusion
In this method, the source images are decomposed into Laplacian pyramids and the weight maps into Gaussian pyramids, and the Gaussian pyramid and the Laplacian pyramid of each image are fused level by level as follows:
L{F}^l(x,y) = Σ_{i=1}^{K} G{W_i(x,y)}^l × L{I_i(x,y)}^l
where G{W_i(x,y)}^l denotes the Gaussian pyramid of the weight map, L{I_i(x,y)}^l denotes the Laplacian pyramid of the input image, L{F}^l is the fused Laplacian pyramid, and l denotes the pyramid level. Finally, L{F}^l is reconstructed to obtain the final fused image.
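The pyramid decomposition, per-level blending and reconstruction described above can be sketched as follows. The 2×2-mean down/upsampling used here is an assumed stand-in for the usual 5-tap Gaussian pyramid filtering, and the function names are hypothetical.

```python
import numpy as np

def downsample(img):
    """Decimate by averaging 2x2 blocks (stand-in for Gaussian blur+decimate)."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def upsample(img, shape):
    """Nearest-neighbour expansion back to `shape`."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def fuse_pyramids(images, weights, levels=3):
    """Blend K source images with K normalised weight maps: Laplacian
    pyramids of the images, Gaussian pyramids of the weights, multiply and
    sum per level (the L{F}^l formula above), then collapse."""
    ims = [np.asarray(i, dtype=float) for i in images]
    gws = [np.asarray(w, dtype=float) for w in weights]
    fused = []
    for l in range(levels):
        if l < levels - 1:
            coarser = [downsample(i) for i in ims]
            laps = [i - upsample(c, i.shape) for i, c in zip(ims, coarser)]
        else:
            coarser = None
            laps = ims                         # coarsest level keeps the residual
        fused.append(sum(w * lp for w, lp in zip(gws, laps)))
        if coarser is not None:
            ims = coarser
            gws = [downsample(w) for w in gws]
    out = fused[-1]
    for lvl in reversed(fused[:-1]):           # collapse the fused pyramid
        out = upsample(out, lvl.shape) + lvl
    return out
```

With identical inputs and weights summing to 1 at every pixel, the decomposition/collapse round-trip returns the input image, which is a useful sanity check on any pyramid implementation.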
And (3) quality evaluation:
Q^{AB/F}: an objective quality assessment index for fused images. It reflects the quality of the visual information obtained from the fusion of the input images and indicates the degree of preservation of edge detail information: the higher its value, the more edge detail information of the source image sequence the fused image preserves.
MEF-SSIM: this index measures the structural similarity between the input multi-exposure image sequence and the fused image. Its value ranges from 0 to 1; the higher the value, the higher the structural similarity between the result image and the source image sequence, i.e. the better the image quality. The MEF-SSIM adopted here is a full-reference evaluation index.
TABLE 1. MEF-SSIM test results
TABLE 2. Q^{AB/F} test results
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (7)
1. The multi-scale block LBP operator-based ghost-free multi-exposure image fusion method is characterized by comprising the following steps of:
extracting a contrast weight map W_i^C, a luminance weight map W_i^L and a spatial consistency weight map W_i^S;
Weight map estimation
The different weight maps are combined by pixel-wise multiplication, specifically calculated as follows:
W_i(x,y) = W_i^C(x,y) × W_i^L(x,y) × W_i^S(x,y)
where W_i^C is the contrast weight map, W_i^L is the luminance weight map, W_i^S is the spatial consistency weight map, and W_i is the combined initial weight map;
after the initial weight map is generated, it is normalized so that the sum of the weights at each pixel (x,y) is 1, calculated as follows:
W̄_i(x,y) = (W_i(x,y) + ε) / Σ_{j=1}^{K} (W_j(x,y) + ε)
where ε is a small positive number, K is the number of input images, and W_i is the initial weight map;
Weight map refinement
The initial weight map W_i is used simultaneously as the guide image and the input image, and is refined with a fast guided filter, specifically calculated as follows:
Ŵ_i(x,y) = FGF_{r,ep,s}(W_i(x,y), W_i(x,y))
where Ŵ_i denotes the refined weight map, FGF_{r,ep,s}(I, W) denotes the fast guided filtering operation, and r, ep and s are the parameters of the filter: r is the window radius of the filter, ep is its regularization parameter, and s is the sub-sampling rate; I and W denote the guide image and the image to be filtered, respectively;
The refined weight map W_i^r is then normalized to obtain the final weight map, calculated as follows:

W_i(x, y) = W_i^r(x, y) / (ε + Σ_{k=1}^{K} W_k^r(x, y))

wherein ε is a small positive number, W_i(x, y) represents the normalized weight map, K is the number of input images, and W_i^r represents the refined weight map;
Image fusion
The source images are decomposed into Laplacian pyramids and the final weight maps into Gaussian pyramids; at each level, the Laplacian pyramid of the source images is fused with the Gaussian pyramid of the corresponding weight maps, as follows:

L{F}_l = Σ_{i=1}^{K} G{W_i(x, y)}_l × L{I_i(x, y)}_l

wherein G{W_i(x, y)}_l denotes the Gaussian pyramid of the weight map, L{I_i(x, y)}_l denotes the Laplacian pyramid of the input image, L{F}_l is the fused Laplacian pyramid, and l denotes the pyramid level; finally, L{F}_l is reconstructed to obtain the final fused image.
2. The multi-scale block LBP operator-based ghost-free multi-exposure image fusion method according to claim 1, wherein extracting the contrast weight map comprises the following steps:
The normalized average luminance L(x, y) of pixel (x, y) across the multi-exposure image sequence is calculated as follows:

L(x, y) = (1/K) Σ_{i=1}^{K} L_i(x, y)

wherein L(x, y) is the normalized average luminance at pixel (x, y), L_i(x, y) is the luminance value of the pixel at position (x, y) in the i-th image of the input sequence, and K is the number of input images;
The average luminance at pixel (x, y) is used to divide each image into a normally exposed region, a bright region and a dark region, as follows:

wherein L(x, y) is the normalized average luminance of pixel (x, y) across the multi-exposure sequence; the average luminance at each pixel (x, y) determines, for each image in the source sequence, the bright region B_i(x, y), the dark region D_i(x, y) and the normally exposed region N_i(x, y); the division is performed on the gray-scale image, α is a luminance threshold, and K is the number of input images;
For the normally exposed region of the source image, texture and edges are extracted with the Scharr operator, and the local contrast at each pixel (x, y) is calculated as follows:

wherein G_x and G_y represent the texture variation in the horizontal and vertical directions, and N_i(x, y) is the normally exposed region of the i-th image in the input sequence;
The texture change weight of the normally exposed region is then calculated from the convolution results, as follows:

wherein the result is the texture change weight map at pixel (x, y) of the normally exposed region of the i-th image in the input sequence, and G_x, G_y represent the texture variation in the horizontal and vertical directions, respectively;
Texture and edge extraction is performed on the bright and dark areas with the multi-scale block LBP operator, calculated as follows:

S_i(x, y) = MBLBP(IN_i(x, y))

wherein IN_i(x, y) denotes the bright and dark areas of the input image, MBLBP(·) is the multi-scale block LBP operator, and S_i(x, y) is the coded value at pixel (x, y);
A fast Laplacian filter is then applied to S_i(x, y) to enhance its texture detail information while preserving the edge information, as follows:

wherein the result is the texture change weight map of the bright and dark areas after the texture detail information in S_i(x, y) has been enhanced by the fast Laplacian filter;
The two weights are combined to obtain the final contrast weight map, as follows:

wherein the contrast weight map combines the texture change weight map at pixels (x, y) of the normally exposed region of the i-th image in the input sequence with the texture change weight map of its bright and dark areas after enhancement by the fast Laplacian filter.
3. The multi-scale block LBP operator-based ghost-free multi-exposure image fusion method according to claim 1, wherein extracting the luminance weight map comprises the following steps:
The luminance weight values of the red, green and blue channels are constructed from a curve combining a Gaussian curve and a Cauchy curve, so that pixels in well-exposed areas receive higher luminance weights and pixels in bright and dark areas receive lower ones, calculated as follows:

wherein R_l, G_l and B_l represent the luminance weight values of the red, green and blue channels, respectively, and η(rl, R), η(gl, G) and η(bl, B) are adaptive functions;
The luminance weight map is then extracted from the three channel weight values, calculated as follows:

wherein the result represents the luminance weight map.
4. A multi-scale block LBP operator-based ghost-free multi-exposure image fusion method according to claim 3, characterized in that: the adaptive function η (rl, R) is calculated as follows:
l_{r,i}(x, y) represents the luminance value of the pixel in the red channel at position (x, y) of the i-th input image.
5. A multi-scale block LBP operator-based ghost-free multi-exposure image fusion method according to claim 3, characterized in that: the adaptive function η (gl, G) is calculated as follows:
l_{g,i}(x, y) represents the luminance value of the pixel in the green channel at position (x, y) of the i-th input image.
6. A multi-scale block LBP operator-based ghost-free multi-exposure image fusion method according to claim 3, characterized in that: the adaptive function η (bl, B) is calculated as follows:
l_{b,i}(x, y) represents the luminance value of the pixel in the blue channel at position (x, y) of the i-th input image.
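The exact Gaussian and Cauchy curve forms of claims 3-6 are not reproduced in this text, so the sketch below uses one plausible parameterisation (a well-exposedness peak at 0.5 with assumed widths and mixing weight) purely to illustrate how a per-channel luminance weight of this shape behaves; none of the constants or the product combination rule come from the patent.

```python
import numpy as np

def well_exposedness(l, mu=0.5, sigma=0.2, gamma=0.2, mix=0.5):
    """Assumed Gaussian-Cauchy mixture: peaks at mid-tone luminance mu
    and decays toward under- and over-exposed values. l is in [0, 1]."""
    gauss = np.exp(-((l - mu) ** 2) / (2 * sigma**2))
    cauchy = 1.0 / (1.0 + ((l - mu) / gamma) ** 2)
    return mix * gauss + (1 - mix) * cauchy

def luminance_weight(rgb):
    """Per-pixel luminance weight as the product of the three channel
    weights (the combination in claim 3 is not spelled out; a product
    is assumed here)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return well_exposedness(r) * well_exposedness(g) * well_exposedness(b)
```

The mixture keeps the desired qualitative behaviour: mid-tone pixels score highest, while near-black and near-white pixels are down-weighted, with the Cauchy term giving heavier tails than a pure Gaussian.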
7. The multi-scale block LBP operator-based ghost-free multi-exposure image fusion method according to claim 1, wherein extracting the spatial consistency weight map comprises the following steps:
First, the LBP feature is calculated for each pixel of each image in the source sequence, as follows:

wherein the three quantities in the formula are the pixel values of the i-th input image in the R, G and B channels, respectively, at pixel (x, y);
For any two different images I_i(x, y) and I_j(x, y) (i ≠ j) in the sequence, the Euclidean distance between T_i(x, y) and T_j(x, y) at pixel (x, y) is computed to measure their local similarity in the R, G and B channels respectively, as follows:
The local similarity between the i-th and j-th images is then calculated as follows:

D_{i,j}(x, y) = d^r_{i,j}(x, y)^2 × d^g_{i,j}(x, y)^2 × d^b_{i,j}(x, y)^2
A spatial consistency weight term for images of a motion scene is then constructed, specifically calculated as follows:

wherein the standard deviation δ_d controls the influence of the local similarity d_{i,j}(x, y) on the weight;
Finally, the weight map is refined with morphological operators to remove the influence of noise:

wherein s_1 and s_2 are the structuring elements for dilation and erosion, ⊕ denotes the dilation operation and ⊖ denotes the erosion operation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210666439.2A CN115063331B (en) | 2022-06-14 | 2022-06-14 | Multi-scale block LBP operator-based ghost-free multi-exposure image fusion method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115063331A CN115063331A (en) | 2022-09-16 |
CN115063331B true CN115063331B (en) | 2024-04-12 |
Family
ID=83200284
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115063331B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115760663B (en) * | 2022-11-14 | 2023-09-22 | 辉羲智能科技(上海)有限公司 | Method for synthesizing high dynamic range image based on multi-frame multi-exposure low dynamic range image |
CN116630218B (en) * | 2023-07-02 | 2023-11-07 | 中国人民解放军战略支援部队航天工程大学 | Multi-exposure image fusion method based on edge-preserving smooth pyramid |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112819736A (en) * | 2021-01-13 | 2021-05-18 | 浙江理工大学 | Workpiece character image local detail enhancement fusion method based on multiple exposures |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11430094B2 (en) * | 2020-07-20 | 2022-08-30 | Samsung Electronics Co., Ltd. | Guided multi-exposure image fusion |
2022-06-14: CN application CN202210666439.2A filed; granted as patent CN115063331B (status: Active)
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||