CN105279746B - A multi-exposure image fusion method based on bilateral filtering - Google Patents

A multi-exposure image fusion method based on bilateral filtering

Info

Publication number
CN105279746B
CN105279746B (application CN201410240291.1A)
Authority
CN
China
Prior art keywords
image
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410240291.1A
Other languages
Chinese (zh)
Other versions
CN105279746A (en)
Inventor
郑喆坤
焦李成
夏增涛
马晶晶
马文萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201410240291.1A priority Critical patent/CN105279746B/en
Publication of CN105279746A publication Critical patent/CN105279746A/en
Application granted granted Critical
Publication of CN105279746B publication Critical patent/CN105279746B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of image processing and specifically discloses a multi-exposure image fusion method based on bilateral filtering. Its steps are: (1) input three static images of the same scene with different exposures; (2) decompose each input image into a detail layer and a base layer using a bilateral filter, then process the two layers separately; (3) for the base layer, estimate its weight with a Gaussian curve and update the weight with the image's saturation information; (4) use the updated weights to perform a preliminary fusion into an intermediate fused image; (5) for the detail layers, take the per-pixel maximum across the detail layers as the detail image; (6) combine the intermediate fused image and the detail image into the final fused image. The method has low computational complexity, preserves image detail well, and yields fused images whose colors look more natural. It can be widely applied in image enhancement and in cameras.

Description

A multi-exposure image fusion method based on bilateral filtering
Technical field
The invention belongs to the technical field of image processing, and specifically to a multi-exposure image fusion method based on bilateral filtering, which can fuse images of different exposures into an image with more complete detail and color information.
Background art
The range of natural light intensity is very large: from direct sunlight to starlight it spans about nine orders of magnitude (10⁻⁴ to 10⁵ cd/m²), whereas the dynamic range the human visual system can capture at one time covers only about five orders of brightness. Existing consumer cameras and displays can neither capture nor directly display such a large dynamic range, and increasing the displayable dynamic range through hardware improvements is prohibitively expensive. It has therefore been proposed to synthesize, in software, a single image with a high-dynamic-range effect from captured images of different exposures.
Image fusion combines several differently exposed images of the same scene into one image in which both detail and color information are salient, so that a better visual result is obtained for subsequent research or for viewers. In general, an image fusion system should meet three basic requirements: first, the fused image should retain all salient information as far as possible; second, the fused image should not introduce ghosting or inconsistent information; third, undesirable information should be suppressed as far as possible in the fused image. At present, the main high-dynamic-range imaging techniques fall into two classes: tone mapping and image fusion. Tone-mapping methods generally estimate the camera response function (CRF) and then generate a high-dynamic-range image from it, which is time-consuming and laborious. Image-fusion methods bypass the generation of a high-dynamic-range image and directly produce an image with a high-dynamic-range effect; their key step is estimating weights that mark the importance of each region in each source image. Such methods are widely used in the image fusion field.
Many efficient multi-exposure image fusion methods have been proposed. Patent No. 201010531828.1, "Multi-exposure image fusion method based on sub-band structure", discloses a method in which the input images are decomposed into a set of sub-band images by filters, the sub-band images are modified with weight maps, fused sub-band images are obtained from them, and the fused image is produced by a reconstruction procedure. However, this method is somewhat deficient in preserving color information. Patent No. 200710038605.X, "Multiple exposure image enhancing method", discloses a method that, given a group of differently exposed images, uses the camera response curve to select and stitch together the most information-rich image blocks, then removes blocking artifacts to obtain an image of large dynamic range. This kind of method needs the camera's response curve, which is an intrinsic attribute of the camera; it must be supplied by the camera vendor or estimated by computation, which is time-consuming.
Summary of the invention
In view of the above deficiencies in the prior art, the object of the present invention is to propose a multi-exposure image fusion method based on bilateral filtering and color information, which keeps color information while protecting the texture of the fused image, making the image's details clearer and its colors more natural.
The technical scheme realizing the object of the invention is: apply bilateral filtering to each differently exposed input image to obtain its base layer and detail layer; the detail layers yield the details with which the preliminary fused image will be improved; for the base layers, estimate an initial weight with a Gaussian curve and refine it with the images' saturation information; with these weights, preliminarily fuse the images into an intermediate fused image; finally, use the detail image obtained from the detail layers to enhance the intermediate fused image, giving the final fused image. The specific steps are as follows:
(1) Image layering:
Input the images I_under, I_normal, I_over of three different exposures, representing the under-exposed, normally exposed and over-exposed images respectively; apply bilateral filtering to each image to obtain its detail layer I_det and base layer I_base, that is: I = I_det + I_base, where I denotes an image of one of the exposures;
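Step (1) can be sketched with a naive windowed bilateral filter in numpy. This is an illustrative implementation, not the patent's: the window radius and the two sigmas are assumed values, and a production version would use an optimized filter.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_d=2.0, sigma_r=0.2):
    """Naive bilateral filter on a [0,1] grayscale image.
    radius/sigma_d/sigma_r are illustrative choices."""
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # spatial (G_sigma_d) kernel, fixed for the whole image
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2.0 * sigma_d ** 2))
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range (G_sigma_r) kernel, depends on intensity differences
            rng = np.exp(-((patch - img[i, j]) ** 2) / (2.0 * sigma_r ** 2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

def decompose(img):
    base = bilateral_filter(img)   # I_base: edge-preserving smoothed image
    detail = img - base            # I_det, so that I = I_det + I_base
    return base, detail
```

By construction the two layers sum back to the input exactly, which is the decomposition the patent relies on.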
(2) Initial weight estimation:
2a) First normalize each image so that pixel values lie in [0,1]. A pixel value closer to 0.5, i.e. the median, is considered closer to optimal exposure and is given a larger weight; the base layer I_base is processed with a Gaussian curve, W_i(t) = e^(−(t−μ)²/(2σ²)), where t is a pixel value and μ, σ are the mean and standard deviation of the Gaussian curve;
2b) Since the weight W_i(t) only expresses a relation within each image, while image fusion needs the proportional relation between images, normalize W_i(t): Ŵ_i(t) = W_i(t) / Σ_{j=1..N} W_j(t), obtaining each weight map's proportion relative to the others;
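Steps 2a) and 2b) amount to a per-pixel Gaussian scoring followed by a normalization across the N exposures. A minimal numpy sketch (μ = 0.5 follows the text; σ = 0.2 is an assumed value the patent does not fix):

```python
import numpy as np

def gaussian_weight(base, mu=0.5, sigma=0.2):
    # W_i(t) = exp(-(t - mu)^2 / (2 sigma^2)): pixels near mid-gray score highest
    return np.exp(-((base - mu) ** 2) / (2.0 * sigma ** 2))

def normalize_weights(weights):
    # \hat W_i = W_i / sum_j W_j, pixel-wise across the N exposures
    total = sum(weights)
    return [w / (total + 1e-12) for w in weights]
```

After normalization the N weight maps sum to one at every pixel, so the later weighted sum is a convex combination of the exposures.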
(3) Weight updating:
3a) The weight Ŵ_i is obtained from the base layer I_base, which has already been smoothed by the bilateral filtering, so to some extent it does not preserve details well; therefore the saturation component of the image is introduced to update Ŵ_i:
S_i = 1 − (3/(R+G+B))·min(R,G,B)
where S_i is the saturation component of the image and R, G, B are the red, green and blue channel values of the base layer I_base;
3b) After obtaining S_i, a threshold λ is defined to update Ŵ_i, giving: W̄_i = Ŵ_i · e^((S_i−λ)/λ);
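The saturation component and the exponential update of step (3) can be sketched as follows. The threshold λ = 0.5 is an assumed value; the patent leaves λ as a parameter.

```python
import numpy as np

def saturation(base_rgb):
    # S_i = 1 - 3*min(R,G,B)/(R+G+B)  (HSV-style saturation)
    mn = base_rgb.min(axis=-1)
    total = base_rgb.sum(axis=-1)
    return 1.0 - 3.0 * mn / (total + 1e-12)

def update_weight(w_hat, sat, lam=0.5):
    # \bar W_i = \hat W_i * exp((S_i - lam)/lam):
    # weights grow where S_i > lam, shrink where S_i < lam, stay put at S_i = lam
    return w_hat * np.exp((sat - lam) / lam)
```

A fully saturated pixel (e.g. pure red) gets S_i = 1 and its weight boosted; a gray pixel gets S_i = 0 and its weight damped, which is exactly the behavior described in step 3b).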
(4) Normalize the weight W̄_i as in step 2b) to obtain the preliminary fusion weight W_i; introduce recursive filtering to smooth the preliminary fusion weights, then fuse W_i with the original input images I_under, I_normal, I_over: I_middle = Σ_i I_i·W_i, obtaining the intermediate fused image I_middle;
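The intermediate fusion of step (4) is a per-pixel weighted sum after renormalization. In this sketch the edge-aware smoothing is an optional caller-supplied function (the patent specifies recursive filtering; any stand-in smoother may be passed for experimentation):

```python
import numpy as np

def intermediate_fusion(images, updated_weights, smooth=None):
    """I_middle = sum_i I_i * W_i.
    `updated_weights` are the \bar W_i from step (3); they are renormalized
    per pixel as in step 2b). `smooth`, if given, is applied to each weight
    map (the patent uses recursive filtering here)."""
    total = sum(updated_weights)
    ws = [w / (total + 1e-12) for w in updated_weights]
    if smooth is not None:
        ws = [smooth(w) for w in ws]
        total = sum(ws)                      # re-normalize after smoothing
        ws = [w / (total + 1e-12) for w in ws]
    return sum(im * w for im, w in zip(images, ws))
```

With equal weights this degenerates to the plain average of the exposures, which is a convenient sanity check.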
(5) Detail extraction:
Since fusing images by weighted averaging is inherently a smoothing strategy, the fused image loses some details; therefore the information of each image's detail layer I_det is used to synthesize a detail image I^D: I^D = I_x^det, with x = argmax_i(|I_i^det|),
where I_i^det is the detail layer of each image, i = 1, 2, …, N, and N is the number of differently exposed images;
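The per-pixel maximum-magnitude selection of step (5) can be written compactly with numpy:

```python
import numpy as np

def detail_image(details):
    # I^D[p] = I_x^det[p], x = argmax_i |I_i^det[p]|:
    # keep, at each pixel, the signed detail value of largest magnitude
    stack = np.stack(details)            # shape (N, H, W)
    idx = np.abs(stack).argmax(axis=0)   # winning exposure index per pixel
    return np.take_along_axis(stack, idx[None], axis=0)[0]
```

Note the selection keeps the signed value of the winner, not its absolute value, so both dark-side and bright-side details survive.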
(6) Combine the obtained I^D with I_middle to obtain the final fused image: I^F = I_middle · e^(α·I^D(i,j)),
where I_middle is the fused image obtained from the base layers, I^D(i,j) is the detail image obtained in step (5), and α is the weight factor controlling how strongly the detail layer is blended into I_middle.
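The multiplicative detail injection of step (6) is a one-liner; α is the user-chosen emphasis parameter and is not fixed by the patent:

```python
import numpy as np

def final_fusion(i_middle, i_detail, alpha=1.0):
    # I^F = I_middle * exp(alpha * I^D): positive detail brightens,
    # negative detail darkens, zero detail leaves I_middle unchanged
    return i_middle * np.exp(alpha * i_detail)
```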
The bilateral filtering used for image layering in step (1) is carried out as follows:
The image I of each exposure is smoothed by bilateral filtering, and the smoothed image serves as the base layer I_base; bilateral filtering smooths the image while preserving its edges and details. The bilateral filter is defined as:
BF[I]_p = (1/W_p) · Σ_{q∈S} G_σd(‖p−q‖) · G_σr(I_p − I_q) · I_q
where G_σd = e^(−(1/2)·(d(p,q)/σ_d)²) and G_σr = e^(−(1/2)·(δ(I(p),I(q))/σ_r)²);
BF[I]_p denotes the result of the bilateral filtering at pixel p;
S denotes the local neighborhood, and p and q denote pixel positions within it;
I_p and I_q denote the pixel values at positions p and q;
W_p denotes the normalizing weight combining the spatial and similarity terms over the local neighborhood;
G_σd and G_σr denote the spatial and similarity weights in the local neighborhood respectively; σ_d and σ_r are the standard deviations of the Gaussian functions, the former for the spatial term and the latter for the similarity term.
The recursive filtering used in step (4) to obtain the intermediate fused image I_middle is carried out as follows:
Recursive filtering is a real-time edge-preserving filter; the weight W_i is recursively filtered according to the source image and the weight itself, with the following formulas:
W_i = R(W_i, I_i)
J[k] = (1 − α^d)·I[k] + α^d·J[k−1]
where R denotes the recursive filtering operation, α ∈ [0,1] is the feedback coefficient, I[k] is the value of the k-th pixel of the weight map, J[k] is the corrected value of the k-th pixel, d is the distance between neighboring pixels in the input source image, and I_i is the pixel value of the source image.
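A single left-to-right pass of the recurrence above can be sketched as follows. The patent gives only the 1-D recurrence; the choice of d as 1 plus a gradient-scaled term is an assumption in the style of domain-transform filters, and the published recursive filter additionally alternates passes over both axes. σ_s and σ_r here are illustrative values.

```python
import numpy as np

def recursive_smooth(weight, src, sigma_s=20.0, sigma_r=0.3):
    """One horizontal pass of J[k] = (1 - a^d) I[k] + a^d J[k-1]."""
    a = np.exp(-np.sqrt(2.0) / sigma_s)   # feedback coefficient, a in [0,1]
    out = weight.astype(float).copy()
    h, w = out.shape
    for i in range(h):
        for k in range(1, w):
            # d grows with the source-image gradient, so a^d shrinks at edges
            # and smoothing stops there (assumed domain-transform-style distance)
            d = 1.0 + (sigma_s / sigma_r) * abs(float(src[i, k]) - float(src[i, k - 1]))
            ad = a ** d
            out[i, k] = (1.0 - ad) * out[i, k] + ad * out[i, k - 1]
    return out
```

On a constant weight map the recurrence is a fixed point, and on a flat source image it strongly averages a spiky weight map, which is the smoothing behavior the step needs.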
Compared with the prior art, the present invention has the following advantages:
1. The invention decomposes each original input image by bilateral filtering into a detail layer and a base layer and processes the two layers separately, with a different scheme for each layer: taking the per-pixel maximum in the detail layers, and updating the base-layer weights with the color saturation information. Details and color information are thereby better protected, the fused image looks more natural, and edges and textures are clearer;
2. The invention makes effective use of the images' color information when updating the weights, which better protects the color information of the fused image, so that the result looks more natural;
3. By using recursive filtering to smooth the weight maps, the invention avoids introducing noise or ghosting artifacts into the fused image, and improves its visual effect.
Simulation results show that the invention, combining bilateral filtering with the images' color information, obtains a detail image and an intermediate fused image with good color, and then synthesizes from them an image with good detail and color information; its computational complexity is low, and it is a robust image fusion method.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 shows a detail image obtained from differently exposed images with the present invention;
Fig. 3 is a group of differently exposed test images used in the simulation experiments of the invention;
Fig. 4 is the experimental result obtained by applying the method of the invention to Fig. 3;
Fig. 5 is another group of differently exposed test images used in the simulation experiments;
Fig. 6 is a comparison of fused images generated by existing methods and by the method of the invention.
Embodiment
With reference to Fig. 1, the implementation steps of the invention are as follows:
Step 1: image layering
Input the source images of different exposures and divide each image by bilateral filtering into two layers, a detail layer I_det and a base layer I_base, as follows:
1a) Each differently exposed image I_i is smoothed by bilateral filtering, and the smoothed image serves as the base layer I_base; bilateral filtering smooths the image while preserving edges and details, and is defined as:
BF[I]_p = (1/W_p) · Σ_{q∈S} G_σd(‖p−q‖) · G_σr(I_p − I_q) · I_q
where G_σd = e^(−(1/2)·(d(p,q)/σ_d)²) and G_σr = e^(−(1/2)·(δ(I(p),I(q))/σ_r)²);
BF[I]_p denotes the result of the bilateral filtering at pixel p;
S denotes the local neighborhood, and p and q denote pixel positions within it;
I_p and I_q denote the pixel values at positions p and q;
W_p denotes the normalizing weight combining the spatial and similarity terms over the local neighborhood;
G_σd and G_σr denote the spatial and similarity weights in the local neighborhood respectively; σ_d and σ_r are the standard deviations of the Gaussian functions, the former for the spatial term and the latter for the similarity term.
1b) Having obtained the base layer I_i^base, the detail layer of each image is obtained as the difference between the original image and its base layer, i.e.: I_i^det = I_i − I_i^base.
Step 2: initial weight estimation
2a) First normalize each image so that pixel values lie in [0,1]. Since a pixel value closer to 0.5, i.e. the median, can be considered closer to optimal exposure and deserving a larger weight, the base layer I_base is processed with a Gaussian curve to obtain W_i(t) for each image: W_i(t) = e^(−(t−μ)²/(2σ²));
2b) Since the weight W_i(t) only expresses a relation within each image, while image fusion needs the proportional relation between the weights of different images, normalize W_i(t): Ŵ_i(t) = W_i(t) / Σ_{j=1..N} W_j(t), obtaining each weight map's proportion relative to the others.
Step 3: weight updating
3a) The weight Ŵ_i is obtained from the base layer I_base, which has already been smoothed by the bilateral filtering, so to some extent it does not preserve details well; therefore the method introduces the saturation component of the image to update Ŵ_i:
S_i = 1 − (3/(R+G+B))·min(R,G,B)
where R, G, B are the red, green and blue channel values of the base layer I_base;
3b) After obtaining S_i, a threshold λ is defined to update Ŵ_i: when S_i exceeds λ the original weight Ŵ_i is increased, when S_i is below λ it is decreased, and when S_i equals λ it is unchanged. The change is made through an exponential function as follows: W̄_i = Ŵ_i · e^((S_i−λ)/λ).
Step 4: Having obtained the weight W̄_i, normalize it as in step 2b) to obtain the preliminary fusion weight W_i. Since the preliminary fusion weights may contain singular points that would introduce noise into the fused image, the method introduces recursive filtering to smooth the preliminary fusion weights; then W_i and the original input images I_under, I_normal, I_over are fused: I_middle = Σ_i I_i·W_i, giving the intermediate fused image I_middle. The recursive filtering used in obtaining I_middle is carried out as follows:
Recursive filtering is a real-time edge-preserving filter; the weight W_i is recursively filtered according to the source image and the weight itself, with the following formulas:
W_i = R(W_i, I_i)
J[k] = (1 − α^d)·I[k] + α^d·J[k−1]
where R denotes the recursive filtering operation, α ∈ [0,1] is the feedback coefficient, I[k] is the value of the k-th pixel of the weight map, J[k] is the corrected value of the k-th pixel, and d is the distance between neighboring pixels in the input source image.
Step 5: detail extraction
From the detail layers I_i^det obtained in step 1, the detail layers of the source images are compared and, at each pixel position, the value of largest magnitude is taken as the value of that position, thereby fusing the several detail-layer images into one detail image: I^D = I_x^det, with x = argmax_i(|I_i^det|), i = 1, 2, …, N.
Step 6: fusion of the details with the intermediate image
The obtained detail image I^D is combined with the intermediate fused image I_middle to obtain the final fused image; a parameter α is used to control how prominent the details are in the final fused image. The fusion formula is: I^F = I_middle · e^(α·I^D(i,j)).
The effect of the present invention can be further illustrated by the following simulation experiments:
1. Simulation conditions:
CPU: Intel(R) Core i5 processor, 2.40 GHz; memory: 2 GB; operating system: Windows 7; implemented with the OpenCV computer vision library.
The simulations use the groups of differently exposed images shown in Figs. 2, 3 and 5, where:
Figs. 2(a), 3(a) and 5(a) are under-exposed images;
Figs. 2(b), 3(b) and 5(b) are normally exposed images;
Figs. 2(c), 3(c) and 5(c) are over-exposed images;
2. Simulation content:
In the simulation experiments, the method of the invention is compared with three existing methods to compare the fusion results of the different methods.
Reference: W. Zhang and W. K. Cham, "Gradient-directed composition of multi-exposure images," Proc. IEEE Conference on Computer Vision and Pattern Recognition, pp. 530–536, 2010.
Reference: S. Li and X. Kang, "Fast multi-exposure image fusion with median filter and recursive filter," IEEE Trans. Consum. Electron., vol. 58, no. 2, pp. 626–632, 2012.
Reference: T. Mertens, J. Kautz, and F. V. Reeth, "Exposure fusion: a simple and practical alternative to high dynamic range photography," Computer Graphics Forum, vol. 28, no. 1, pp. 161–171, 2009.
Simulation 1 fuses the differently exposed images in Fig. 2 with the method of the invention, where:
Fig. 2(a) is an under-exposed image;
Fig. 2(b) is a normally exposed image;
Fig. 2(c) is an over-exposed image;
Fig. 2(d) is the image synthesizing the detail layers of the different exposures.
Fig. 2(d) shows that the detail layers obtained through bilateral filtering preserve the original details of the source images well and can effectively enhance the texture features of the fused image.
Simulation 2: Fig. 3 presents a group of differently exposed images whose color information is comparatively rich, so the preservation of color information in the fused image can be observed. Fig. 4 is the fusion of the group of images in Fig. 3 generated by the present method; the fused image preserves color information well. The bookshelf and the chair, for example, reproduce their original colors well, and the texture of the basket on the bookshelf is also well protected, so the image looks more natural.
Simulation 3: Fig. 5 gives a group of differently exposed images, and Fig. 6 compares a group of different methods, where:
Fig. 6(a) is the result after fusion by method 1;
Fig. 6(b) is the result after fusion by method 2;
Fig. 6(c) is the result after fusion by method 3;
Fig. 6(d) is the result after fusion by the present method.
Comparing Fig. 6(d) with Figs. 6(a), 6(b) and 6(c), the image obtained by the method of the invention has clear edges and more distinct details, and looks more natural. Compared with the other three fusion methods, the present method better protects details in some local regions, for example on the wall and the window; the floor also looks more natural, and the visual effect is better.
Parts of this embodiment not specifically described belong to common knowledge and well-known techniques in the art. The above examples merely illustrate the present invention and do not limit its scope of protection; every design identical or similar to that of the present invention falls within the scope of protection of the present invention.

Claims (3)

1. A multi-exposure image fusion method based on bilateral filtering, characterized in that it comprises the following steps:
(1) Image layering:
Input the images I_under, I_normal, I_over of three different exposures, representing the under-exposed, normally exposed and over-exposed images respectively; apply bilateral filtering to each image to obtain its detail layer I_det and base layer I_base, that is: I = I_det + I_base, where I denotes an image of one of the exposures;
(2) Initial weight estimation:
2a) First normalize each image so that pixel values lie in [0,1]. A pixel value closer to 0.5, i.e. the median, is considered closer to optimal exposure and is given a larger weight; the base layer I_base is processed with a Gaussian curve, W_i(t) = e^(−(t−μ)²/(2σ²)), where t is a pixel value and μ, σ are the mean and standard deviation of the Gaussian curve;
2b) Since the weight W_i(t) only expresses a relation within each image, while image fusion needs the proportional relation between images, normalize W_i(t): Ŵ_i(t) = W_i(t) / Σ_{j=1..N} W_j(t), obtaining each weight map's proportion relative to the others;
(3) Weight updating:
3a) The weight Ŵ_i is obtained from the base layer I_base, which has already been smoothed by the bilateral filtering, so to some extent it does not preserve details well; therefore the saturation component of the image is introduced to update Ŵ_i:
S_i = 1 − (3/(R+G+B))·min(R,G,B)
where S_i is the saturation component of the image and R, G, B are the red, green and blue channel values of the base layer I_base;
3b) After obtaining S_i, a threshold λ is defined to update Ŵ_i, giving:
W̄_i = Ŵ_i · e^((S_i−λ)/λ);
(4) Normalize the weight W̄_i as in step 2b) to obtain the preliminary fusion weight W_i; introduce recursive filtering to smooth the preliminary fusion weights, then fuse W_i with the original input images I_under, I_normal, I_over: I_middle = Σ_i I_i·W_i, obtaining the intermediate fused image I_middle;
(5) Detail extraction:
Since fusing images by weighted averaging is inherently a smoothing strategy, the fused image loses some details; therefore the information of each image's detail layer I_det is used to synthesize a detail image I^D:
I^D = I_x^det
x = argmax_i(|I_i^det|), i = 1, 2, …, N
where I_i^det is the detail layer of each image and N is the number of differently exposed images;
(6) Combine the obtained I^D with I_middle to obtain the final fused image:
I^F = I_middle · e^(α·I^D(i,j));
where I_middle is the fused image obtained from the base layers, I^D(i,j) is the detail image obtained in step (5), and α is the weight factor controlling how strongly the detail layer is blended into I_middle.
2. The multi-exposure image fusion method based on bilateral filtering according to claim 1, wherein the bilateral filtering used for image layering in step (1) is carried out as follows:
The image I of each exposure is smoothed by bilateral filtering, and the smoothed image serves as the base layer I_base; bilateral filtering smooths the image while preserving its edges and details. The bilateral filter is defined as follows:
BF[I]_p = (1/W_p) · Σ_{q∈S} G_σd(‖p−q‖) · G_σr(I_p − I_q) · I_q
where
G_σd = e^(−(1/2)·(d(p,q)/σ_d)²),
G_σr = e^(−(1/2)·(δ(I(p),I(q))/σ_r)²),
where BF[I]_p denotes the result of the bilateral filtering at pixel p;
S denotes the local neighborhood, and p and q denote pixel positions within it;
I_p and I_q denote the pixel values at positions p and q;
W_p denotes the normalizing weight combining the spatial and similarity terms over the local neighborhood;
G_σd and G_σr denote the spatial and similarity weights in the local neighborhood respectively; σ_d and σ_r are the standard deviations of the Gaussian functions, the former for the spatial term and the latter for the similarity term.
3. The multi-exposure image fusion method based on bilateral filtering according to claim 1, wherein the recursive filtering used in step (4) to obtain the intermediate fused image I_middle is carried out as follows:
Recursive filtering is a real-time edge-preserving filter; the weight W_i is recursively filtered according to the source image and the weight itself, with the following formulas:
W_i = R(W_i, I_i)
J[k] = (1 − α^d)·I[k] + α^d·J[k−1]
where R denotes the recursive filtering operation, α ∈ [0,1] is the feedback coefficient, I[k] is the value of the k-th pixel of the weight map, J[k] is the corrected value of the k-th pixel, d is the distance between neighboring pixels in the input source image, and I_i is the pixel value of the source image.
CN201410240291.1A 2014-05-30 2014-05-30 A multi-exposure image fusion method based on bilateral filtering Expired - Fee-Related CN105279746B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410240291.1A CN105279746B (en) 2014-05-30 2014-05-30 A multi-exposure image fusion method based on bilateral filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410240291.1A CN105279746B (en) 2014-05-30 2014-05-30 A multi-exposure image fusion method based on bilateral filtering

Publications (2)

Publication Number Publication Date
CN105279746A CN105279746A (en) 2016-01-27
CN105279746B true CN105279746B (en) 2018-01-26

Family

ID=55148699

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410240291.1A Expired - Fee-Related CN105279746B (en) 2014-05-30 2014-05-30 A multi-exposure image fusion method based on bilateral filtering

Country Status (1)

Country Link
CN (1) CN105279746B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654448B (en) * 2016-03-29 2018-11-27 微梦创科网络科技(中国)有限公司 A kind of image interfusion method and system based on bilateral filtering and weight reconstruction
CN105913408B (en) * 2016-04-12 2019-03-01 湘潭大学 A kind of multi-focus image fusing method based on two-sided filter
CN105931210B (en) * 2016-04-15 2019-10-15 中国航空工业集团公司洛阳电光设备研究所 A kind of high resolution image reconstruction method
CN105957030B (en) * 2016-04-26 2019-03-22 成都市晶林科技有限公司 One kind being applied to the enhancing of thermal infrared imager image detail and noise suppressing method
CN106296744A (en) * 2016-11-07 2017-01-04 湖南源信光电科技有限公司 A kind of combining adaptive model and the moving target detecting method of many shading attributes
CN106780420B (en) * 2016-12-08 2019-05-24 无锡赛默斐视科技有限公司 Color Image Fusion based on image wave filter
CN106846281A (en) * 2017-03-09 2017-06-13 广州四三九九信息科技有限公司 image beautification method and terminal device
CN107909560A (en) * 2017-09-22 2018-04-13 洛阳师范学院 A kind of multi-focus image fusing method and system based on SiR
CN108230282A (en) * 2017-11-24 2018-06-29 洛阳师范学院 A kind of multi-focus image fusing method and system based on AGF
CN107977941B (en) * 2017-12-04 2021-04-20 国网智能科技股份有限公司 Image defogging method for color fidelity and contrast enhancement of bright area
CN108171679B (en) * 2017-12-27 2022-07-22 合肥君正科技有限公司 Image fusion method, system and equipment
CN108550122B (en) * 2017-12-29 2021-10-29 西安电子科技大学 Image denoising method based on autonomous path block conduction filtering
CN108184075B (en) * 2018-01-17 2019-05-10 百度在线网络技术(北京)有限公司 Method and apparatus for generating image
CN108492245B (en) * 2018-02-06 2020-06-30 浙江大学 Low-luminosity image pair fusion method based on wavelet decomposition and bilateral filtering
CN108416163A (en) * 2018-03-23 2018-08-17 湖南城市学院 A method of three-dimensional panorama indoor design is generated based on number connection platform
CN108830798B (en) * 2018-04-23 2022-05-13 西安电子科技大学 Improved image denoising method based on propagation filter
CN108876740B (en) * 2018-06-21 2022-04-12 重庆邮电大学 Multi-exposure registration method based on ghost removal
CN109636765B (en) * 2018-11-09 2021-04-02 Tcl华星光电技术有限公司 High dynamic display method based on image multiple exposure fusion
CN109840912B (en) * 2019-01-02 2021-05-04 厦门美图之家科技有限公司 Method for correcting abnormal pixels in image and computing equipment
CN110208829A (en) * 2019-03-21 2019-09-06 西安电子科技大学 A kind of navigational communications anti-interference method
CN115104120A (en) * 2019-04-11 2022-09-23 中科创达软件股份有限公司 Method and device for merging low dynamic range images into single image
CN112634187B (en) * 2021-01-05 2022-11-18 安徽大学 Wide dynamic fusion algorithm based on multiple weight mapping
CN117560580B (en) * 2023-11-13 2024-05-03 四川新视创伟超高清科技有限公司 Smooth filtering method and system applied to multi-camera virtual shooting

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103247036A (en) * 2012-02-10 2013-08-14 株式会社理光 Multiple-exposure image fusion method and device
CN103281490A (en) * 2013-05-30 2013-09-04 中国科学院长春光学精密机械与物理研究所 Image fusion algorithm based on bilateral filtering

Non-Patent Citations (2)

Title
A Robust Algorithm for Exposure Fusion; Fenghui Li et al.; ICSP2012 Proceedings; 2012-12-31; pp. 926-930 *
Fast Multi-exposure Image Fusion with Median Filter and Recursive Filter; Shutao Li et al.; IEEE Transactions on Consumer Electronics; 2012-05-31; vol. 58, no. 2; pp. 626-632 *

Also Published As

Publication number Publication date
CN105279746A (en) 2016-01-27

Similar Documents

Publication Publication Date Title
CN105279746B (en) A multi-exposure image fusion method based on bilateral filtering
US11455516B2 (en) Image lighting methods and apparatuses, electronic devices, and storage media
CN105915909B (en) A kind of high dynamic range images layered compression method
US8687883B2 (en) Method and a device for merging a plurality of digital pictures
CN109829930A (en) Face image processing method, apparatus, computer device and readable storage medium
Zhang et al. A naturalness preserved fast dehazing algorithm using HSV color space
Wang et al. Variational single nighttime image haze removal with a gray haze-line prior
CN106920221A (en) Take into account the exposure fusion method that Luminance Distribution and details are presented
CN105809643B (en) A kind of image enchancing method based on adaptive block channel extrusion
JP2006087063A (en) Multiple exposure image composite system and multiple exposure image composite method
CN109300101A (en) A kind of more exposure image fusion methods based on Retinex theory
Kao High dynamic range imaging by fusing multiple raw images and tone reproduction
CN106412448A (en) Single-frame image based wide dynamic range processing method and system
US20230074060A1 (en) Artificial-intelligence-based image processing method and apparatus, electronic device, computer-readable storage medium, and computer program product
Lv et al. Low-light image enhancement via deep Retinex decomposition and bilateral learning
Huang et al. Multi‐exposure image fusion based on feature evaluation with adaptive factor
Liu et al. Progressive complex illumination image appearance transfer based on CNN
CN110580696A (en) Multi-exposure image fast fusion method for detail preservation
CN106709888A (en) High-dynamic-range image generation method based on human eye visual model
CN117611501A (en) Low-illumination image enhancement method, device, equipment and readable storage medium
CN106780402A (en) Dynamic range of images extended method and device based on Bayer format
CN111161189A (en) Single image re-enhancement method based on detail compensation network
Zhao et al. Learning tone curves for local image enhancement
Chung et al. Under-exposed image enhancement using exposure compensation
TWM625817U (en) Image simulation system with time sequence smoothness

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180126

Termination date: 20180530