CN106251365A - Multi-exposure video fusion method and device - Google Patents

Multi-exposure video fusion method and device

Info

Publication number
CN106251365A
CN106251365A (application CN201610587415.2A)
Authority
CN
China
Prior art keywords
frame
pending
alignment
subframe
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610587415.2A
Other languages
Chinese (zh)
Inventor
杜军平 (Du Junping)
徐亮 (Xu Liang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN201610587415.2A
Publication of CN106251365A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence

Abstract

The invention provides a multi-exposure video fusion method and device. The method includes: determining a reference frame and at least one pending frame in an acquired multi-exposure image sequence, and aligning each pending frame to the reference frame according to a coherency-sensitive hashing algorithm; calculating a weight map for each aligned pending frame according to the features of that frame, and calculating a weight map for the reference frame according to the features of the reference frame; optimizing the weight map of each aligned pending frame using the reference frame, and merging each optimized weight map with the weight map of the reference frame to obtain a fused image. With the multi-exposure video fusion method and device of the present invention, a multi-exposure image sequence captured in a scene containing moving targets can be fused to obtain a clear fused image free of blur and ghosting.

Description

Multi-exposure video fusion method and device
Technical field
The present invention relates to the field of video processing, and in particular to a multi-exposure video fusion method and device.
Background technology
Multi-exposure video fusion refers to merging the multi-exposure image sequence of a captured multi-exposure video to obtain a well-exposed fused image. A multi-exposure video is obtained as follows: multiple cameras with different exposures record the same scene, yielding a multi-exposure video composed of a multi-exposure image sequence; the multi-exposure images are also called video frames.
To obtain a well-exposed fused image, the prior art provides several multi-exposure video fusion methods, for example: a Laplacian-pyramid-based fusion method for color image fusion; a probabilistic fusion method proposed under the random walk framework, which uses local contrast and a neighborhood color-consistency measure; and a method that builds a probabilistic model under a maximum a posteriori framework to infer the fused pixels.
However, the fusion methods in the prior art are all proposed for static scenes and do not account for the influence of moving targets on the fusion result. When a moving target is present in the captured scene, the fused image obtained by prior-art fusion methods suffers from blur and ghosting, and a complete, clear fused image cannot be obtained.
Summary of the invention
In view of this, an object of the present invention is to provide a multi-exposure video fusion method and device that can fuse a multi-exposure image sequence captured in a scene containing moving targets and obtain a clear fused image free of blur and ghosting.
In a first aspect, an embodiment of the present invention provides a multi-exposure video fusion method, the method including: determining a reference frame and at least one pending frame in an acquired multi-exposure image sequence, and aligning each pending frame to the reference frame according to a coherency-sensitive hashing algorithm; calculating a weight map for each aligned pending frame according to the features of that frame, and calculating a weight map for the reference frame according to the features of the reference frame; optimizing the weight map of each aligned pending frame using the reference frame, and merging each optimized weight map with the weight map of the reference frame to obtain a fused image.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation of the first aspect, in which aligning each pending frame to the reference frame according to the coherency-sensitive hashing algorithm includes: performing scale decomposition on each pending frame and on the reference frame, obtaining a pending sub-frame set corresponding to each pending frame and a reference sub-frame set corresponding to the reference frame; partitioning each reference subframe in the reference sub-frame set and each pending subframe in each pending sub-frame set into blocks; aligning, according to the coherency-sensitive hashing algorithm, each partitioned pending subframe in each pending sub-frame set to the corresponding partitioned reference subframe in the reference sub-frame set; applying mismatch correction to each aligned pending subframe in each pending sub-frame set using a grayscale mapping function, and optimizing the result of the mismatch correction using a Poisson video fusion method; and reconstructing the optimized pending subframes in each pending sub-frame set to obtain each aligned pending frame.
With reference to the first possible implementation of the first aspect, an embodiment of the present invention provides a second possible implementation of the first aspect, in which aligning each partitioned pending subframe to the corresponding partitioned reference subframe according to the coherency-sensitive hashing algorithm includes: determining an image block to be matched in the current partitioned pending subframe of the current pending sub-frame set, and determining multiple neighborhood image blocks corresponding to the image block to be matched in the current partitioned reference subframe of the reference sub-frame set; calculating the matching distance between the image block to be matched and each neighborhood image block according to the coherency-sensitive hashing algorithm, and aligning the image block to be matched to the neighborhood image block with the smallest matching distance; and repeating the actions of determining an image block to be matched, determining neighborhood image blocks, calculating matching distances, and aligning the image block to be matched, until every partitioned pending subframe in every pending sub-frame set is aligned to the corresponding partitioned reference subframe in the reference sub-frame set.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation of the first aspect, in which calculating the weight maps includes: calculating the phase congruency feature, local contrast feature, and color saturation feature of each aligned pending frame, and calculating the phase congruency feature, local contrast feature, and color saturation feature of the reference frame; calculating an initial weight map for each aligned pending frame from its phase congruency, local contrast, and color saturation features, and calculating an initial weight map for the reference frame from its phase congruency, local contrast, and color saturation features; and normalizing the initial weight map of each aligned pending frame to obtain the weight map of that frame, and normalizing the initial weight map of the reference frame to obtain the weight map of the reference frame.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation of the first aspect, in which optimizing the weight map of each aligned pending frame using the reference frame includes: using a guided filter to optimize the weight map of each aligned pending frame under the guidance of the reference frame, by the following formulas:
$$q_i = \sum_j \bar{W}_{ij}(G)\, p_j$$

$$\bar{W}_{ij}(G) = \frac{1}{|\omega|} \sum_{z:(i,j)\in\omega_z} \left(1 + \frac{(G_i - \mu_z)(G_j - \mu_z)}{\sigma_z^2 + \epsilon}\right)$$
where i and j are pixel indices; G is the reference frame; $\bar{W}_{ij}(G)$ is the filter kernel; $p_j$ is the weight map of the aligned pending frame; $q_i$ is the optimized weight map; $\omega_z$ is the local window centered at z; $|\omega|$ is the number of pixels in $\omega_z$; $\mu_z$ and $\sigma_z^2$ are the mean and variance of the reference frame over $\omega_z$; $\epsilon$ is a regularization parameter; and $G_i$ and $G_j$ are the i-th and j-th pixels of the reference frame G.
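The kernel-form equations above are rarely evaluated directly; the standard linear-coefficient formulation of the guided filter (He et al.) computes the same kind of edge-preserving smoothing far more cheaply. The sketch below is a minimal NumPy version under that formulation; the window radius `r`, the naive `box_mean` helper, and the toy guide/weight arrays are illustrative choices, not taken from the patent, and the normalization may differ from the stated kernel by a constant factor.

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window with edge padding (naive loop)."""
    p = np.pad(x, r, mode='edge')
    out = np.zeros_like(x, dtype=float)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return out

def guided_filter(G, p, r=1, eps=1e-3):
    """Smooth the weight map p under the guidance of reference frame G,
    using the linear-coefficient (a, b) form of the guided filter."""
    mG, mp = box_mean(G, r), box_mean(p, r)
    cov_Gp = box_mean(G * p, r) - mG * mp
    var_G = box_mean(G * G, r) - mG * mG
    a = cov_Gp / (var_G + eps)
    b = mp - a * mG
    return box_mean(a, r) * G + box_mean(b, r)

G = np.arange(16, dtype=float).reshape(4, 4) / 15.0  # toy guide (reference frame)
p = np.full((4, 4), 0.5)                             # flat weight map
q = guided_filter(G, p, r=1, eps=1e-3)
```

A flat weight map passes through unchanged, which is the expected behavior: the filter only reshapes weights where the guide has structure.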
With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation of the first aspect, in which merging each optimized weight map with the weight map of the reference frame to obtain the fused image includes: merging the weight maps by the following formula to obtain the fused image:
$$F = \sum_{k=1}^{N} \hat{W}_k f_k$$
where F is the resulting fused image, k is the index of each frame, N is the total number of frames, $\hat{W}_k$ is the optimized weight map of the k-th frame, and $f_k$ is the matrix of the k-th frame.
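As a sketch of the fusion formula, the snippet below computes the per-pixel weighted sum of the frames, assuming (as the normalization step in the method implies) that the weight maps should sum to one at each pixel; the frame values and weights are toy data.

```python
import numpy as np

def fuse(frames, weights):
    """F = sum_k W_k * f_k, with weights normalized to sum to 1 per pixel."""
    W = np.stack(weights).astype(float)
    W = W / np.clip(W.sum(axis=0), 1e-12, None)  # per-pixel normalization
    F = np.zeros_like(frames[0], dtype=float)
    for w, f in zip(W, frames):
        F += w * f
    return F

f1 = np.full((2, 2), 0.2)   # dark exposure
f2 = np.full((2, 2), 0.8)   # bright exposure
w1 = np.full((2, 2), 1.0)
w2 = np.full((2, 2), 3.0)   # bright frame weighted 3x
F = fuse([f1, f2], [w1, w2])
```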
In a second aspect, an embodiment of the present invention provides a multi-exposure video fusion device, the device including: an alignment module for determining a reference frame and at least one pending frame in an acquired multi-exposure image sequence and aligning each pending frame to the reference frame according to a coherency-sensitive hashing algorithm; a weight map calculation module for calculating a weight map for each aligned pending frame according to the features of that frame, and calculating a weight map for the reference frame according to the features of the reference frame; and a fusion module for optimizing the weight map of each aligned pending frame using the reference frame and merging each optimized weight map with the weight map of the reference frame to obtain a fused image.
With reference to the second aspect, an embodiment of the present invention provides a first possible implementation of the second aspect, in which the alignment module includes: a scale decomposition unit for performing scale decomposition on each pending frame and on the reference frame, obtaining a pending sub-frame set corresponding to each pending frame and a reference sub-frame set corresponding to the reference frame; a blocking unit for partitioning each reference subframe in the reference sub-frame set and each pending subframe in each pending sub-frame set into blocks; an alignment unit for aligning, according to the coherency-sensitive hashing algorithm, each partitioned pending subframe in each pending sub-frame set to the corresponding partitioned reference subframe in the reference sub-frame set; an optimization unit for applying mismatch correction to each aligned pending subframe in each pending sub-frame set using a grayscale mapping function and optimizing the result of the mismatch correction using a Poisson video fusion method; and a reconstruction unit for reconstructing the optimized pending subframes in each pending sub-frame set to obtain each aligned pending frame.
With reference to the first possible implementation of the second aspect, an embodiment of the present invention provides a second possible implementation of the second aspect, in which the alignment unit includes: a neighborhood image block determination subunit for determining an image block to be matched in the current partitioned pending subframe of the current pending sub-frame set and determining multiple neighborhood image blocks corresponding to the image block to be matched in the current partitioned reference subframe of the reference sub-frame set; an image block alignment subunit for calculating the matching distance between the image block to be matched and each neighborhood image block according to the coherency-sensitive hashing algorithm and aligning the image block to be matched to the neighborhood image block with the smallest matching distance; and a repetition subunit for repeating the actions of determining an image block to be matched, determining neighborhood image blocks, calculating matching distances, and aligning the image block to be matched, until every partitioned pending subframe in every pending sub-frame set is aligned to the corresponding partitioned reference subframe in the reference sub-frame set.
With reference to the second aspect, an embodiment of the present invention provides a third possible implementation of the second aspect, in which the weight map calculation module includes: a feature calculation unit for calculating the phase congruency feature, local contrast feature, and color saturation feature of each aligned pending frame, and of the reference frame; an initial weight map calculation unit for calculating an initial weight map for each aligned pending frame from its phase congruency, local contrast, and color saturation features, and an initial weight map for the reference frame from the corresponding features of the reference frame; and a weight map normalization unit for normalizing the initial weight map of each aligned pending frame to obtain the weight map of that frame, and normalizing the initial weight map of the reference frame to obtain the weight map of the reference frame.
In the method and device of the embodiments of the present invention, a reference frame and at least one pending frame are first determined in the acquired multi-exposure image sequence, and each pending frame is aligned to the reference frame according to the coherency-sensitive hashing algorithm; next, a weight map is calculated for each aligned pending frame according to the features of that frame, and a weight map is calculated for the reference frame according to the features of the reference frame; finally, the weight map of each aligned pending frame is optimized using the reference frame, and each optimized weight map is merged with the weight map of the reference frame to obtain the fused image. Because the method and device align each pending frame to the reference frame according to the coherency-sensitive hashing algorithm, a multi-exposure image sequence captured in a scene containing moving targets can be fused to obtain a clear fused image free of blur and ghosting.
To make the above objects, features, and advantages of the present invention more apparent and comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Accompanying drawing explanation
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly introduced below. It should be understood that the following drawings show only certain embodiments of the present invention and therefore should not be regarded as limiting its scope; those of ordinary skill in the art can obtain other related drawings from these drawings without creative effort.
Fig. 1 shows a schematic flow chart of the multi-exposure video fusion method provided by an embodiment of the present invention;
Fig. 2 shows a schematic structural diagram of the multi-exposure video fusion device provided by an embodiment of the present invention.
Detailed description of the invention
To make the purposes, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments, as generally described and illustrated in the drawings, can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Considering that the fusion methods in the prior art are all proposed for static scenes and do not account for the influence of moving targets on the fusion result, so that when a moving target is present in the captured scene the fused image obtained by prior-art methods suffers from blur and ghosting and a complete, clear fused image cannot be obtained, the present invention provides a multi-exposure video fusion method and device, described below with reference to embodiments. Note that, to simplify the language of this embodiment, a video frame is referred to simply as a frame: the reference frame is the reference video frame, the pending frame is the pending video frame, and so on.
Fig. 1 shows a schematic flow chart of the multi-exposure video fusion method provided by an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
Step S102: determine a reference frame and at least one pending frame in the acquired multi-exposure image sequence, and align each pending frame to the reference frame according to the coherency-sensitive hashing algorithm.
In this embodiment, when a moving target is present in the acquired multi-exposure image sequence, the final fused image may contain blur and ghosting. It is therefore necessary to choose a reference frame and pending frames and align each pending frame to the reference frame, so as to avoid blur and ghosting in the fused image.
In this embodiment, the video frame with the fewest oversaturated or undersaturated pixels in the multi-exposure image sequence is selected as the reference frame. In one case, all video frames in the sequence other than the reference frame are taken as pending frames. In another case, considering that the sequence may contain many video frames and the amount of calculation would be large, or that the sequence may contain video frames that need no processing, only some of the video frames in the sequence are selected as pending frames.
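A minimal sketch of the reference-frame selection just described: count the under- and over-saturated pixels of each frame and pick the frame with the fewest. The saturation thresholds `lo` and `hi` are hypothetical; the patent does not specify numeric values.

```python
import numpy as np

def select_reference(frames, lo=0.05, hi=0.95):
    """Pick the index of the frame with the fewest under-/over-saturated
    pixels. frames: list of float arrays with values in [0, 1]."""
    def bad_pixel_count(f):
        return int(np.count_nonzero((f < lo) | (f > hi)))
    return min(range(len(frames)), key=lambda i: bad_pixel_count(frames[i]))

# Toy sequence: a dark, a mid, and a bright exposure of the same scene.
dark = np.full((4, 4), 0.02)
mid = np.full((4, 4), 0.5)
bright = np.full((4, 4), 0.98)
ref_idx = select_reference([dark, mid, bright])
```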
Aligning each pending frame to the reference frame according to the coherency-sensitive hashing algorithm specifically includes the following steps (1) to (5):
(1) Perform scale decomposition on each pending frame and on the reference frame, obtaining a pending sub-frame set corresponding to each pending frame and a reference sub-frame set corresponding to the reference frame.
Let the reference frame be R and a pending frame be S. Each pending frame is scale-decomposed by formula (1) below, obtaining the pending sub-frame set corresponding to that frame (the set of pending subframes produced by the decomposition), and the reference frame is scale-decomposed by formula (2) below, obtaining the reference sub-frame set (the set of reference subframes produced by the decomposition):

$$P(x, y, S, \sigma-1) = P_{2\downarrow}(x, y, S, \sigma); \quad (1)$$

$$P(x, y, R, \sigma-1) = P_{2\downarrow}(x, y, R, \sigma); \quad (2)$$

In formulas (1) and (2), $P(x, y, \Delta, \sigma-1)$ with $\Delta = R, S$ denotes the scaling function, (x, y) denotes the pixel position, σ denotes the pyramid level with values in [3, 5], and $2\downarrow$ denotes the down-sampling factor.
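As an illustration of the down-sampling operator $P_{2\downarrow}$, the sketch below builds a set of subframes by repeated 2x down-sampling; plain 2x2 block averaging stands in for whatever pyramid kernel the patent intends, and `levels=3` is chosen to match the lower end of the stated σ range [3, 5].

```python
import numpy as np

def scale_decompose(frame, levels=3):
    """Build a coarse-to-fine set of subframes by repeated 2x down-sampling,
    using 2x2 block averaging as a stand-in for the pyramid operator."""
    subframes = [frame.astype(float)]
    for _ in range(levels - 1):
        f = subframes[-1]
        h, w = f.shape[0] // 2 * 2, f.shape[1] // 2 * 2  # crop to even size
        f = f[:h, :w]
        down = (f[0::2, 0::2] + f[1::2, 0::2] + f[0::2, 1::2] + f[1::2, 1::2]) / 4.0
        subframes.append(down)
    return subframes[::-1]  # smallest scale first, matching the A1..A5 ordering

frame = np.arange(64, dtype=float).reshape(8, 8)
pyr = scale_decompose(frame, levels=3)
```

Block averaging preserves the mean intensity at every level, which makes the toy decomposition easy to check.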
(2) Partition each reference subframe in the reference sub-frame set and each pending subframe in each pending sub-frame set into blocks; the partitioning may use a sliding window.
(3) According to the coherency-sensitive hashing algorithm, align each partitioned pending subframe in each pending sub-frame set to the corresponding partitioned reference subframe in the reference sub-frame set.
For example, suppose a pending sub-frame set A contains 5 pending subframes A1, A2, A3, A4, and A5 arranged in order of increasing image size, and the reference sub-frame set B contains 5 reference subframes B1, B2, B3, B4, and B5 arranged in order of increasing image size. After partitioning A1 to A5 and B1 to B5 into blocks, A1 is aligned to B1, A2 is aligned to B2, ..., and A5 is aligned to B5; that is, each partitioned pending subframe is aligned to the partitioned reference subframe of the same image size.
In this action (3), aligning each partitioned pending subframe in each pending sub-frame set to the corresponding partitioned reference subframe in the reference sub-frame set according to the coherency-sensitive hashing algorithm can be decomposed into the following actions:
(31) Determine an image block to be matched in the current partitioned pending subframe of the current pending sub-frame set, and determine multiple neighborhood image blocks corresponding to the image block to be matched in the current partitioned reference subframe of the reference sub-frame set.
The neighborhood image blocks can be determined as follows: after a first image block to be matched is determined in the current partitioned pending subframe, the image block matching this first image block is found in the reference subframe, called the first matched image block. A second image block to be matched is then determined in the neighborhood of the first image block to be matched, and the image blocks in the neighborhood of the first matched image block serve as the multiple neighborhood image blocks corresponding to the second image block to be matched; the block matching the second image block to be matched is then searched for among these neighborhood image blocks. That is, the image blocks in the neighborhood of the previously matched block in the reference subframe serve as the neighborhood image blocks of the current image block to be matched.
(32) Calculate the matching distance between the image block to be matched and each neighborhood image block according to the coherency-sensitive hashing algorithm, and align the image block to be matched to the neighborhood image block with the smallest matching distance.
The matching distance between the image block to be matched and each neighborhood image block is calculated by formula (3):

$$d_y = \|x - y\| = \min\{\|x' - y\|;\ x' \in S\}; \quad (3)$$

where x' denotes each neighborhood image block, y denotes the image block to be matched, x denotes the neighborhood image block with the smallest matching distance to y, $d_y$ denotes the calculated minimum matching distance, and S denotes the set of neighborhood image blocks.
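The matching criterion of formula (3) can be sketched as a brute-force minimum over the candidate neighborhood blocks. Note that coherency-sensitive hashing exists precisely to avoid this exhaustive comparison; the hashing itself is omitted here, so this shows only the distance criterion, with hypothetical 2x2 toy blocks.

```python
import numpy as np

def best_match(y, candidates):
    """Return (index, distance) of the neighborhood block x' minimizing ||x' - y||."""
    dists = [float(np.linalg.norm(x - y)) for x in candidates]
    i = int(np.argmin(dists))
    return i, dists[i]

y = np.array([[1.0, 2.0], [3.0, 4.0]])          # block to be matched
cands = [y + 5.0, y + 0.5, y - 2.0]             # neighborhood blocks in the reference
idx, d = best_match(y, cands)
```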
(33) Repeat the actions of determining an image block to be matched, determining neighborhood image blocks, calculating matching distances, and aligning the image block to be matched, until every partitioned pending subframe in every pending sub-frame set is aligned to the corresponding partitioned reference subframe in the reference sub-frame set.
In this step, the above actions may first be repeated to align every partitioned pending subframe in the current pending sub-frame set to the corresponding partitioned reference subframe in the reference sub-frame set, and then repeated in the other pending sub-frame sets, until every partitioned pending subframe in every pending sub-frame set is aligned to the corresponding partitioned reference subframe.
Through actions (31) to (33), every partitioned pending subframe in every pending sub-frame set can be aligned to the corresponding partitioned reference subframe in the reference sub-frame set.
(4) Apply mismatch correction to each aligned pending subframe in each pending sub-frame set using a grayscale mapping function, and optimize the result of the mismatch correction using a Poisson video fusion method.
Taking the current aligned pending subframe as an example: each image block in the current aligned pending subframe has a corresponding matching distance. Each matching distance is checked, and when a matching distance is found not to satisfy a preset requirement, mismatch correction is applied to the current aligned pending subframe, and the result of the mismatch correction is optimized using the Poisson video fusion method. A distance threshold is set; when a calculated minimum matching distance is greater than this threshold, that minimum matching distance is determined not to satisfy the preset requirement.
In this step, the grayscale mapping function is used to perform initial hole filling, applying mismatch correction to the current aligned pending subframe and improving the matching precision. The grayscale mapping function is defined as follows:

$$\tau_c = \arg\min_{\tau} \sum_{x,y} \left\| \tau(R_c(x, y)) - S_c^w(x, y) \right\|_1; \quad (4)$$

where $\tau_c'(\cdot) \ge 0$, $\tau_c(\cdot) \in [0, 1]$, $c \in \{r, g, b\}$; $S_c^w$ denotes the value of a color channel of the current aligned pending subframe, and $R_c$ denotes the value of the corresponding color channel of the reference subframe corresponding to the current aligned pending subframe. The color channel values here are the r, g, b channel values, i.e. the red, green, and blue channel components; (x, y) denotes the pixel position. The initial value of the grayscale mapping function $\tau_c$ is set to the grayscale histogram of the reference frame corresponding to the current aligned pending subframe. To eliminate abnormal fitting bias, iteratively reweighted least squares is preferably used to solve the optimization.
Formula (4) can update the value of each color channel of the current aligned pending subframe; the pending subframe with updated color channel values is the result of the mismatch correction.
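The tone-mapping fit of formula (4) estimates a curve τ_c per color channel; the patent solves an L1 problem by iteratively reweighted least squares. The sketch below shrinks that to a one-parameter linear map fitted in closed form by ordinary least squares, purely to illustrate the idea; `fit_channel_gain` and its toy data are hypothetical, not from the patent.

```python
import numpy as np

def fit_channel_gain(ref_c, warped_c):
    """Fit a single multiplicative gain tau(v) = g * v mapping the reference
    channel onto the warped pending channel, in the least-squares sense."""
    r = ref_c.ravel()
    w = warped_c.ravel()
    return float(np.dot(r, w) / np.dot(r, r))

ref = np.linspace(0.1, 0.9, 16).reshape(4, 4)   # reference channel
warped = 0.5 * ref                               # pending frame is half as bright
g = fit_channel_gain(ref, warped)
```

Applying `g * ref` then brings the reference channel onto the exposure level of the pending subframe, which is what makes block-level mismatch detection meaningful across exposures.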
Optimizing the result of the mismatch correction with the Poisson video fusion method means extracting the gradient information of the reference subframe corresponding to the result of the mismatch correction, and synthesizing the extracted gradient information with the result of the mismatch correction using the Poisson video fusion method, thereby optimizing the result of the mismatch correction. The gradient information of the reference subframe corresponding to the result of the mismatch correction may also be replaced with the gradient information of the current pending subframe (i.e., the pending subframe before alignment).
(5) Reconstruct the optimized pending subframes in each pending sub-frame set to obtain each aligned pending frame.
Take frame A` to be processed as an example. Frame A` corresponds to a set A of sub-frames to be processed, which contains five sub-frames A1, A2, A3, A4, and A5 arranged in ascending order of image size; A1 through A5 are the sub-images obtained by performing scale decomposition on frame A`. After the optimization operation in action (4) has been performed on A1 through A5, the aligned A1 through A5 are reconstructed, i.e., synthesized into the aligned frame A0 to be processed.
Through the above actions (1) to (5), each frame to be processed can be aligned to the reference frame according to the coherency-sensitive hashing algorithm, yielding the aligned frames to be processed, which avoids the blurring and ghosting of the fused image that occur when the multi-exposure image sequence contains moving objects.
Step S104: calculate the weight map of each aligned frame to be processed according to the image features of that frame, and calculate the weight map of the reference frame according to the image features of the reference frame.
This step can be decomposed into the following three sub-steps:
(1) Calculate the phase congruency feature, local contrast feature, and color saturation feature of each aligned frame to be processed, and calculate the phase congruency feature, local contrast feature, and color saturation feature of the reference frame.
In order to measure the importance of each pixel in a video frame and obtain the frame's weight map, this embodiment proposes a feature-based weight estimation method. After the frames to be processed have been aligned, three image features are integrated to estimate the pixel weights: phase congruency, local contrast, and color saturation. By integrating the three feature measures, the weight of each pixel under different exposure levels can be measured accurately.
The process of calculating the phase congruency, local contrast, and color saturation features of each aligned frame to be processed is identical to that of calculating the same features of the reference frame; the following description therefore takes the currently aligned frame to be processed as an example.
The currently aligned frame to be processed is converted from the RGB color space to the YIQ color space, where Y carries the luminance information and I and Q carry the chrominance information. The conversion from the RGB space to the YIQ space can be realized through equation (5):
$$\begin{bmatrix} Y \\ I \\ Q \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ 0.596 & -0.274 & -0.322 \\ 0.211 & -0.523 & 0.312 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (5)$$
In equation (5), R, G, and B denote the red, green, and blue channel component values, respectively.
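Equation (5) is the standard NTSC RGB-to-YIQ transform, so it can be sketched directly as a matrix product; the function name and the assumption of values in [0, 1] are illustrative:

```python
import numpy as np

# NTSC RGB-to-YIQ matrix from equation (5)
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(frame):
    """Convert an (H, W, 3) RGB frame with values in [0, 1] to YIQ."""
    return frame @ RGB2YIQ.T
```

Note that for a gray pixel (R = G = B), the I and Q rows sum to zero, so only the Y channel is non-zero, as expected for a pure-luminance input.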
To compute the phase congruency feature of the currently aligned frame to be processed, its luminance component Y is convolved with two-dimensional log-Gabor filters, whose transfer function is computed as follows:
$$LG(\omega,\theta_j) = \exp\left(-\frac{\left(\log(\omega/\omega_0)\right)^2}{2\sigma_r^2}\right)\cdot\exp\left(-\frac{(\theta-\theta_j)^2}{2\sigma_\theta^2}\right) \qquad (6)$$
wherein $\omega_0$ is the center frequency of the filter; $\theta_j = j\pi/J$, $j = \{0, 1, \ldots, J-1\}$, is the orientation angle of the filter; J is the number of orientations; $\sigma_r$ controls the radial bandwidth of the filter; $\sigma_\theta$ determines the angular bandwidth of the filter; and $\theta$ denotes the filter angle. Empirically, J = 4 is used, the $\omega_0$ of the four scales are 1/6, 1/12, 1/24, and 1/48, $\sigma_r = 0.5978$, and $\sigma_\theta = 0.6545$. The convolution produces a set of quadrature response vectors $[e_{n,o}(x,y), o_{n,o}(x,y)]$ at scale n, orientation o, and pixel position (x, y), from which the local amplitude is computed as follows:
$$A_{n,o} = \sqrt{e_{n,o}(x,y)^2 + o_{n,o}(x,y)^2} \qquad (7)$$
Then, the phase congruency feature at position (x, y) of the currently aligned frame to be processed is computed as follows:
$$PC_k(x,y) = \frac{\sqrt{\left(\sum_n\sum_o e_{n,o}(x,y)\right)^2 + \left(\sum_n\sum_o o_{n,o}(x,y)\right)^2}}{\epsilon + \sum_o\sum_n A_{n,o}(x,y)} \qquad (8)$$
wherein $\epsilon$ is a small positive constant. The phase congruency feature $PC_k(x,y)$ takes values between 0 and 1; the closer $PC_k(x,y)$ is to 1, the more salient the feature.
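Given arrays of even and odd log-Gabor responses, equations (7) and (8) reduce to a few array operations. A minimal sketch follows; the array layout and function name are assumptions for illustration:

```python
import numpy as np

def phase_congruency(e, o, eps=1e-4):
    """Phase congruency map per equations (7)-(8).
    e, o: (n_scales, n_orient, H, W) even and odd log-Gabor responses."""
    A = np.sqrt(e ** 2 + o ** 2)                      # local amplitude, eq. (7)
    num = np.sqrt(e.sum(axis=(0, 1)) ** 2 + o.sum(axis=(0, 1)) ** 2)
    return num / (eps + A.sum(axis=(0, 1)))           # eq. (8)
```

When the responses at all scales and orientations are in phase, the numerator equals the amplitude sum and PC approaches 1; when they cancel, PC drops toward 0.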
Because the phase congruency feature is contrast-invariant, while contrast information affects the human visual system's perception of video quality, local contrast is used as a second measure that complements the phase congruency feature.
In this embodiment, the local contrast feature is measured by the gradient energy of the video frame; the gradient measure assigns higher weights to important detail features such as edges and texture. The gradient of a video frame can be computed using convolution masks. In this embodiment the Sobel operator is used as the gradient operator; the Sobel operator is computed as follows:
$$\frac{1}{4}\begin{bmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{bmatrix},\qquad \frac{1}{4}\begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} \qquad (9)$$
The luminance channel Y of the currently aligned frame to be processed is convolved with the Sobel operators along the horizontal and vertical directions of the video frame, yielding the two directional gradients $G_x$ and $G_y$. The local contrast feature $G_k$ of the currently aligned frame to be processed is then computed as follows:
$$G_k = \sqrt{G_x^2 + G_y^2} \qquad (10)$$
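The Sobel-based contrast measure of equations (9) and (10) can be sketched without any image library; the zero padding at the borders and the function names are illustrative assumptions:

```python
import numpy as np

SOBEL_X = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]]) / 4.0
SOBEL_Y = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]]) / 4.0

def conv2_same(img, kernel):
    """Plain 'same' 2-D convolution with zero padding (no SciPy needed)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]  # convolution flips the kernel
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(kh):
        for j in range(kw):
            out += flipped[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def local_contrast(y):
    """Gradient magnitude G_k = sqrt(Gx^2 + Gy^2) of a luminance channel."""
    gx = conv2_same(y, SOBEL_X)
    gy = conv2_same(y, SOBEL_Y)
    return np.sqrt(gx ** 2 + gy ** 2)
```

A flat region yields zero contrast, while a unit step edge yields a unit gradient magnitude, which is the behavior the weight map relies on to favor edges and texture.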
The phase congruency feature and the local contrast measure are both computed only on the luminance channel. For color video frames, if only phase congruency and local contrast were used as measures, an accurate weight map could not be obtained; color saturation is therefore adopted as the third feature measure. The color saturation feature $S_k$ is defined as the standard deviation of the R, G, B channels at each pixel, computed as follows:
$$S_k = \sqrt{\frac{(R-\bar{m})^2 + (G-\bar{m})^2 + (B-\bar{m})^2}{3}} \qquad (11)$$
$$\bar{m} = \frac{R+G+B}{3} \qquad (12)$$
wherein $\bar{m}$ denotes the mean of the three RGB color channels.
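Equations (11) and (12) amount to a per-pixel standard deviation over the channel axis; a one-function sketch (the function name is assumed):

```python
import numpy as np

def color_saturation(frame):
    """Per-pixel standard deviation of the R, G, B channels (eqs. (11)-(12))."""
    m = frame.mean(axis=2, keepdims=True)               # (R + G + B) / 3
    return np.sqrt(((frame - m) ** 2).mean(axis=2))     # eq. (11)
```

Gray pixels (R = G = B) get zero saturation, while a pure primary color gets the maximum value for its intensity, so this measure penalizes washed-out, over- or under-exposed regions.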
In this step, after the phase congruency, local contrast, and color saturation features of each aligned frame to be processed, as well as the phase congruency, local contrast, and color saturation features of the reference frame, have been calculated through the process shown in equations (5) to (12) above, the following sub-step (2) is performed.
(2) Calculate the initial weight map of each aligned frame to be processed according to its phase congruency, local contrast, and color saturation features, and calculate the initial weight map of the reference frame according to the phase congruency, local contrast, and color saturation features of the reference frame.
The process of calculating the initial weight map of an aligned frame to be processed is identical to that of calculating the initial weight map of the reference frame; the following description therefore takes the initial weight map of an aligned frame to be processed as an example.
Considering the complementary relationship among the phase congruency, local contrast, and color saturation measures, the three image features are combined by direct multiplication to estimate the video-frame weight, as shown in equation (13):
$$W_k = PC_k \times G_k \times S_k \qquad (13)$$
wherein $W_k$ denotes the initial weight map of the aligned frame to be processed, $PC_k$ denotes its phase congruency feature, $G_k$ denotes its local contrast feature, and $S_k$ denotes its color saturation feature. In this way, the final fused image can retain all the important details of the original video sequence.
(3) Normalize the initial weight map of each aligned frame to be processed to obtain the weight map of each aligned frame, and normalize the initial weight map of the reference frame to obtain the weight map of the reference frame.
In this sub-step, the normalization of the initial weight map of each aligned frame to be processed is identical to the normalization of the initial weight map of the reference frame; the following description takes the normalization of the initial weight map of the currently aligned frame to be processed as an example.
Assume that the number of video frames in the multi-exposure sequence is N. In order to keep the fusion result consistent, the N weight maps are normalized so that the weights at each pixel (x, y) sum to 1. The normalized weight map $\tilde{W}_k$ is defined as:
$$\tilde{W}_k(x,y) = W_k(x,y)\Big/\sum_{k'=1}^{N} W_{k'}(x,y) \qquad (14)$$
wherein (x, y) denotes a pixel position, $\tilde{W}_k$ denotes the normalized weight map, $W_{k'}(x,y)$ denotes an initial weight map, k denotes the index of the current initial weight map, and k′ denotes the index running over all initial weight maps; both k and k′ range over [1, N].
Through step S104, the feature-based weight estimation method integrates the three image features of phase congruency, local contrast, and color saturation, measures the quality of each pixel accurately, and obtains the weight maps.
Step S106: optimize the weight map of each aligned frame to be processed by means of the reference frame, and fuse the optimized weight maps with the weight map of the reference frame to obtain the fused image.
In this step, optimizing the weight map of each aligned frame to be processed by means of the reference frame includes:
using a guided filter, through the following equations (15) and (16), to optimize the weight map of each aligned frame to be processed by means of the reference frame;
$$q_i = \sum_j \bar{W}_{ij}(G)\,p_j \qquad (15)$$
$$\bar{W}_{ij}(G) = \frac{1}{|\omega|}\sum_{z:(i,j)\in\omega_z}\left(1 + \frac{(G_i-\mu_z)(G_j-\mu_z)}{\sigma_z^2+\epsilon}\right) \qquad (16)$$
wherein i and j denote pixel indices; G denotes the reference frame; $\bar{W}_{ij}(G)$ denotes the filter kernel; $p_j$ denotes the weight map of the aligned frame to be processed; $q_i$ denotes the optimized weight map; $\omega_z$ denotes the local window centered at z; $|\omega|$ denotes the number of pixels in $\omega_z$; $\mu_z$ and $\sigma_z^2$ denote the mean and variance of the reference frame over $\omega_z$, respectively; and $\epsilon$ denotes the regularization parameter, which provides the criterion for distinguishing "flat patches" from "high-variance/edge" regions; the effect of $\epsilon$ is similar to that of the range variance $\sigma_r^2$ in the bilateral filter. $G_i$ is the i-th pixel of the reference frame G, and $G_j$ is the j-th pixel of the reference frame G. The above filter kernel satisfies $\sum_j \bar{W}_{ij}(G) = 1$.
During the above optimization, the weight maps $p_j$ of the aligned frames to be processed are set to the weight maps calculated in step S104, with G being the reference frame.
The optimized weight maps obtained through equations (15) and (16) are smoother and contain no disconnected regions, and can therefore represent the importance of pixels more accurately.
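Equations (15) and (16) are the kernel form of the guided filter; in practice it is usually evaluated in the equivalent local-linear-model form with box filters, as in the sketch below. The brute-force box filter, window radius, and $\epsilon$ value are illustrative assumptions, not the patent's settings:

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window, clipped at the borders."""
    h, w = img.shape
    out = np.empty((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

def guided_filter(G, p, r=2, eps=1e-3):
    """Guided filter in its linear-model form, equivalent to the kernel
    of equations (15)-(16): q = mean(a) * G + mean(b)."""
    mean_G, mean_p = box_mean(G, r), box_mean(p, r)
    var_G = box_mean(G * G, r) - mean_G ** 2
    cov_Gp = box_mean(G * p, r) - mean_G * mean_p
    a = cov_Gp / (var_G + eps)    # eps acts as the regularizer of eq. (16)
    b = mean_p - a * mean_G
    return box_mean(a, r) * G + box_mean(b, r)
```

Because the filter transfers the edge structure of the guide G onto the weight map p, a constant weight map passes through unchanged, while a noisy one is smoothed along the reference frame's edges.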
In this step, fusing the optimized weight maps with the weight map of the reference frame to obtain the fused image includes:
fusing each optimized weight map with the weight map of the reference frame through the following equation to obtain the fused image;
$$F = \sum_{k=1}^{N}\hat{W}_k f_k \qquad (17)$$
wherein F denotes the resulting fused image, k denotes the index of each frame, N denotes the total number of frames, $\hat{W}_k$ denotes the optimized weight map of the k-th frame, and $f_k$ denotes the matrix of the k-th frame.
The optimized weight maps indicate which pixels in the video frames are well exposed, so that the fusion result contains all the well-exposed pixels, generating a fused image with a vivid visual effect.
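The per-pixel weighted combination of equation (17) can be sketched as follows; the list-based interface and the assumption that the weights are already normalized to sum to 1 per pixel are illustrative:

```python
import numpy as np

def fuse(frames, weights):
    """Per-pixel weighted fusion F = sum_k W_k * f_k, equation (17).
    frames: (H, W) or (H, W, 3) arrays; weights: matching (H, W) maps."""
    F = np.zeros_like(frames[0], dtype=np.float64)
    for f, w in zip(frames, weights):
        wk = w if f.ndim == 2 else w[..., None]  # broadcast over color channels
        F += wk * f
    return F
```

With normalized weights, each output pixel is a convex combination of the input exposures, so the fused value always lies within the range spanned by the inputs.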
In the method of the embodiment of the present invention, a reference frame and at least one frame to be processed are first determined in the acquired multi-exposure image sequence, and each frame to be processed is aligned to the reference frame according to the coherency-sensitive hashing algorithm; then the weight map of each aligned frame to be processed is calculated according to its image features, and the weight map of the reference frame is calculated according to the image features of the reference frame; finally, the weight map of each aligned frame to be processed is optimized by means of the reference frame, and the optimized weight maps are fused with the weight map of the reference frame to obtain the fused image. Because the method of the embodiment aligns each frame to be processed to the reference frame according to the coherency-sensitive hashing algorithm, it can fuse a multi-exposure image sequence even when the captured scene contains moving objects, and obtain a clear fused image free of ghosting and blurring.
In summary, the method of the embodiment of the present invention has the following technical effects:
(1) The block-matching video-frame alignment method based on the coherency-sensitive hashing algorithm can solve the blurring and ghosting of the fused image caused by object motion in the captured scene, finally yielding a clear, well-illuminated, complete scene video with a high dynamic range.
(2) The feature-based weight estimation method integrates the three image features of phase congruency, local contrast, and color saturation, and can measure the quality of each pixel accurately, thereby obtaining the weight maps.
(3) The weight-map optimization method based on guided filtering improves the accuracy of the weight maps, ensuring that a high-quality fused image can be obtained.
Corresponding to the above multi-exposure video fusion method, the embodiment of the present invention further provides a multi-exposure video fusion device. As shown in Fig. 2, the device includes: an alignment module 21, configured to determine a reference frame and at least one frame to be processed in the acquired multi-exposure image sequence, and to align each frame to be processed to the reference frame according to the coherency-sensitive hashing algorithm; a weight map calculation module 22, configured to calculate the weight map of each aligned frame to be processed according to the image features of that frame, and to calculate the weight map of the reference frame according to the image features of the reference frame; and a fusion module 23, configured to optimize the weight map of each aligned frame to be processed by means of the reference frame, and to fuse the optimized weight maps with the weight map of the reference frame to obtain the fused image.
The alignment module 21 includes: a scale decomposition unit, configured to perform scale decomposition on each frame to be processed and on the reference frame, to obtain the sets of sub-frames to be processed corresponding to the frames to be processed, and the set of reference sub-frames corresponding to the reference frame; a block partitioning unit, configured to partition each reference sub-frame in the set of reference sub-frames, and each sub-frame to be processed in each set of sub-frames to be processed, into blocks; an alignment unit, configured to align, according to the coherency-sensitive hashing algorithm, each partitioned sub-frame to be processed in each set of sub-frames to be processed to the corresponding partitioned reference sub-frame in the set of reference sub-frames; an optimization unit, configured to perform mismatch correction on each aligned sub-frame to be processed in each set of sub-frames to be processed using the gray-level mapping function, and to optimize the result of the mismatch correction using the Poisson video fusion method; and a reconstruction unit, configured to reconstruct the optimized sub-frames in each set of sub-frames to be processed, to obtain each aligned frame to be processed.
The above alignment unit includes: a neighborhood image block determination sub-unit, configured to determine an image block to be matched in the current partitioned sub-frame to be processed of the current set of sub-frames to be processed, and to determine, in the current partitioned reference sub-frame of the set of reference sub-frames, a plurality of neighborhood image blocks corresponding to the image block to be matched; an image block alignment sub-unit, configured to calculate, according to the coherency-sensitive hashing algorithm, the matching distance between the image block to be matched and each neighborhood image block, and to align the image block to be matched to the neighborhood image block with the smallest matching distance; and a repetition sub-unit, configured to repeat the actions of determining an image block to be matched, determining neighborhood image blocks, calculating matching distances, and aligning the image block to be matched, until every partitioned sub-frame to be processed in every set of sub-frames to be processed has been aligned to the corresponding partitioned reference sub-frame in the set of reference sub-frames.
The above weight map calculation module 22 includes: a feature calculation unit, configured to calculate the phase congruency feature, local contrast feature, and color saturation feature of each aligned frame to be processed, and to calculate the phase congruency feature, local contrast feature, and color saturation feature of the reference frame; an initial weight map calculation unit, configured to calculate the initial weight map of each aligned frame to be processed according to its phase congruency, local contrast, and color saturation features, and to calculate the initial weight map of the reference frame according to the phase congruency, local contrast, and color saturation features of the reference frame; and a weight map normalization unit, configured to normalize the initial weight map of each aligned frame to be processed to obtain the weight map of each aligned frame, and to normalize the initial weight map of the reference frame to obtain the weight map of the reference frame.
The above fusion module 23 includes: a weight map optimization unit, configured to use a guided filter to optimize, by means of the reference frame, the weight map of each aligned frame to be processed through the following equations;
$$q_i = \sum_j \bar{W}_{ij}(G)\,p_j$$
$$\bar{W}_{ij}(G) = \frac{1}{|\omega|}\sum_{z:(i,j)\in\omega_z}\left(1 + \frac{(G_i-\mu_z)(G_j-\mu_z)}{\sigma_z^2+\epsilon}\right)$$
wherein i and j denote pixel indices; G denotes the reference frame; $\bar{W}_{ij}(G)$ denotes the filter kernel; $p_j$ denotes the weight map of the aligned frame to be processed; $q_i$ denotes the optimized weight map; $\omega_z$ denotes the local window centered at z; $|\omega|$ denotes the number of pixels in $\omega_z$; $\mu_z$ and $\sigma_z^2$ denote the mean and variance of the reference frame over $\omega_z$, respectively; $\epsilon$ denotes the regularization parameter; $G_i$ is the i-th pixel of the reference frame G; and $G_j$ is the j-th pixel of the reference frame G.
The above fusion module 23 further includes: an image fusion unit, configured to fuse each optimized weight map with the weight map of the reference frame through the following equation, to obtain the fused image;
$$F = \sum_{k=1}^{N}\hat{W}_k f_k$$
wherein F denotes the resulting fused image, k denotes the index of each frame, N denotes the total number of frames, $\hat{W}_k$ denotes the optimized weight map of the k-th frame, and $f_k$ denotes the matrix of the k-th frame.
In the device of the embodiment of the present invention, a reference frame and at least one frame to be processed are first determined in the acquired multi-exposure image sequence, and each frame to be processed is aligned to the reference frame according to the coherency-sensitive hashing algorithm; then the weight map of each aligned frame to be processed is calculated according to its image features, and the weight map of the reference frame is calculated according to the image features of the reference frame; finally, the weight map of each aligned frame to be processed is optimized by means of the reference frame, and the optimized weight maps are fused with the weight map of the reference frame to obtain the fused image. Because the device of the embodiment aligns each frame to be processed to the reference frame according to the coherency-sensitive hashing algorithm, it can fuse a multi-exposure image sequence even when the captured scene contains moving objects, and obtain a clear fused image free of ghosting and blurring.
The multi-exposure video fusion device provided by the embodiment of the present invention may be specific hardware on a piece of equipment, or software or firmware installed on a piece of equipment. The device provided by the embodiment has the same implementation principle and technical effect as the foregoing method embodiment; for brevity, where the device embodiment is silent, reference may be made to the corresponding content of the foregoing method embodiment. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may all refer to the corresponding processes in the above method embodiment, and are not repeated here.
In the embodiments provided by the present invention, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiment described above is merely illustrative; for example, the division of the units is only a division by logical function, and other divisions are possible in actual implementation. For instance, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place, or may be distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments provided by the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method described in each embodiment of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
It should also be noted that similar reference numerals and letters denote similar items in the following figures; therefore, once an item is defined in one figure, it does not need to be further defined or explained in subsequent figures. In addition, the terms "first", "second", "third", and so on are used only for distinguishing in the description, and shall not be understood as indicating or implying relative importance.
Finally, it should be noted that the embodiments described above are only specific implementations of the present invention, intended to illustrate the technical solution of the present invention rather than to limit it, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the technical field may, within the technical scope disclosed by the present invention, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some technical features; and such modifications, changes, or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A multi-exposure video fusion method, characterized in that the method comprises:
determining a reference frame and at least one frame to be processed in an acquired multi-exposure image sequence, and aligning each frame to be processed to the reference frame according to a coherency-sensitive hashing algorithm;
calculating a weight map of each aligned frame to be processed according to the image features of that frame, and calculating a weight map of the reference frame according to the image features of the reference frame;
optimizing the weight map of each aligned frame to be processed by means of the reference frame, and fusing the optimized weight maps with the weight map of the reference frame to obtain a fused image.
2. The method according to claim 1, characterized in that aligning each frame to be processed to the reference frame according to the coherency-sensitive hashing algorithm comprises:
performing scale decomposition on each frame to be processed and on the reference frame, to obtain a set of sub-frames to be processed corresponding to each frame to be processed, and a set of reference sub-frames corresponding to the reference frame;
partitioning each reference sub-frame in the set of reference sub-frames, and each sub-frame to be processed in each set of sub-frames to be processed, into blocks;
aligning, according to the coherency-sensitive hashing algorithm, each partitioned sub-frame to be processed in each set of sub-frames to be processed to the corresponding partitioned reference sub-frame in the set of reference sub-frames;
performing mismatch correction on each aligned sub-frame to be processed in each set of sub-frames to be processed by using a gray-level mapping function, and optimizing the result of the mismatch correction by using a Poisson video fusion method;
reconstructing the optimized sub-frames in each set of sub-frames to be processed, to obtain each aligned frame to be processed.
3. The method according to claim 2, characterized in that aligning, according to the coherency-sensitive hashing algorithm, each partitioned sub-frame to be processed in each set of sub-frames to be processed to the corresponding partitioned reference sub-frame in the set of reference sub-frames comprises:
determining an image block to be matched in the current partitioned sub-frame to be processed of the current set of sub-frames to be processed, and determining, in the current partitioned reference sub-frame of the set of reference sub-frames, a plurality of neighborhood image blocks corresponding to the image block to be matched;
calculating, according to the coherency-sensitive hashing algorithm, the matching distance between the image block to be matched and each neighborhood image block, and aligning the image block to be matched to the neighborhood image block with the smallest matching distance;
repeating the actions of determining an image block to be matched, determining neighborhood image blocks, calculating matching distances, and aligning the image block to be matched, until every partitioned sub-frame to be processed in every set of sub-frames to be processed has been aligned to the corresponding partitioned reference sub-frame in the set of reference sub-frames.
4. The method according to claim 1, characterized in that calculating the weight map of each aligned frame to be processed according to its image features, and calculating the weight map of the reference frame according to the image features of the reference frame, comprises:
calculating the phase congruency feature, local contrast feature, and color saturation feature of each aligned frame to be processed, and calculating the phase congruency feature, local contrast feature, and color saturation feature of the reference frame;
calculating the initial weight map of each aligned frame to be processed according to its phase congruency, local contrast, and color saturation features, and calculating the initial weight map of the reference frame according to the phase congruency, local contrast, and color saturation features of the reference frame;
normalizing the initial weight map of each aligned frame to be processed to obtain the weight map of each aligned frame, and normalizing the initial weight map of the reference frame to obtain the weight map of the reference frame.
5. The method according to claim 1, characterized in that optimizing the weight map of each aligned frame to be processed by means of the reference frame comprises:
using a guided filter to optimize, by means of the reference frame, the weight map of each aligned frame to be processed through the following equations:
$$q_i = \sum_j \bar{W}_{ij}(G)\,p_j$$
$$\bar{W}_{ij}(G) = \frac{1}{|\omega|}\sum_{z:(i,j)\in\omega_z}\left(1 + \frac{(G_i-\mu_z)(G_j-\mu_z)}{\sigma_z^2+\epsilon}\right)$$
wherein i and j denote pixel indices; G denotes the reference frame; $\bar{W}_{ij}(G)$ denotes the filter kernel; $p_j$ denotes the weight map of the aligned frame to be processed; $q_i$ denotes the optimized weight map; $\omega_z$ denotes the local window centered at z; $|\omega|$ denotes the number of pixels in $\omega_z$; $\mu_z$ and $\sigma_z^2$ denote the mean and variance of the reference frame over $\omega_z$, respectively; $\epsilon$ denotes the regularization parameter; $G_i$ is the i-th pixel of the reference frame G; and $G_j$ is the j-th pixel of the reference frame G.
6. The method according to claim 1, wherein merging each optimized weight map with the weight map of the reference frame to obtain a fused image comprises:
merging each optimized weight map with the weight map of the reference frame according to the following formula to obtain the fused image:
$$F = \sum_{k=1}^{N} \hat{W}_k f_k$$
where $F$ is the resulting fused image, $k$ is the frame index, $N$ is the total number of frames, $\hat{W}_k$ is the optimized weight map of the $k$-th frame, and $f_k$ is the matrix of the $k$-th frame.
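A direct sketch of this fusion formula, with the per-pixel weight normalization of claim 4 folded in (the `1e-12` guard against all-zero weights is an implementation assumption):

```python
import numpy as np

def fuse_frames(frames, weights):
    """Pixel-wise weighted fusion F = sum_k W_k * f_k. Weight maps
    are normalized to sum to 1 at every pixel before blending;
    works for grayscale (H, W) or color (H, W, 3) frames."""
    weights = [np.asarray(w, dtype=float) for w in weights]
    total = np.sum(weights, axis=0) + 1e-12   # guard against zero total weight
    fused = np.zeros_like(np.asarray(frames[0], dtype=float))
    for f, w in zip(frames, weights):
        f = np.asarray(f, dtype=float)
        wn = w / total
        # broadcast the 2-D weight map over color channels if needed
        fused += (wn[..., None] if f.ndim == 3 else wn) * f
    return fused
```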
7. A multi-exposure video fusion device, wherein the device comprises:
an alignment module, configured to determine a reference frame and at least one frame to be processed in an acquired multi-exposure image sequence, and to align each frame to be processed to the reference frame using a coherency-sensitive hashing algorithm;
a weight map computation module, configured to compute the weight map of each aligned frame to be processed from the matrix of that frame, and to compute the weight map of the reference frame from the matrix of the reference frame;
a fusion module, configured to optimize the weight map of each aligned frame to be processed using the reference frame, and to merge each optimized weight map with the weight map of the reference frame to obtain a fused image.
8. The device according to claim 7, wherein the alignment module comprises:
a scale decomposition unit, configured to perform scale decomposition on each frame to be processed and on the reference frame, to obtain a set of sub-frames to be processed corresponding to each frame to be processed and a set of reference sub-frames corresponding to the reference frame;
a blocking unit, configured to partition into blocks each reference sub-frame in the reference sub-frame set and each sub-frame to be processed in each set of sub-frames to be processed;
an alignment unit, configured to align, using the coherency-sensitive hashing algorithm, each partitioned sub-frame to be processed in each set of sub-frames to be processed to the corresponding partitioned reference sub-frame in the reference sub-frame set;
an optimization unit, configured to correct mismatches in each aligned sub-frame to be processed in each set of sub-frames to be processed using a grayscale mapping function, and to refine the mismatch-correction result using a Poisson video fusion method;
a reconstruction unit, configured to reconstruct the optimized sub-frames to be processed in each set of sub-frames to be processed, to obtain each aligned frame to be processed.
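The claims do not pin the scale decomposition and reconstruction units to a particular transform. As one plausible instantiation, the sketch below uses a simple dyadic average pyramid for grayscale frames: decomposition halves the resolution at each level, and reconstruction upsamples every level back to full resolution and averages. This is an illustrative stand-in, not the patent's specified decomposition.

```python
import numpy as np

def decompose(frame, levels=3):
    """Simple dyadic scale decomposition of a grayscale frame:
    each sub-frame is the previous level at half resolution
    (2x2 block averaging)."""
    subs = [np.asarray(frame, dtype=float)]
    for _ in range(levels - 1):
        f = subs[-1]
        h, w = f.shape[0] // 2 * 2, f.shape[1] // 2 * 2  # crop to even size
        subs.append(f[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return subs

def reconstruct(subs):
    """Rebuild a frame from processed sub-frames by upsampling each
    level to the finest resolution and averaging."""
    h, w = subs[0].shape
    acc = np.zeros((h, w))
    for s in subs:
        ry, rx = h // s.shape[0], w // s.shape[1]
        acc += np.kron(s, np.ones((ry, rx)))[:h, :w]  # nearest-neighbor upsample
    return acc / len(subs)
```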
9. The device according to claim 8, wherein the alignment unit comprises:
a neighborhood image block determination sub-unit, configured to determine an image block to be matched in the currently partitioned sub-frame to be processed of the current set of sub-frames to be processed, and to determine, in the corresponding partitioned reference sub-frame of the reference sub-frame set, a plurality of neighborhood image blocks corresponding to the image block to be matched;
an image block alignment sub-unit, configured to compute, using the coherency-sensitive hashing algorithm, the matching distance between the image block to be matched and each neighborhood image block, and to align the image block to be matched to the neighborhood image block with the smallest matching distance;
a repetition sub-unit, configured to repeat the actions of determining an image block to be matched, determining neighborhood image blocks, computing matching distances and aligning the image block to be matched, until every partitioned sub-frame to be processed in every set of sub-frames to be processed is aligned to the corresponding partitioned reference sub-frame in the reference sub-frame set.
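The matching loop of this sub-unit can be illustrated with a brute-force search: for each block of the frame to be processed, compare it against the neighborhood blocks of the reference frame and keep the one with the smallest matching distance (L2 is used here as an assumed distance). The patent's coherency-sensitive hashing serves to accelerate exactly this search; the exhaustive version below is shown only for clarity and assumes frame dimensions that are multiples of the block size.

```python
import numpy as np

def align_blocks(target, reference, block=4, search=2):
    """Replace each block of `target` with the best-matching block of
    `reference` among offsets within `search` pixels, by exhaustive
    L2 search (a slow stand-in for coherency-sensitive hashing)."""
    h, w = target.shape
    out = np.zeros_like(np.asarray(target, dtype=float))
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            patch = target[by:by + block, bx:bx + block]
            best, best_d = None, np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = reference[y:y + block, x:x + block]
                        d = np.sum((patch - cand) ** 2)   # matching distance
                        if d < best_d:
                            best, best_d = cand, d
            out[by:by + block, bx:bx + block] = best
    return out
```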
10. The device according to claim 7, wherein the weight map computation module comprises:
a feature computation unit, configured to compute the phase congruency feature, local contrast feature and color saturation feature of each aligned frame to be processed, and to compute the phase congruency feature, local contrast feature and color saturation feature of the reference frame;
an initial weight map computation unit, configured to compute the initial weight map of each aligned frame to be processed from the phase congruency, local contrast and color saturation features of that frame, and to compute the initial weight map of the reference frame from the phase congruency, local contrast and color saturation features of the reference frame;
a weight map normalization unit, configured to normalize the initial weight map of each aligned frame to be processed to obtain the weight map of that frame, and to normalize the initial weight map of the reference frame to obtain the weight map of the reference frame.
CN201610587415.2A 2016-07-22 2016-07-22 Multi-exposure video fusion method and device Pending CN106251365A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610587415.2A CN106251365A (en) 2016-07-22 2016-07-22 Multi-exposure video fusion method and device


Publications (1)

Publication Number Publication Date
CN106251365A true CN106251365A (en) 2016-12-21

Family

ID=57603310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610587415.2A Pending CN106251365A (en) Multi-exposure video fusion method and device

Country Status (1)

Country Link
CN (1) CN106251365A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103247036A (en) * 2012-02-10 2013-08-14 株式会社理光 Multiple-exposure image fusion method and device
CN104616273A (en) * 2015-01-26 2015-05-13 电子科技大学 Multi-exposure image fusion method based on Laplacian pyramid decomposition
CN104835130A (en) * 2015-04-17 2015-08-12 北京联合大学 Multi-exposure image fusion method
CN104881854A (en) * 2015-05-20 2015-09-02 天津大学 High-dynamic-range image fusion method based on gradient and brightness information


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIANG XU ET AL.: "Feature-based multiexposure image-sequence fusion with guided filter and image alignment", Journal of Electronic Imaging *
XU Liang: "Research on cross-scale analysis and fusion of multi-sensor motion images" (in Chinese), China Doctoral Dissertations Full-text Database *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133617B * 2017-04-21 2021-09-03 Imaging-free target authentication system and method based on computational correlation imaging
CN107133617A * 2017-04-21 2017-09-05 Imaging-free target authentication system and authentication method based on computational correlation imaging
CN107945148A * 2017-12-15 2018-04-20 Multi-exposure image fusion method based on MRF region selection
CN107945148B * 2017-12-15 2021-06-01 Multi-exposure image fusion method based on MRF (Markov random field) region selection
CN109978774B * 2017-12-27 2021-06-18 Denoising fusion method and device for multi-frame continuous equal-exposure images
CN109978774A * 2017-12-27 2019-07-05 Denoising fusion method and device for multi-frame continuous equal-exposure images
CN108288253A * 2018-01-08 2018-07-17 HDR image generation method and device
CN108492328B * 2018-03-23 2021-02-26 Video inter-frame target matching method and device and implementation device
CN108492328A * 2018-03-23 2018-09-04 Video inter-frame target matching method and device and implementation device
CN108596855A * 2018-04-28 2018-09-28 Video image quality enhancement method and device
CN110545428A * 2018-05-28 2019-12-06 Motion estimation method and device, server and computer-readable storage medium
CN110545428B * 2018-05-28 2024-02-23 Motion estimation method and device, server and computer-readable storage medium
CN108876740B * 2018-06-21 2022-04-12 Multi-exposure registration method based on ghost removal
CN108876740A * 2018-06-21 2018-11-23 Multi-exposure registration method based on ghost removal
CN109688322B * 2018-11-26 2021-04-02 Method and device for generating high dynamic range image and mobile terminal
CN109688322A * 2018-11-26 2019-04-26 Method and device for generating high dynamic range image and mobile terminal
CN110310251B * 2019-07-03 2021-10-29 Image processing method and device
CN110310251A * 2019-07-03 2019-10-08 Image processing method and device
CN112418090A * 2020-11-23 2021-02-26 Real-time detection method for dim small infrared targets under a sky background
CN112468737A * 2020-11-25 2021-03-09 Method and device for processing exposure weight matrix of automatic exposure area
CN112468737B * 2020-11-25 2022-04-29 Method and device for processing exposure weight matrix of automatic exposure area
CN112634187A * 2021-01-05 2021-04-09 Wide dynamic fusion algorithm based on multiple weight mapping
CN112634187B * 2021-01-05 2022-11-18 Wide dynamic fusion algorithm based on multiple weight mapping
WO2023016044A1 * 2021-08-12 2023-02-16 Video processing method and apparatus, electronic device, and storage medium


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20161221