CN105741255A - Image fusion method and device - Google Patents


Info

Publication number
CN105741255A
CN105741255A
Authority
CN
China
Prior art keywords
image
pixel
fused
value
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410742736.6A
Other languages
Chinese (zh)
Other versions
CN105741255B (en)
Inventor
陈敏杰
郭春磊
潘博阳
刘阳
林福辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spreadtrum Communications Shanghai Co Ltd
Original Assignee
Spreadtrum Communications Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spreadtrum Communications Shanghai Co Ltd
Priority to CN201410742736.6A
Publication of CN105741255A
Application granted
Publication of CN105741255B
Active legal status
Anticipated expiration legal status

Landscapes

  • Image Processing (AREA)

Abstract

Provided are an image fusion method and device for fusing images captured by a mobile terminal. The method comprises: obtaining a first image, where the first image is the image to be fused at each level of a multi-level pyramid, and the multi-level pyramid is the set of multi-level images obtained by performing pyramid decomposition on the images captured by the mobile terminal; obtaining a deformation parameter corresponding to the first image according to an image alignment algorithm; performing a deformation operation on the first image according to the deformation parameter to obtain a second image corresponding to the first image; and performing weighted fusion on the second images. The method obtains the fused result of a multi-level pyramid simply and quickly, achieves accurate and effective fusion quality, and yields a good fusion effect.

Description

Image fusion method and device
Technical field
The present invention relates to the technical field of image processing, and in particular to an image fusion method and an image fusion device.
Background technology
Image fusion is an information-integration technique whose main purpose is to improve the reliability of an image by processing redundant data across multiple images, and to improve its clarity by processing complementary information across those images. In recent years, image fusion has become an important and useful technique in the image processing field.
Image fusion is widely applied in many fields, since it can effectively improve image quality. When an imaging device captures an image, limitations of the lens's imaging capability mean that a single frame may fail to achieve the desired result under non-ideal imaging conditions. In the prior art, multiple frames can therefore be fused by a multi-frame fusion algorithm to obtain a better result. When fusing multiple frames, the fusion can be based on the translational motion-vector features of the images; however, the differences between consecutive frames shot by a mobile terminal usually cannot be compensated by translation alone. Fusion can also be performed by image feature matching, but an accurate fusion result is hard to obtain when the images lack typical matching feature points. Other methods based on super-resolution reconstruction have very high complexity and are not suitable for processing on a mobile terminal.
In the prior art, it is therefore difficult to fuse images captured by a mobile terminal in a way that is simple, accurate and effective.
Summary of the invention
The problem addressed by the present invention is the difficulty of fusing images simply, accurately and effectively.
To solve the above problem, the technical solution of the present invention provides an image fusion method for fusing images captured by a mobile terminal. The method includes:
obtaining a first image, where the first image is the image to be fused at each level of a multi-level pyramid, and the multi-level pyramid is the set of multi-level images obtained by performing pyramid decomposition on the images captured by the mobile terminal;
obtaining a deformation parameter corresponding to the first image according to an image alignment algorithm;
performing a deformation operation on the first image according to the deformation parameter to obtain a second image corresponding to the first image;
performing weighted fusion on the second image.
Optionally, the image alignment algorithm is the Lucas-Kanade inverse compositional image alignment algorithm.
Optionally, the process of obtaining the deformation parameter corresponding to the first image includes:
warping the first image so that $\sum_x [I(W(x;p)) - T(W(x;\Delta p))]^2$ is minimized, where x denotes a pixel in the first image, I(W(x;p)) denotes the first image to be fused after warping by W(x;p), W(x;p) is a warp linear in the parameter p, p is the deformation parameter corresponding to the first image, Δp is the increment of p, and T(W(x;Δp)) denotes the reference image after warping by W(x;Δp);
obtaining the deformation parameter from the result of the warping.
Optionally, the Y-channel values of the first image and of the reference image are used in the warping of the first image.
Optionally, the warp is either a first transformation or an affine transformation, where the first transformation includes at least one of translation, scaling and rotation.
Optionally, the method further includes: before warping the first image, weighting the pixels in the first image.
Optionally, the process of weighting the pixels in the first image includes:
weighting through a formula ρ(z), where z denotes the difference between the pixel value at a pixel position in the first image and the pixel value at the corresponding position in the reference image, and ρ(z) denotes the weight of the pixel at that position; the range of the horizontal pixel coordinate is defined in terms of w, and the range of the vertical pixel coordinate in terms of h, where w is the width of the first image and h is its height.
Optionally, the method further includes: the deformation parameter of the first image corresponding to the current level of the multi-level pyramid is used to determine the deformation parameter of the first image corresponding to the next level of the multi-level pyramid.
Optionally, the method further includes: where the deformation parameters of the first image corresponding to the current level of the multi-level pyramid are p_1 to p_6, p_{3,new} = 2p_3 and p_{6,new} = 2p_6 are obtained, and p_1, p_2, p_{3,new}, p_4, p_5 and p_{6,new} are taken as the deformation parameters of the first image corresponding to the next level of the multi-level pyramid.
Optionally, the process of performing weighted fusion on the second image includes:
determining the fusion weight of each pixel of the second image through the formula

$$R(x) = \begin{cases} W_H, & d(x) < v_1 \\ W_L, & d(x) > v_2 \\ \dfrac{v_2 - d(x)}{v_2 - v_1}(W_H - W_L) + W_L, & v_1 \le d(x) \le v_2 \end{cases}$$

where R(x) denotes the weight of the pixel x in the second image, $d(x) = |I(W(x;p)) - T(x)| \otimes K$, I(W(x;p)) denotes the value of the pixel in the second image, T(x) is the value of the pixel at the corresponding position in the reference image, |I(W(x;p)) − T(x)| denotes the absolute difference between I(W(x;p)) and T(x), ⊗ denotes convolution, K is a convolution kernel, v_1 and v_2 are constants, and W_H and W_L are constants;
summing and averaging, over the second images, the products of the gray value at a pixel position and the weight of the pixel at that position, and taking the resulting mean as the gray value at that pixel position in the fused image.
Optionally, the values of the pixel in the second image and of the pixel at the corresponding position in the reference image used in the calculation of d(x) are YUV values.
Optionally, the method further includes: before obtaining the first image, removing blurred images from the images to be fused.
Optionally, the process of removing blurred images from the images to be fused includes:
obtaining the Y-channel gradient value of each image to be fused;
when the gradient value of an image under detection satisfies G < k × G_max, determining that the image under detection is a blurred image, where G is the gradient value of the image under detection, k is a proportionality coefficient, and G_max is the greatest of the Y-channel gradient values of the images to be fused;
removing the blurred image from the images to be fused.
Optionally, the method further includes: after the weighted fusion of the second image, sharpening the fused image.
To solve the above technical problem, the technical solution of the present invention also provides an image fusion device for fusing images captured by a mobile terminal. The device includes:
an acquiring unit, configured to obtain a first image, the first image being the image to be fused at each level of a multi-level pyramid, and the multi-level pyramid being the set of multi-level images obtained by performing pyramid decomposition on the images captured by the mobile terminal;
a parameter determination unit, configured to obtain the deformation parameter corresponding to the first image according to an image alignment algorithm;
a deformation unit, configured to perform a deformation operation on the first image according to the deformation parameter to obtain the second image corresponding to the first image;
a fusion unit, configured to perform weighted fusion on the second image.
Compared with the prior art, the technical solution of the present invention has the following advantages:
After the mobile terminal captures images, the captured images are decomposed into a multi-level pyramid; each level is then taken as a first image to be fused, a deformation parameter corresponding to the first image is obtained according to an image alignment algorithm, the first image is deformed according to the deformation parameter to obtain a second image, and finally the second images are fused with weighting. For the first image at each pyramid level a corresponding second image is obtained, and the final fused image results from the weighted fusion of all second images. The method obtains the fused multi-level image simply and quickly, achieves accurate and effective fusion quality, and yields a good fusion effect.
Brief description of the drawings
Fig. 1 is a flow diagram of the image fusion method provided by the technical solution of the present invention;
Fig. 2 is a flow diagram of the image fusion method provided by an embodiment of the present invention;
Fig. 3 is a structural diagram of the image fusion device provided by an embodiment of the present invention.
Detailed description of the invention
In the prior art, it is difficult to fuse images captured by a mobile terminal in a way that is simple, accurate and effective.
To solve this problem, the technical solution of the present invention provides an image fusion method.
Fig. 1 is a flow diagram of the image fusion method provided by the technical solution of the present invention. As shown in Fig. 1, step S1 is performed first: a first image is obtained, the first image being the image to be fused at each level of the multi-level pyramid.
The images captured by the mobile terminal are first decomposed into a multi-level pyramid for fusion; in this specification, the image at each pyramid level is called the first image corresponding to that level.
Step S2 is performed: a deformation parameter corresponding to the first image is obtained according to an image alignment algorithm.
The deformation parameter corresponding to each first image is obtained according to an image alignment algorithm; the algorithm may be the Lucas-Kanade inverse compositional image alignment algorithm.
Step S3 is performed: the first image is deformed according to the deformation parameter to obtain the second image corresponding to the first image.
According to the deformation parameter of the corresponding first image, a deformation operation is applied to the first image; the deformed image is called the second image corresponding to that first image, and a corresponding second image is obtained for every first image.
Step S4 is performed: weighted fusion is applied to the second images.
After all second images are obtained, each second image is weighted, and the weighted results of all images are then fused to obtain the final fusion result.
The method obtains the fused multi-level image simply and quickly, achieves accurate and effective fusion quality, and yields a good fusion effect.
To make the above purposes, features and advantages of the present invention clearer and easier to understand, specific embodiments of the invention are described in detail below with reference to the accompanying drawings.
In this embodiment, the images to be fused are assumed to be in YUV format.
In this embodiment, after the multi-level images to be fused are obtained, blurred images are first removed; image deformation is then performed based on the Lucas-Kanade inverse compositional image alignment algorithm; weighted fusion is then applied to the deformed images to fuse them; and after the fused image is obtained, subsequent operations such as sharpening can further improve the quality of the result.
Fig. 2 is a flow diagram of the image fusion method provided by the embodiment of the present invention.
As shown in Fig. 2, step S201 is performed first: the images captured by the mobile terminal are decomposed into a multi-level pyramid.
Before the fusion algorithm is applied, each image is first decomposed into a pyramid; prior-art methods such as Gaussian pyramid decomposition and Laplacian pyramid decomposition can realize this. In pyramid decomposition, the image to be decomposed serves as the bottom (level 0) of the pyramid; the bottom image is then filtered, down-sampled and so on to obtain the second-level image, and so forth, building the multi-level pyramid.
Pyramid decomposition is well known to those skilled in the art and is not described further here. A fusion algorithm based on pyramid decomposition performs the fusion separately at different scales, different spatial resolutions and different decomposition levels, and obtains a better fusion effect than a simple single-scale fusion algorithm.
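The decomposition step described above can be sketched in a few lines. This is an illustrative reduction, not the patent's implementation: the 5-tap binomial kernel, the reflection padding and the level count are assumptions.

```python
import numpy as np

def gaussian_pyramid(image, levels=3):
    """Build a multi-level pyramid: level 0 is the input image, and each
    higher level is obtained by blurring and down-sampling by a factor of 2."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # 5-tap binomial filter
    pyramid = [image.astype(np.float64)]
    for _ in range(levels - 1):
        img = pyramid[-1]
        # separable blur: filter rows, then columns (borders padded by reflection)
        padded = np.pad(img, 2, mode="reflect")
        blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
        blurred = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, blurred)
        pyramid.append(blurred[::2, ::2])  # down-sample by 2 in each direction
    return pyramid
```

In the embodiment the Y channel of each YUV frame would be decomposed this way before alignment.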
Step S202 is performed: blurred images are filtered out of the images to be fused.
After the multi-level pyramid is obtained, the multi-level pyramid images serve as the images to be fused, and blurred images are filtered out of them.
Specifically, for input images in YUV format, the gradient value of the Y channel of each image is computed by formula (1).
$$G = \sum \left( \left( \frac{\partial I_Y}{\partial x} \right)^2 + \left( \frac{\partial I_Y}{\partial y} \right)^2 \right) \qquad (1)$$
where G is the gradient value of the image under detection and I_Y is the Y-channel value of a pixel in that image.
The Y-channel gradient value of each image to be fused is obtained through formula (1); the gradient values of all images under detection are denoted G_1, G_2, …, G_n, where n is the number of images under detection.
The greatest gradient value is obtained by formula (2).
$$G_{max} = \max(G_1, G_2, \ldots, G_n) \qquad (2)$$
When the gradient value of an image under detection satisfies G < k × G_max, the image under detection is determined to be a blurred image, where k is a proportionality coefficient and G_max is the greatest of the Y-channel gradient values of the images to be fused. The range of k can be determined from experimental data and the like; in this embodiment, k may lie between 0.8 and 0.9.
After a blurred image is detected, it is removed from the images to be fused. The multi-level images remaining after the blurred images are filtered out serve as the images to be fused; in this embodiment, these remaining images are called the first images.
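The blur-filtering rule of formulas (1) and (2) can be sketched as follows; the finite-difference gradient operator and the default k = 0.85 (inside the 0.8–0.9 range stated above) are assumptions for illustration.

```python
import numpy as np

def filter_blurred(y_channels, k=0.85):
    """Keep only the sharp images: an image whose Y-channel gradient energy G
    falls below k * G_max is treated as blurred (formulas (1) and (2))."""
    grads = []
    for y in y_channels:
        gx = np.diff(y.astype(np.float64), axis=1)  # finite difference for dI_Y/dx
        gy = np.diff(y.astype(np.float64), axis=0)  # finite difference for dI_Y/dy
        grads.append((gx ** 2).sum() + (gy ** 2).sum())
    g_max = max(grads)
    return [y for y, g in zip(y_channels, grads) if g >= k * g_max]
```

A sharp frame has high gradient energy; a defocused frame of the same scene scores far lower and is dropped.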
Step S203 is performed: the deformation parameter corresponding to each first image is obtained based on the Lucas-Kanade inverse compositional image alignment algorithm.
The first image at each scale is warped. In the warping process, the Y-channel image of a well-sharpened image is first taken as the reference image T(x), and each first image I_1, I_2, …, I_m (m being the number of images to be fused) is aligned so as to minimize the alignment error, i.e. so that $\sum_x [I(W(x;p)) - T(W(x;\Delta p))]^2$ is minimized, where x denotes a pixel in the first image, I(W(x;p)) denotes the first image to be fused after warping by W(x;p), W(x;p) is a warp linear in the parameter p, p is the deformation parameter of the corresponding first image, Δp is the increment of p, and T(W(x;Δp)) denotes the reference image after warping by W(x;Δp).
The Y-channel values of the first image and of the reference image are used in the above warping of the first image.
The warp W(x;p) may be realized as a translation-rotation-scale transformation or as an affine transformation; W(x;p) is called the warp function.
If W(x;p) is an affine transformation, it can be expressed by formula (3).
$$W(x;p) = \begin{pmatrix} p_1 & p_2 & p_3 \\ p_4 & p_5 & p_6 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \qquad (3)$$
For the affine transformation, ∂W/∂p is obtained by formula (4).
$$\frac{\partial W}{\partial p} = \begin{pmatrix} x & y & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & x & y & 1 \end{pmatrix} \qquad (4)$$
If W(x;p) is a translation-rotation-scale transformation, it can be expressed by formula (5).
$$W(x;p) = \begin{pmatrix} p_1 & p_2 & p_3 \\ -p_2 & p_1 & p_6 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \qquad (5)$$
For this transformation, ∂W/∂p is obtained by formula (6).
$$\frac{\partial W}{\partial p} = \begin{pmatrix} x & y & 1 & 0 \\ y & -x & 0 & 1 \end{pmatrix} \qquad (6)$$
Here p denotes the warp parameters of W(x;p); specifically, the warp parameters are p_1 to p_6.
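Formulas (3), (4) and (6) can be written out and checked numerically with a small sketch; the function names below are illustrative only.

```python
import numpy as np

def affine_jacobian(x, y):
    """dW/dp for the affine warp of formulas (3)-(4), parameters (p1..p6)."""
    return np.array([[x, y, 1.0, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, x, y, 1.0]])

def similarity_jacobian(x, y):
    """dW/dp for the translation-rotation-scale warp of formulas (5)-(6),
    whose free parameters are (p1, p2, p3, p6)."""
    return np.array([[x, y, 1.0, 0.0],
                     [y, -x, 0.0, 1.0]])

def warp_affine(p, x, y):
    """Apply the affine warp W(x;p) of formula (3) to a point (x, y)."""
    m = np.array([[p[0], p[1], p[2]],
                  [p[3], p[4], p[5]]])
    return m @ np.array([x, y, 1.0])
```

Differentiating `warp_affine` numerically in each parameter reproduces the columns of `affine_jacobian`, which is a quick consistency check on the matrix layout.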
The Hessian matrix is obtained by formula (7).
$$H = \sum \left[ \nabla T \frac{\partial W}{\partial p} \right]^T \left[ \nabla T \frac{\partial W}{\partial p} \right] \qquad (7)$$
Then, starting from the initialized deformation parameter p, Δp is estimated by gradient descent; specifically, Δp is obtained by formula (8).
$$\Delta p = H^{-1} \sum_x \left[ \nabla T \frac{\partial W}{\partial p} \right]^T \left[ T(x) - I(W(x;p)) \right] \qquad (8)$$
After the increment Δp of p is obtained, the deformation parameter p is updated through formula (9).
The deformation parameter of the corresponding first image can thus be obtained from formulas (3) to (9).
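Since formula (9) is referenced but not reproduced in this text, the following sketch uses the standard inverse compositional parameter update, restricted for brevity to a pure-translation warp W(x;p) = x + p rather than the affine or similarity warps above; the bilinear sampling and the stopping tolerance are likewise assumptions.

```python
import numpy as np

def bilinear_sample(img, xs, ys):
    """Bilinear interpolation of img at float coordinates (sampling scheme
    is an assumption; the patent does not specify one)."""
    x0 = np.clip(np.floor(xs).astype(int), 0, img.shape[1] - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, img.shape[0] - 2)
    fx = np.clip(xs - x0, 0.0, 1.0)
    fy = np.clip(ys - y0, 0.0, 1.0)
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x0 + 1]
            + (1 - fx) * fy * img[y0 + 1, x0] + fx * fy * img[y0 + 1, x0 + 1])

def ic_lk_translation(template, image, iters=30):
    """Inverse compositional Lucas-Kanade for a pure-translation warp,
    a simplified stand-in for formulas (7)-(9): the template gradient and
    the Hessian are precomputed once, then only Delta-p is iterated."""
    t = template.astype(np.float64)
    gy, gx = np.gradient(t)
    # Hessian per formula (7); dW/dp is the 2x2 identity for translation
    hess = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                     [np.sum(gx * gy), np.sum(gy * gy)]])
    hess_inv = np.linalg.inv(hess)
    hh, ww = t.shape
    ys, xs = np.mgrid[0:hh, 0:ww].astype(np.float64)
    p = np.zeros(2)  # (dx, dy)
    for _ in range(iters):
        err = t - bilinear_sample(image, xs + p[0], ys + p[1])  # T(x) - I(W(x;p))
        dp = hess_inv @ np.array([np.sum(gx * err), np.sum(gy * err)])  # formula (8)
        p = p + dp  # translation-only update standing in for formula (9)
        if np.hypot(dp[0], dp[1]) < 1e-4:
            break
    return p
```

On a smooth test pattern shifted by a couple of pixels, the loop recovers the shift to sub-pixel accuracy within a handful of iterations.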
It should be noted that when Δp is estimated by formula (8) above, the first image may first be weighted. For example, an M-estimator can be used to give different pixels different weights; M-estimators are a general class of estimation functions.
Specifically, the weighting can be performed through a formula ρ(z), where z denotes the difference between the pixel value at a pixel position in the first image and the pixel value at the corresponding position in the reference image, and ρ(z) denotes the weight of the pixel at that position.
To effectively reduce the computation bit-width and to simplify the updating of the deformation parameters across scales, for a first image of width w and height h, the horizontal pixel coordinate can be restricted to a range defined in terms of w and the vertical pixel coordinate to a range defined in terms of h.
It should also be noted that the deformation parameter of the first image corresponding to the current level of the multi-level pyramid is used to determine the deformation parameter of the first image corresponding to the next level.
For example, if the deformation parameters of the first image at the current level are p_1 to p_6, they can be updated and used as the initial parameters of the first image at the next level. Specifically, p_{3,new} = 2p_3 and p_{6,new} = 2p_6 are computed, and p_1, p_2, p_{3,new}, p_4, p_5 and p_{6,new} are taken as the initial deformation parameters of the first image at the next level; the deformation parameters of that level are then obtained through formulas (3) to (9) above.
In this embodiment, the multi-scale pyramid processing effectively saves computation in the parameter estimation: once the deformation parameters at scale k have been estimated, they are updated and serve as the initial estimate for the image at the next level.
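The cross-level parameter update just described amounts to doubling the translation components when moving to the finer level; a one-function sketch (name is illustrative):

```python
def propagate_params(p):
    """Propagate deformation parameters one pyramid level down (to the finer
    level): the translation components p3 and p6 are doubled, the linear
    components are kept, with p = (p1..p6) ordered as in formula (3)."""
    p1, p2, p3, p4, p5, p6 = p
    return (p1, p2, 2.0 * p3, p4, p5, 2.0 * p6)
```

The doubling reflects the factor-2 change in pixel coordinates between adjacent pyramid levels.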
Step S204 is performed: the first image is deformed according to the deformation parameter of the corresponding first image, generating the second image.
For each first image, the corresponding deformation parameter is obtained, and the deformation operation on that first image is realized through the warp function of the corresponding affine transformation or translation-rotation-scale transformation. For every first image a corresponding deformed image is obtained; here, the image resulting from the deformation of a first image is called the second image corresponding to that first image.
Step S205 is performed: each second image is weighted.
Considering the limited computing capability of a mobile terminal, this embodiment applies direct spatial-domain weighted fusion to the transformed second images to be fused. To suppress the artifacts that direct fusion may produce when objects move, the fusion weight of each pixel in the second image is computed by formula (10).
$$R(x) = \begin{cases} W_H, & d(x) < v_1 \\ W_L, & d(x) > v_2 \\ \dfrac{v_2 - d(x)}{v_2 - v_1}(W_H - W_L) + W_L, & v_1 \le d(x) \le v_2 \end{cases} \qquad (10)$$
where R(x) denotes the weight of the pixel x in the second image, I(W(x;p)) denotes the value of the pixel in the second image, T(x) is the value of the pixel at the corresponding position in the reference image, |I(W(x;p)) − T(x)| denotes the absolute difference between I(W(x;p)) and T(x), ⊗ denotes convolution, K is the convolution kernel, v_1 and v_2 are constants, and W_H and W_L are constants. W_H is the highest weight threshold and W_L the lowest; for instance, W_H may be set to 1 and W_L to 0.
d(x) denotes the convolution of a smoothing kernel with the absolute difference between the pixel value at a position in the second image and the pixel value at the corresponding position in the reference image, i.e. $d(x) = |I(W(x;p)) - T(x)| \otimes K$. The size of the smoothing kernel can be set according to the processing capability of the mobile terminal and the particular circumstances; in this embodiment it may be 11 × 11. The convolution effectively improves robustness to image noise during fusion. The pixel values of the second image and of the corresponding position in the reference image used in the calculation of d(x) are YUV values.
The constants v_1, v_2, W_H and W_L can be set according to the processing capability of the mobile terminal, experimental data and the like.
The fusion weight of every pixel in each second image can be obtained through formula (10).
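A sketch of formula (10) follows; the box smoothing kernel standing in for K and the default v1/v2 values are assumptions for illustration (the embodiment only fixes the 11 × 11 kernel size and W_H = 1, W_L = 0).

```python
import numpy as np

def fusion_weights(second, reference, v1=5.0, v2=30.0, w_h=1.0, w_l=0.0, ksize=11):
    """Per-pixel fusion weight R(x) per formula (10), with
    d(x) = |I(W(x;p)) - T(x)| convolved with an averaging kernel K."""
    diff = np.abs(second.astype(np.float64) - reference.astype(np.float64))
    pad = ksize // 2
    padded = np.pad(diff, pad, mode="reflect")
    kernel = np.ones(ksize) / ksize  # separable ksize x ksize box smoothing kernel
    d = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    d = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, d)
    # piecewise-linear weight: high where the images agree, low where they differ
    r = (v2 - d) / (v2 - v1) * (w_h - w_l) + w_l
    return np.clip(r, w_l, w_h)  # the clip realises the d < v1 and d > v2 branches
```

Pixels where a moving object makes the second image disagree with the reference receive weight near W_L and so contribute little to the fused result.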
Step S206 is performed: the pixel values of the final fused image are obtained according to the weights of the pixels in the second images.
The products of the gray value at a pixel position in each second image and the weight of the pixel at that position are summed and averaged, and the resulting mean is taken as the gray value at that pixel position in the fused image. The pixel value of every pixel of the final fused image, i.e. the gray value of each of its pixels, is thereby obtained.
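The sum-average combination just described might be sketched as follows; note that a weight-normalized mean (dividing by the sum of weights) is a common alternative, but the text specifies a plain average of the weight-gray products.

```python
import numpy as np

def fuse(seconds, weights):
    """Combine the second images: at each pixel position, the products of
    gray value and fusion weight are summed over all second images and the
    mean is taken as the fused gray value."""
    acc = np.zeros_like(seconds[0], dtype=np.float64)
    for img, w in zip(seconds, weights):
        acc += w * img.astype(np.float64)
    return acc / len(seconds)
```

With unit weights this reduces to a plain frame average; where a frame's weight map is zero, that frame simply drops out of the sum.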
Step S207 is performed: the fused image is sharpened.
To improve the image quality, after the fused image is obtained, operations such as sharpening and deblurring can further be applied to it, realising a further enhancement of the fused image and effectively improving its quality.
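The embodiment leaves the sharpening method open; unsharp masking is one common choice and is shown here purely as an illustration, with the small blur kernel and the `amount` factor as assumptions.

```python
import numpy as np

def unsharp_mask(image, amount=0.5):
    """Sharpen by adding back a fraction of the high-pass detail
    (image minus a slightly blurred copy of itself)."""
    kernel = np.array([1.0, 2.0, 1.0]) / 4.0  # small separable blur
    padded = np.pad(image.astype(np.float64), 1, mode="reflect")
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, blurred)
    return image + amount * (image - blurred)
```

Flat regions are left untouched while edges gain contrast, which is the intended post-fusion enhancement.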
The image fusion method provided by this embodiment obtains the fused multi-level image simply and quickly, achieves accurate and effective fusion quality, and yields a good fusion effect.
Corresponding to the above image fusion method, an embodiment of the present invention also provides an image fusion device. Fig. 3 is a structural diagram of the image fusion device provided by this embodiment.
As shown in Fig. 3, the device includes: an acquiring unit U11, a parameter determination unit U12, a deformation unit U13 and a fusion unit U14.
The acquiring unit U11 is configured to obtain a first image, the first image being the image to be fused at each level of the multi-level pyramid, and the multi-level pyramid being the set of multi-level images obtained by performing pyramid decomposition on the images captured by the mobile terminal.
The parameter determination unit U12 is configured to obtain the deformation parameter corresponding to the first image according to an image alignment algorithm.
The deformation unit U13 is configured to deform the first image according to the deformation parameter to obtain the second image corresponding to the first image.
The fusion unit U14 is configured to perform weighted fusion on the second image.
The parameter determination unit U12 includes: a change unit U121 and a parameter acquiring unit U122.
The change unit U121 is configured to warp the first image so that $\sum_x [I(W(x;p)) - T(W(x;\Delta p))]^2$ is minimized, where x denotes a pixel in the first image, I(W(x;p)) denotes the first image to be fused after warping by W(x;p), W(x;p) is a warp linear in the parameter p, p is the deformation parameter corresponding to the first image, Δp is the increment of p, and T(W(x;Δp)) denotes the reference image after warping by W(x;Δp).
The parameter acquiring unit U122 is configured to obtain the deformation parameter from the result of the warping.
The parameter determination unit U12 further includes: an updating unit U123, configured to use the deformation parameter of the first image corresponding to the current level of the multi-level pyramid to determine the deformation parameter of the first image corresponding to the next level.
The deformation unit U13 further includes: a weighting unit U131, configured to weight the pixels in the first image before the first image is warped.
The fusion unit U14 includes: a weight determining unit U141 and a sum-averaging unit U142.
The weight determining unit U141 is configured to use the formula:
$$R(x) = \begin{cases} W_H, & d(x) < v_1 \\ W_L, & d(x) > v_2 \\ \dfrac{v_2 - d(x)}{v_2 - v_1}(W_H - W_L) + W_L, & v_1 \le d(x) \le v_2 \end{cases}$$
to determine the fusion weight of each pixel of the second image, where R(x) denotes the weight of the pixel x in the second image, $d(x) = |I(W(x;p)) - T(x)| \otimes K$, I(W(x;p)) denotes the value of the pixel in the second image, T(x) is the value of the pixel at the corresponding position in the reference image, ⊗ denotes convolution, K is a convolution kernel, v_1 and v_2 are constants, and W_H and W_L are constants.
The sum-averaging unit U142 is configured to sum and average, over the second images, the products of the gray value at a pixel position and the weight of the pixel at that position, and to take the resulting mean as the gray value at that pixel position in the fused image.
Described device also includes: removal unit U15, for, before obtaining the first image, removing the broad image in image to be fused.
Described removal unit U15 includes: gradient acquiring unit U151, judging unit U152 and filter unit U153.
Described gradient acquiring unit U151, for obtaining the Y access ladder angle value of each image to be fused.
Described judging unit U152, the Grad for the image to be fused in detection meets: G < k × GmaxTime, it is determined that the image to be fused of described detection is broad image, and wherein, G is the Grad of the image to be fused of described detection, and k is proportionality coefficient, GmaxFor the greatest gradient value in the Y access ladder angle value of described each image to be fused.
The filtering unit U153 is configured to remove the blurred images from the images to be fused.
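The blur-removal rule above can be illustrated as follows; the specific gradient metric and the value of the proportionality coefficient k are assumptions, since the patent leaves both unspecified:

```python
import numpy as np

def y_gradient_value(y):
    # One plausible scalar gradient measure for the Y (luma) channel:
    # mean magnitude of horizontal plus vertical finite differences.
    y = y.astype(float)
    gx = np.abs(np.diff(y, axis=1)).mean()
    gy = np.abs(np.diff(y, axis=0)).mean()
    return gx + gy

def remove_blurred(y_channels, k=0.8):
    # Apply the rule G < k * G_max  =>  blurred. k = 0.8 is an assumed
    # value for the proportionality coefficient.
    grads = [y_gradient_value(y) for y in y_channels]
    g_max = max(grads)
    return [y for y, g in zip(y_channels, grads) if g >= k * g_max]
```

Because the threshold is relative to the sharpest frame in the burst, the rule adapts to scene content: a low-contrast scene is not rejected wholesale, only frames noticeably softer than the best one are dropped.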
The device further includes: a sharpening unit U16, configured to sharpen the fused image after the weighted fusion processing of the second images.
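The patent does not name a particular sharpening method; unsharp masking is one common post-fusion choice and is sketched here with an assumed strength parameter:

```python
import numpy as np

def unsharp_mask(img, amount=0.5):
    # Unsharp masking: add back a scaled high-frequency residual
    # (original minus blur). The blur here is a 3x3 box filter built
    # with edge padding and plain NumPy slicing; `amount` is an assumed
    # strength parameter.
    f = img.astype(float)
    p = np.pad(f, 1, mode='edge')
    h, w = f.shape
    blur = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return f + amount * (f - blur)
```

Sharpening after fusion (rather than per input frame) avoids amplifying noise that the weighted average would otherwise have suppressed.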
Although the present disclosure is described as above, the present invention is not limited thereto. Any person skilled in the art may make various changes and modifications without departing from the spirit and scope of the present invention; therefore, the protection scope of the present invention shall be subject to the scope defined by the claims.

Claims (22)

1. An image fusion method for performing fusion processing on images acquired by a mobile terminal, characterized by comprising:
obtaining a first image, wherein the first image is the image to be fused at each level of a multi-level pyramid image, and the multi-level pyramid image is a multi-level image obtained by performing pyramid decomposition on the images acquired by the mobile terminal;
obtaining a deformation parameter corresponding to the first image according to an image alignment algorithm;
performing a deformation operation on the first image according to the deformation parameter to obtain a second image corresponding to the first image; and
performing weighted fusion processing on the second image.
2. The image fusion method according to claim 1, characterized in that the image alignment algorithm is the Lucas–Kanade inverse compositional image alignment algorithm.
3. The image fusion method according to claim 2, characterized in that the process of obtaining the deformation parameter corresponding to the first image comprises:
applying a warping transformation to the first image so that Σ_x [I(W(x; p)) − T(W(x; Δp))]² is minimized, wherein x represents a pixel in the first image, I(W(x; p)) represents the first image to be fused after being warped by W(x; p), W(x; p) is linear in the parameter p, p is the deformation parameter corresponding to the first image, Δp is the increment of p, and T(W(x; Δp)) represents the reference image after being warped by W(x; Δp); and
obtaining the deformation parameter from the result of the warping transformation.
4. The image fusion method according to claim 3, characterized in that the Y-channel values of the first image and of the reference image are used in the process of applying the warping transformation to the first image.
5. The image fusion method according to claim 3, characterized in that the warping transformation comprises any one of a first transformation and an affine transformation, the first transformation comprising at least one of a translation, a scaling and a rotation transformation.
6. The image fusion method according to claim 3, characterized by further comprising: performing weighting processing on the pixels in the first image before the warping transformation is applied to the first image.
7. The image fusion method according to claim 6, characterized in that the process of performing weighting processing on the pixels in the first image comprises:
weighting by a formula ρ(z), wherein z represents the difference between the pixel value at a pixel position in the first image and the pixel value at the corresponding position in the reference image, and ρ(z) represents the weight of the pixel at that position; the lateral coordinate of the pixel position ranges over an interval determined by w, and the longitudinal coordinate ranges over an interval determined by h, wherein w is the width of the first image and h is the height of the first image.
8. The image fusion method according to claim 1, characterized by further comprising: using the deformation parameter of the first image corresponding to the current level in the multi-level pyramid image to determine the deformation parameter of the first image corresponding to the next level in the multi-level pyramid image.
9. The image fusion method according to claim 8, characterized by further comprising: when the deformation parameters of the first image corresponding to the current level in the multi-level pyramid image are p1 to p6, obtaining p3_new and p6_new by the formulas p3_new = 2·p3 and p6_new = 2·p6, and determining p1, p2, p3_new, p4, p5 and p6_new as the deformation parameters of the first image corresponding to the next level in the multi-level pyramid image.
10. The image fusion method according to claim 1, characterized in that the process of performing weighted fusion processing on the second image comprises:
determining the fusion weight of each pixel of the second image by the formula:

R(x) = W_H, if d(x) < v1
R(x) = W_L, if d(x) > v2
R(x) = ((v2 − d(x)) / (v2 − v1)) · (W_H − W_L) + W_L, if v1 ≤ d(x) ≤ v2

wherein R(x) represents the weight value of the pixel x in the second image, d(x) = |I(W(x; p)) − T(x)| ⊗ K, I(W(x; p)) represents the value of the pixel in the second image, T(x) is the value of the pixel at the corresponding position in the reference image, |I(W(x; p)) − T(x)| denotes the absolute value of the difference between I(W(x; p)) and T(x), ⊗ denotes the convolution operation, K is a convolution kernel, v1 and v2 are constants, and W_H and W_L are constants; and
summing and averaging, over the second images, the product of the gray value at a pixel position in each second image and the weight value of the pixel at that position, and determining the obtained mean value as the gray value at that pixel position in the fused image.
11. The image fusion method according to claim 10, characterized in that the value of the pixel in the second image and the value of the pixel at the corresponding position in the reference image used in the calculation of d(x) are YUV values.
12. The image fusion method according to claim 1, characterized by further comprising: removing blurred images from the images to be fused before the first image is obtained.
13. The image fusion method according to claim 12, characterized in that the process of removing blurred images from the images to be fused comprises:
obtaining the Y-channel gradient value of each image to be fused;
determining that an image to be fused under detection is a blurred image when its gradient value satisfies G < κ × G_max, wherein G is the gradient value of the image to be fused under detection, κ is a proportionality coefficient, and G_max is the greatest gradient value among the Y-channel gradient values of the images to be fused; and
removing the blurred images from the images to be fused.
14. The image fusion method according to claim 1, characterized by further comprising: sharpening the fused image after the weighted fusion processing of the second image.
15. An image fusion device for performing fusion processing on images acquired by a mobile terminal, characterized by comprising:
an acquisition unit, configured to obtain a first image, wherein the first image is the image to be fused at each level of a multi-level pyramid image, and the multi-level pyramid image is a multi-level image obtained by performing pyramid decomposition on the images acquired by the mobile terminal;
a parameter determination unit, configured to obtain a deformation parameter corresponding to the first image according to an image alignment algorithm;
a deformation unit, configured to perform a deformation operation on the first image according to the deformation parameter to obtain a second image corresponding to the first image; and
a fusion unit, configured to perform weighted fusion processing on the second image.
16. The image fusion device according to claim 15, characterized in that the parameter determination unit includes:
a transformation unit, configured to apply a warping transformation to the first image so that Σ_x [I(W(x; p)) − T(W(x; Δp))]² is minimized, wherein x represents a pixel in the first image, I(W(x; p)) represents the first image to be fused after being warped by W(x; p), W(x; p) is linear in the parameter p, p is the deformation parameter corresponding to the first image, Δp is the increment of p, and T(W(x; Δp)) represents the reference image after being warped by W(x; Δp); and
a parameter acquisition unit, configured to obtain the deformation parameter from the result of the warping transformation.
17. The image fusion device according to claim 16, characterized in that the deformation unit further includes: a weighting unit, configured to perform weighting processing on the pixels in the first image before the warping transformation is applied to the first image.
18. The image fusion device according to claim 15, characterized in that the parameter determination unit further includes: an updating unit, configured to determine the deformation parameter of the first image corresponding to the next level in the multi-level pyramid image from the deformation parameter of the first image corresponding to the current level in the multi-level pyramid image.
19. The image fusion device according to claim 15, characterized in that the fusion unit includes:
a weight determination unit, configured to determine the fusion weight of each pixel of the second image by the formula:

R(x) = W_H, if d(x) < v1
R(x) = W_L, if d(x) > v2
R(x) = ((v2 − d(x)) / (v2 − v1)) · (W_H − W_L) + W_L, if v1 ≤ d(x) ≤ v2

wherein R(x) represents the weight value of the pixel x in the second image, d(x) = |I(W(x; p)) − T(x)| ⊗ K, I(W(x; p)) represents the value of the pixel in the second image, T(x) is the value of the pixel at the corresponding position in the reference image, |I(W(x; p)) − T(x)| denotes the absolute value of the difference between I(W(x; p)) and T(x), ⊗ denotes the convolution operation, K is a convolution kernel, v1 and v2 are constants, and W_H and W_L are constants; and
an averaging unit, configured to sum and average, over the second images, the product of the gray value at a pixel position in each second image and the weight value of the pixel at that position, and to determine the obtained mean value as the gray value at that pixel position in the fused image.
20. The image fusion device according to claim 15, characterized by further comprising: a removal unit, configured to remove blurred images from the images to be fused before the first image is obtained.
21. The image fusion device according to claim 20, characterized in that the removal unit includes:
a gradient acquisition unit, configured to obtain the Y-channel gradient value of each image to be fused;
a judging unit, configured to determine that an image to be fused under detection is a blurred image when its gradient value satisfies G < κ × G_max, wherein G is the gradient value of the image to be fused under detection, κ is a proportionality coefficient, and G_max is the greatest gradient value among the Y-channel gradient values of the images to be fused; and
a filtering unit, configured to remove the blurred images from the images to be fused.
22. The image fusion device according to claim 15, characterized by further comprising: a sharpening unit, configured to sharpen the fused image after the weighted fusion processing of the second image.
CN201410742736.6A 2014-12-08 2014-12-08 Image fusion method and device Active CN105741255B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410742736.6A CN105741255B (en) 2014-12-08 2014-12-08 Image fusion method and device


Publications (2)

Publication Number Publication Date
CN105741255A true CN105741255A (en) 2016-07-06
CN105741255B CN105741255B (en) 2018-11-16

Family

ID=56236836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410742736.6A Active CN105741255B (en) 2014-12-08 2014-12-08 Image interfusion method and device

Country Status (1)

Country Link
CN (1) CN105741255B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101504761A (en) * 2009-01-21 2009-08-12 北京中星微电子有限公司 Image splicing method and apparatus
CN102496158A (en) * 2011-11-24 2012-06-13 中兴通讯股份有限公司 Method and device for image information processing
US8885976B1 (en) * 2013-06-20 2014-11-11 Cyberlink Corp. Systems and methods for performing image fusion


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG PEI: "Research and Improvement of the Inverse Compositional Image Alignment Algorithm", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978774A (en) * 2017-12-27 2019-07-05 Denoising and fusion method and device for multiple frames of consecutive equally-exposed images
CN108805916A (en) * 2018-04-27 2018-11-13 Image registration method based on a fractional-order variational optical flow model and dual optimization
CN108805916B (en) * 2018-04-27 2021-06-08 Image registration method based on fractional order variation and fractional optical flow model and dual optimization
CN109509149A (en) * 2018-10-15 2019-03-22 Super-resolution reconstruction method based on dual-channel convolutional network feature fusion
CN112614053A (en) * 2020-12-25 2021-04-06 Method and system for generating multiple images from a single image based on an adversarial neural network

Also Published As

Publication number Publication date
CN105741255B (en) 2018-11-16

Similar Documents

Publication Publication Date Title
Pan et al. Kernel estimation from salient structure for robust motion deblurring
Lau et al. Restoration of atmospheric turbulence-distorted images via RPCA and quasiconformal maps
CN107016642B (en) Method and apparatus for resolution up-scaling of noisy input images
US8917948B2 (en) High-quality denoising of an image sequence
Baghaie et al. Structure tensor based image interpolation method
CN109345474A Blind removal method for image motion blur based on gradient field and deep learning
Zhang et al. Decision-based non-local means filter for removing impulse noise from digital images
CN105335947A (en) Image de-noising method and image de-noising apparatus
WO2011010475A1 (en) Multi-frame approach method and system for image upscaling
CN102708550A (en) Blind deblurring algorithm based on natural image statistic property
Kumar Deblurring of motion blurred images using histogram of oriented gradients and geometric moments
CN106651792B (en) Method and device for removing stripe noise of satellite image
WO2017100971A1 (en) Deblurring method and device for out-of-focus blurred image
Dong et al. Blur kernel estimation via salient edges and low rank prior for blind image deblurring
CN105741255A (en) Image fusion method and device
EP3072104B1 (en) Image de-noising method
Zhang et al. Image denoising and zooming under the linear minimum mean square-error estimation framework
CN105809633A (en) Color noise removing method and device
Rao et al. Robust optical flow estimation via edge preserving filtering
Ji et al. Image recovery via geometrically structured approximation
CN102682437A (en) Image deconvolution method based on total variation regularization
CN105574823A (en) Deblurring method and device for out-of-focus blurred image
CN104778662A (en) Millimeter-wave image enhancing method and system
Zhai et al. Progressive image restoration through hybrid graph Laplacian regularization
CN104166961B Blur kernel estimation method based on low-rank approximation for blind image restoration

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200529

Address after: 361012 unit 05, 8 / F, building D, Xiamen international shipping center, No.97 Xiangyu Road, Xiamen area, China (Fujian) free trade zone, Xiamen City, Fujian Province

Patentee after: Xinxin Finance Leasing (Xiamen) Co.,Ltd.

Address before: Zuchongzhi road in Pudong Zhangjiang hi tech park Shanghai 201203 Lane 2288 Pudong New Area Spreadtrum Center Building 1

Patentee before: SPREADTRUM COMMUNICATIONS (SHANGHAI) Co.,Ltd.

TR01 Transfer of patent right
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20160706

Assignee: SPREADTRUM COMMUNICATIONS (SHANGHAI) Co.,Ltd.

Assignor: Xinxin Finance Leasing (Xiamen) Co.,Ltd.

Contract record no.: X2021110000010

Denomination of invention: Image fusion method and device

Granted publication date: 20181116

License type: Exclusive License

Record date: 20210317

EE01 Entry into force of recordation of patent licensing contract
TR01 Transfer of patent right

Effective date of registration: 20230710

Address after: 201203 Shanghai city Zuchongzhi road Pudong New Area Zhangjiang hi tech park, Spreadtrum Center Building 1, Lane 2288

Patentee after: SPREADTRUM COMMUNICATIONS (SHANGHAI) Co.,Ltd.

Address before: 361012 unit 05, 8 / F, building D, Xiamen international shipping center, 97 Xiangyu Road, Xiamen area, China (Fujian) pilot Free Trade Zone, Xiamen City, Fujian Province

Patentee before: Xinxin Finance Leasing (Xiamen) Co.,Ltd.

TR01 Transfer of patent right