CN103310486B - Method for reconstructing atmospheric-turbulence-degraded images - Google Patents


Info

Publication number
CN103310486B
CN103310486B (application CN201310219440.1A)
Authority
CN
China
Prior art keywords
image
vector
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310219440.1A
Other languages
Chinese (zh)
Other versions
CN103310486A (en)
Inventor
Zhang Yanning (张艳宁)
Gong Dong (巩东)
Sun Jinqiu (孙瑾秋)
Li Haisen (李海森)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201310219440.1A priority Critical patent/CN103310486B/en
Publication of CN103310486A publication Critical patent/CN103310486A/en
Application granted granted Critical
Publication of CN103310486B publication Critical patent/CN103310486B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a method for reconstructing atmospheric-turbulence-degraded images, which addresses the technical problem that images produced by existing turbulence-image reconstruction and restoration methods have poor sharpness. The technical scheme first performs multi-frame registration to remove geometric distortion, then reconstructs a diffraction-blurred image by space-time nearest-neighbor combination, and finally removes the diffraction blur with globally consistent blind deconvolution. The method fully accounts for the influence of registration error and of the artifacts introduced by registration interpolation, and exploits the redundant structural information present in the spatial and temporal dimensions for reconstructing the observed object. It models the statistical dependence between the pixels of spatio-temporally similar image patches and the latent high-quality image content, and designs a corresponding sampling strategy to select high-quality patches with similar structure across space and time. A neighborhood-combination method is used to obtain the diffraction-blurred image. Finally, a general globally consistent deconvolution method deblurs the diffraction-blurred image to obtain a sharp reconstruction.

Description

Method for reconstructing atmospheric-turbulence-degraded images
Technical field
The present invention relates to a degraded-image reconstruction method, and in particular to a method for reconstructing atmospheric-turbulence-degraded images.
Background technology
The document "X. Zhu and P. Milanfar, 'Removing Atmospheric Turbulence via Space-Invariant Deconvolution,' IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 157-170, Jan. 2013" proposes a multi-frame turbulence-image reconstruction and restoration framework based on B-spline registration, temporal kernel-regression denoising, and globally consistent blind deconvolution. The method uses the multi-frame mean image as the reference image and applies a B-spline-based non-rigid registration to remove the image warping caused by atmospheric turbulence. For the registered (aligned) image sequence, each image is divided into equally sized overlapping patches centered at each pixel, producing as many patch sequences as there are image pixels. Temporal kernel regression over each patch sequence then constructs the pixel at the corresponding position of a latent diffraction-blurred image (an image containing only globally consistent diffraction blur). Finally, globally consistent blind deconvolution removes the diffraction blur, yielding the final sharp reconstruction. The framework described in the document relies on B-spline registration and interpolation to correct distortion, but this cannot precisely remove the warping caused by atmospheric turbulence and introduces a degree of artifacts; the kernel-regression reconstruction stage depends on registration accuracy, and because it uses only single pixels at the same position as regression samples, it does not fully exploit the neighborhood information of the space-time domain and loses image detail.
Summary of the invention
To overcome the deficiency that images produced by existing turbulence-image reconstruction and restoration methods have poor sharpness, the invention provides a method for reconstructing atmospheric-turbulence-degraded images. The method fully accounts for the influence of registration error and of the artifacts introduced by registration interpolation, and exploits the redundant structural information present in the spatial and temporal dimensions for reconstructing the observed object. It models the statistical dependence between the pixels of spatio-temporally similar image patches and the latent high-quality image content, and designs a corresponding sampling strategy to select high-quality patches with similar structure across space and time. A neighborhood-combination method is used to obtain the diffraction-blurred image. Finally, a general globally consistent deconvolution method deblurs the diffraction-blurred image to obtain a sharp reconstruction.
The technical solution adopted by the present invention to solve the technical problem is a method for reconstructing atmospheric-turbulence-degraded images, characterized by comprising the following steps:
Step 1: use a B-spline-based non-rigid registration method to estimate the motion field of the pixels of each distorted image relative to the reference image, and register the image sequence. Take the mean image of the sequence as the registration reference image R, select uniformly and equally spaced control points on the image grid, and represent the global distortion field of the distorted image through B-spline interpolation of the control-point motion parameters. Estimate the distortion field of each frame's pixels using the bidirectional projection error; the optimization objective C(p) is:

C(p) = \sum_x |G(W(x, \vec{p})) - R(x)|^2 + \sum_x |R(W(x, \overleftarrow{p})) - G(x)|^2 + \gamma (\vec{p} + \overleftarrow{p})^T (\vec{p} + \overleftarrow{p})    (1)

where G is the distorted image, R is the reference image, \vec{p} and \overleftarrow{p} denote the control-point motion vectors from the degraded image to the reference image and in the reverse direction, respectively, and W(x, p) denotes the motion vector at any position x, obtained from the position x and the control-point motion vectors by B-spline interpolation. The constraint term \gamma (\vec{p} + \overleftarrow{p})^T (\vec{p} + \overleftarrow{p}) expresses that the forward and backward distortion motion fields are mutually inverse. The optimization target p is the combination of \vec{p} and \overleftarrow{p}; p is solved iteratively with the Gauss-Newton method, the part \vec{p} is then extracted to obtain the global motion field of G relative to R, and G is corrected by interpolation.
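As a rough illustration of this bidirectional objective, the following Python sketch evaluates C(p) given stand-in warp functions; the `warp` callback standing in for the B-spline warp W(x, p) and its signature are assumptions for illustration, not part of the patent:

```python
import numpy as np

def bidirectional_cost(G, R, warp, p_fwd, p_bwd, gamma):
    """Evaluate the registration objective C(p) of formula (1).

    G, R   : distorted and reference images (2-D arrays)
    warp   : stand-in for the B-spline warp W(x, p); takes an image and
             control-point motions and returns the warped image (assumed API)
    gamma  : weight of the inverse-consistency constraint term
    """
    forward_err = np.sum((warp(G, p_fwd) - R) ** 2)   # G warped toward R
    backward_err = np.sum((warp(R, p_bwd) - G) ** 2)  # R warped toward G
    s = p_fwd + p_bwd          # ~0 when the two fields are mutual inverses
    return forward_err + backward_err + gamma * float(s @ s)
```

In the patent the minimizer p = [\vec{p}; \overleftarrow{p}] is then found by Gauss-Newton iteration; only the cost evaluation is sketched here.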
Step 2: use the registered and aligned image sequence {R_k}, from which the turbulent distortion has been removed, to reconstruct a single-frame image Z containing only diffraction blur. Reconstruction estimates the value of each pixel of the diffraction-blurred image separately. To reconstruct the pixel at position p, intercept in each registered frame an L × L patch centered at p; this patch is used to build the temporal combination weights and to limit the range of the subsequent space-time nearest-neighbor sampling, giving a patch sequence.
(1) Use the image variance as the quantitative measure of sharpness and compute the variance {s_k} of each patch in the patch sequence. Compute the median \eta and the mean absolute deviation \sigma of {s_k}, and judge the patches satisfying |s_k - \eta| > \zeta\sigma to be outliers, where \zeta is a preset outlier-rejection coefficient. After rejecting the outliers, obtain the patch sequence {W_k[p]} centered at p, and choose the patch with the largest s_k as the reference patch W_ref[p].
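A minimal sketch of this rejection rule; the spread measure \sigma is computed here as the median absolute deviation, one plausible reading of the text's "mean absolute deviation":

```python
import numpy as np

def reject_outliers(patches, zeta=6.0):
    """Step 2(1) sketch: drop patches whose variance s_k deviates from the
    median by more than zeta * sigma, then return the survivors and the
    highest-variance survivor as the reference patch W_ref."""
    s = np.array([np.var(w) for w in patches])
    eta = np.median(s)
    sigma = np.median(np.abs(s - eta))   # MAD as robust spread (assumption)
    keep = np.abs(s - eta) <= zeta * sigma
    kept = [w for w, k in zip(patches, keep) if k]
    ref = kept[int(np.argmax(s[keep]))]  # sharpest surviving patch
    return kept, ref
```

This drops patches whose variance is inflated by registration artifacts while keeping the genuinely sharp ones.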
(2) From the L × L patch sequence from which outliers have been rejected, intercept smaller l × l (l < L) patches as the elements of the nearest-neighbor combination. Intercept the l × l image at the center of W_ref[p] as the reference V_ref[p]; then, within the L × L range of every frame of the sequence {W_k[p]}, search for the J l × l small patches most similar to V_ref[p]. Stretch each l × l patch found by the search into a column, obtaining a sequence of l² × 1 vectors {V_{k,j}[p]}. Similarity is measured by the weighted combination of the Euclidean distance between the small patches and the Euclidean distance between their gradients:

d_samp = \alpha_1 \| V_{k,j}[p] - V_{ref}[p] \| + \alpha_2 \| \nabla(V_{k,j}[p]) - \nabla(V_{ref}[p]) \|    (2)

where \nabla(\cdot) denotes the gradient image.
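The similarity measure of formula (2) can be sketched as follows; approximating the gradient image by the `np.gradient` magnitude is an assumption, since the patent does not specify the gradient operator:

```python
import numpy as np

def d_samp(V, V_ref, alpha1=0.5, alpha2=0.5):
    """Weighted intensity-plus-gradient Euclidean distance of formula (2).
    V and V_ref are l x l small patches; alpha1 = alpha2 = 0.5 follows the
    embodiment's setting."""
    gy, gx = np.gradient(V.astype(float))
    ry, rx = np.gradient(V_ref.astype(float))
    grad = np.hypot(gy, gx)          # gradient-magnitude image (assumption)
    grad_ref = np.hypot(ry, rx)
    return (alpha1 * np.linalg.norm(V - V_ref)
            + alpha2 * np.linalg.norm(grad - grad_ref))
```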
(3) Compute the proportion that the sampled vectors of the nearest-neighbor sequence {V_{k,j}[p]} take in the combination, i.e. the temporal connection of the samples across different frames. For the vectors {V_{k,1}[p], ..., V_{k,J}[p]} sampled from the k-th frame, the distance between the large patches W_k[p] and W_ref[p] is used to weigh the vectors of frame k in the combination: the smaller the distance, the better the imaging quality in the local region of W_k[p] and the larger the weight. Using a Gaussian kernel function as the measure of the distance between W_k[p] and W_ref[p], the combination weight of the vectors of frame k is:

\kappa_{k,j}[p] = \exp\{ -\| W_k[p] - W_{ref}[p] \|^2 / \sigma^2 \}    (3)
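Formula (3) amounts to a Gaussian kernel on the patch distance; a short sketch (the bandwidth \sigma is left unspecified by the patent, so the default here is arbitrary):

```python
import numpy as np

def temporal_weights(W_patches, W_ref, sigma=1.0):
    """Per-frame combination weights of formula (3): the closer a frame's
    L x L patch W_k[p] is to the reference W_ref[p], the larger its weight."""
    d2 = np.array([np.sum((W - W_ref) ** 2) for W in W_patches])
    return np.exp(-d2 / sigma ** 2)
```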
(4) To obtain the value of the corresponding pixel Z[p] of the latent high-quality diffraction-blurred image by linearly combining the elements of each vector of {V_{k,j}[p]}, set up the linear combination model:

Z[p] = c^T V[p]    (4)

where Z[p] is the latent sharp pixel corresponding to the center of V[p], V[p] is the l² × 1 column vector formed by the pixel values, and c is the spatial combination weight vector, which represents the combination relation between the latent diffraction-blurred image pixel value and the observed image-domain pixels. Each vector in the sampled {V_{k,j}[p]} is assumed to satisfy the linear combination model above.
In the first step, solve for c'. Denote by {v} the set of pixel-value elements of V_{k,j}[p], and by {v}_{/t+1} this set with the center element v^{t+1} removed. Correspondingly, {V'_{k,j}[p]} is the vector sequence with the center pixel removed; c' represents the relation between the center pixel and its neighborhood pixels, i.e. the role played in the combination of formula (4) by the weight vector c restricted to the elements other than the central element. c' is obtained by solving the following optimization objective:

\hat{c}' = \arg\min_{c'} \sum_{k=1}^{K} \sum_{j=1}^{J} \kappa_k[p] ( v_{k,j}^{t+1}[p] - c'^T V'_{k,j}[p] )^2    (5)

where \kappa_k[p] is the temporal combination weight obtained in step (3). Formula (5) has a closed-form solution:

\hat{c}' = ( V'_M \Lambda V'^T_M )^{-1} ( V'_M \Lambda v_c )    (6)
where V'_M is the (l² - 1) × (K × J) matrix into which {V'_{k,j}[p]} is rearranged:

V'_M = [ V'_{1,1}[p], ..., V'_{1,J}[p], ..., V'_{K,1}[p], ..., V'_{K,J}[p] ]

v_c is the (K × J) × 1 column vector into which the vector center pixels v^{t+1}_{k,j}[p] are rearranged; r_{k,j}[p] is the pixel value in the original registered frame to which the vector's center pixel corresponds:

v_c = [ v^{t+1}_{1,1}[p], ..., v^{t+1}_{1,J}[p], ..., v^{t+1}_{K,1}[p], ..., v^{t+1}_{K,J}[p] ] = [ r_{1,1}[p], ..., r_{1,J}[p], ..., r_{K,1}[p], ..., r_{K,J}[p] ]

\Lambda is the KJ × KJ diagonal matrix:

\Lambda = diag( \kappa_1[p], ..., \kappa_1[p], ..., \kappa_K[p], ..., \kappa_K[p] )

in which each \kappa_k[p] is repeated J times, once for each of the J vectors of frame k. The solution \hat{c}' is an (l² - 1) × 1 column vector; it is then normalized so that its elements are distributed between 0 and 1.
After normalizing \hat{c}', it is normalized again jointly with the manually set center weight c_{t+1}, yielding the final estimate of the spatial linear combination weight vector c.
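The closed-form solve of formula (6) and the two-stage normalization can be sketched as follows; the [0, 1] rescaling, the placement of the center weight at the end of the vector, and the sum-to-one reading of the "unified normalization" are assumptions where the text is ambiguous:

```python
import numpy as np

def solve_spatial_weights(V_M_prime, v_c, kappa, c_center=1.5):
    """Weighted least-squares solve of formula (6), then the normalizations
    of step 2(4).

    V_M_prime : (l^2 - 1) x (K*J) matrix of neighbour vectors V'_{k,j}[p]
    v_c       : (K*J,) center-pixel values v^{t+1}_{k,j}[p]
    kappa     : (K*J,) temporal weights (each kappa_k repeated J times)
    c_center  : manually set center weight c_{t+1} (1.5 in the embodiment)
    """
    Lam = np.diag(kappa)
    A = V_M_prime @ Lam @ V_M_prime.T
    c_prime = np.linalg.solve(A, V_M_prime @ Lam @ v_c)   # formula (6)
    # rescale the neighbour weights into [0, 1] (one reading of the text)
    rng = c_prime.max() - c_prime.min()
    c_prime = (c_prime - c_prime.min()) / rng if rng > 0 else c_prime
    c = np.append(c_prime, c_center)   # center weight appended (assumed layout)
    return c / c.sum()                 # unified normalization (assumed sum-to-one)
```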
(5) Estimate the diffraction-image pixel value by space-time combination.
Having obtained the temporal and spatial combination weights {\kappa_{k,j}[p]} and c corresponding to position p, combine the space-time nearest-neighbor vectors sampled in step (2) to obtain the estimate of the pixel value at position p of the diffraction image:

\hat{Z}[p] = tr( \hat{c}^T V_M \Lambda )    (7)

where tr(·) denotes the matrix trace, i.e. the sum of the diagonal values of the matrix.
For each pixel of the latent diffraction-blurred image, repeat steps (1)-(5) to reconstruct a high-quality diffraction-blurred image.
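Formula (7) can be read as a \kappa-weighted sum of the per-vector linear combinations \hat{c}^T V_{k,j}[p]; since the trace notation is ambiguous for non-square products, the following sketch (with the weights normalized to sum to one, a further assumption) is one interpretation rather than the patent's definitive formula:

```python
import numpy as np

def combine_estimate(c_hat, V_M, kappa):
    """One reading of formula (7): kappa-weighted sum of the linear
    combinations c^T V_{k,j}[p] over all sampled vectors.

    c_hat : (l^2,) spatial combination weights
    V_M   : (l^2, K*J) matrix whose columns are the sampled vectors
    kappa : (K*J,) temporal weights
    """
    contrib = c_hat @ V_M          # per-vector combinations, shape (K*J,)
    w = kappa / kappa.sum()        # normalized temporal weights (assumption)
    return float(contrib @ w)
```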
Step 3: the blur contained in the diffraction-blurred image is only the globally consistent diffraction blur. Its degradation model is:

Z = F \otimes h + \varepsilon    (8)

where F is the sharp image, h is the diffraction-blur point spread function, and \varepsilon is additive noise. To sharpen the result image further, globally consistent deconvolution is used to remove the diffraction blur. Its optimization objective is:

\langle \hat{F}, \hat{h} \rangle = \arg\min_{F,h} \| Z - h \otimes F \|^2 + \lambda_1 R_f(F) + \lambda_2 R_h(h)    (9)

where \| Z - h \otimes F \|^2 is the deconvolution error term derived from formula (8), R_f(F) and R_h(h) are the prior constraint terms on the sharp image F and the blur point spread function h, and \lambda_1 and \lambda_2 are the corresponding adjustable weighting parameters. Specifically,

R_f(F) = \| \rho(F_x) + \rho(F_y) \|    (10)

\rho(\kappa) = -\theta_1 |\kappa|  for \kappa \le l_t;  \rho(\kappa) = -(\theta_2 \kappa^2 + \theta_3)  for \kappa > l_t    (11)

where F_x and F_y are the gradients of the sharp image F in the horizontal and vertical directions, and l_t, \theta_1, \theta_2 and \theta_3 are preset parameters.
Solve formula (9) by alternating iterative optimization: in each iteration, fix F or h in turn to optimize h and F respectively. The solution \hat{F} is the final sharp image with the diffraction blur removed.
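The alternating scheme of step 3 can be sketched as a skeleton; the two sub-problem solvers are hypothetical callbacks, since the patent does not spell out how each regularized sub-problem is minimized:

```python
import numpy as np

def blind_deconv_alternate(Z, h0, solve_F, solve_h, n_iter=10):
    """Alternating optimization of formula (9): fix h and update F, then
    fix F and update h.

    solve_F(Z, h) : stand-in for argmin_F ||Z - h*F||^2 + lam1 * Rf(F)
    solve_h(Z, F) : stand-in for argmin_h ||Z - h*F||^2 + lam2 * Rh(h)
    """
    F, h = Z.copy(), h0            # initialize the sharp image with Z
    for _ in range(n_iter):
        F = solve_F(Z, h)          # F-step with h fixed
        h = solve_h(Z, F)          # h-step with F fixed
    return F, h
```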
The beneficial effects of the invention are as follows: the method fully accounts for the influence of registration error and of the artifacts introduced by registration interpolation, and exploits the redundant structural information present in the spatial and temporal dimensions for reconstructing the observed object. It models the statistical dependence between the pixels of spatio-temporally similar image patches and the latent high-quality image content, and designs a corresponding sampling strategy to select high-quality patches with similar structure across space and time. A neighborhood-combination method is used to obtain the diffraction-blurred image. Finally, a general globally consistent deconvolution method deblurs the diffraction-blurred image to obtain a sharp reconstruction.
The present invention is described in detail below in conjunction with an embodiment.
Embodiment
1. Multi-frame registration to remove image distortion.
Use a B-spline-based non-rigid registration method to estimate the motion field of the pixels of each distorted image relative to the reference image, and register the image sequence. Take the mean image of the sequence as the registration reference image R, select uniformly and equally spaced control points on the image grid, and represent the global distortion field of the distorted image through B-spline interpolation of the control-point motion parameters. Estimate the distortion field of each frame's pixels using the bidirectional projection error; the optimization objective C(p) is:

C(p) = \sum_x |G(W(x, \vec{p})) - R(x)|^2 + \sum_x |R(W(x, \overleftarrow{p})) - G(x)|^2 + \gamma (\vec{p} + \overleftarrow{p})^T (\vec{p} + \overleftarrow{p})    (1)

where G is the distorted image, R is the reference image, \vec{p} and \overleftarrow{p} denote the control-point motion vectors from the degraded image to the reference image and in the reverse direction, respectively, and W(x, p) denotes the motion vector at any position x, obtained from the position x and the control-point motion vectors by B-spline interpolation. The constraint term \gamma (\vec{p} + \overleftarrow{p})^T (\vec{p} + \overleftarrow{p}) expresses that the forward and backward distortion motion fields are mutually inverse. The optimization target p is the combination of \vec{p} and \overleftarrow{p}; p is solved iteratively with the Gauss-Newton method, the part \vec{p} is then extracted to obtain the global motion field of G relative to R, and G is corrected by interpolation. In the concrete implementation, the control-point spacing is set to 16 pixels in both the horizontal and vertical directions, and the constraint-term weight is set to \gamma = 5000.
2. Diffraction-blurred image reconstruction based on space-time nearest-neighbor combination.
Use the registered and aligned image sequence {R_k}, from which the turbulent distortion has been removed, to reconstruct a single-frame image Z containing only diffraction blur. Reconstruction estimates the value of each pixel of the diffraction-blurred image separately; the discussion below reconstructs the pixel at position p. First, intercept in each registered frame an L × L patch centered at p, used to build the temporal combination weights and to limit the range of the subsequent space-time nearest-neighbor sampling, obtaining a patch sequence; in this embodiment L = 9 is chosen. Before choosing a "lucky" high-quality reference frame, the outliers in the patch sequence are first rejected, in order to mitigate the artifacts caused by registration.
(1) Patch outlier rejection and reference-patch selection.
Use the image variance as the quantitative measure of sharpness and compute the variance {s_k} of each patch in the patch sequence. Compute the median \eta and the mean absolute deviation (MAD) \sigma of {s_k}, and judge the patches satisfying |s_k - \eta| > \zeta\sigma to be outliers, where \zeta is a preset outlier-rejection coefficient; in this embodiment \zeta = 6. In this way, patches whose variance is large but whose image quality is low, owing to the artifacts caused by registration, can be rejected. After rejecting the outliers, obtain the patch sequence {W_k[p]} centered at p, and choose the patch with the largest s_k as the reference patch W_ref[p].
(2) Space-time sampling of similar small patches.
In this step, smaller l × l (l < L) patches are intercepted from the outlier-free L × L patch sequence as the elements of the nearest-neighbor combination. "Nearest neighbor" in this embodiment carries a double meaning: a) small patches located in nearby regions in different frames, and b) patches that are close in feature space, i.e. patches with similar structure. On this basis, first intercept the l × l image at the center of W_ref[p] as the reference V_ref[p]; then, within the L × L range of every frame of the sequence {W_k[p]}, this step searches for the J l × l small patches most similar to V_ref[p]; in the implementation l = 3 is set. Stretch each l × l patch found by the search into a column, obtaining a sequence of l² × 1 vectors {V_{k,j}[p]}. Similarity is measured by the weighted combination of the Euclidean distance between the small patches and the Euclidean distance between their gradients:

d_samp = \alpha_1 \| V_{k,j}[p] - V_{ref}[p] \| + \alpha_2 \| \nabla(V_{k,j}[p]) - \nabla(V_{ref}[p]) \|    (2)

where \nabla(\cdot) denotes the gradient image; in this embodiment the distance-combination weights are set to \alpha_1 = \alpha_2 = 0.5. Intercepting the vector sequence in this way ensures that the combination can fully exploit the redundancy of the space-time information, while using a sharp patch as the reference preserves the sharpness and detail of the result.
(3) Computing the temporal combination weights.
First compute the proportion that the sampled vectors of the nearest-neighbor sequence {V_{k,j}[p]} take in the combination, i.e. the temporal connection of the samples across different frames. For the vectors {V_{k,1}[p], ..., V_{k,J}[p]} sampled from the k-th frame, the distance between the large patches W_k[p] and W_ref[p] is used to weigh the vectors of frame k in the combination: the smaller the distance, the better the imaging quality in the local region of W_k[p] and the larger the weight should be. Using a Gaussian kernel function as the measure of the distance between W_k[p] and W_ref[p], the combination weight of the vectors of frame k is:

\kappa_{k,j}[p] = \exp\{ -\| W_k[p] - W_{ref}[p] \|^2 / \sigma^2 \}    (3)
(4) Computing the spatial combination weights.
To obtain the value of the corresponding pixel Z[p] of the latent high-quality diffraction-blurred image by linearly combining the elements (pixel values) of each vector of {V_{k,j}[p]}, set up the linear combination model:

Z[p] = c^T V[p]    (4)

where Z[p] is the latent sharp pixel corresponding to the center of V[p], V[p] is the l² × 1 column vector formed by the pixel values, and c is the spatial combination weight (column) vector, which represents the combination relation between the latent diffraction-blurred image pixel value and the observed image-domain pixels. This step assumes that each vector in the sampled {V_{k,j}[p]} satisfies the linear combination model above. Since no data pairs of corresponding Z[p] and V[p] are known for this reconstruction problem, this embodiment proposes an approximate estimation method for the weights c based on Bayesian theory, with the prior given by a single manually set parameter.
For convenience of description, assume without loss of generality that each vector in Z[p] = c^T V[p] has 2t + 1 elements. The method solves for c in two parts: a) first solve for the weight vector c' excluding the central element c_{t+1}, and normalize it; b) then manually set the center weight c_{t+1}, and normalize c again.
In the first step, solve for c'. Denote by {v} the set of pixel-value elements of V_{k,j}[p], and by {v}_{/t+1} this set with the center element v^{t+1} removed. Correspondingly, {V'_{k,j}[p]} is the vector sequence with the center pixel removed; c' represents the relation between the center pixel and its neighborhood pixels, i.e. the role played in the combination of formula (4) by the weight vector c restricted to the elements other than the central element. c' is obtained by solving the optimization objective:

\hat{c}' = \arg\min_{c'} \sum_{k=1}^{K} \sum_{j=1}^{J} \kappa_k[p] ( v_{k,j}^{t+1}[p] - c'^T V'_{k,j}[p] )^2    (5)

where \kappa_k[p] is the temporal combination weight obtained in step (3). Formula (5) has a closed-form solution:

\hat{c}' = ( V'_M \Lambda V'^T_M )^{-1} ( V'_M \Lambda v_c )    (6)
where V'_M is the (l² - 1) × (K × J) matrix into which {V'_{k,j}[p]} is rearranged:

V'_M = [ V'_{1,1}[p], ..., V'_{1,J}[p], ..., V'_{K,1}[p], ..., V'_{K,J}[p] ]

v_c is the (K × J) × 1 column vector into which the vector center pixels v^{t+1}_{k,j}[p] are rearranged (r_{k,j}[p] is the pixel value in the original registered frame to which the vector's center pixel corresponds):

v_c = [ v^{t+1}_{1,1}[p], ..., v^{t+1}_{1,J}[p], ..., v^{t+1}_{K,1}[p], ..., v^{t+1}_{K,J}[p] ] = [ r_{1,1}[p], ..., r_{1,J}[p], ..., r_{K,1}[p], ..., r_{K,J}[p] ]

\Lambda is the KJ × KJ diagonal matrix:

\Lambda = diag( \kappa_1[p], ..., \kappa_1[p], ..., \kappa_K[p], ..., \kappa_K[p] )

in which each \kappa_k[p] is repeated J times, once for each of the J vectors of frame k. Therefore, the solution \hat{c}' is an (l² - 1) × 1 column vector; it is then normalized so that its elements are distributed between 0 and 1.
Since \hat{c}' is the spatial combination weight vector with the central element removed, obtaining the complete c requires setting c_{t+1} manually as prior information. Intuitively, c_{t+1} can be understood as the degree of closeness between the pixel Z[p] of the diffraction-blurred image and the observed registered images, reflecting the degree of degradation at p. Considering that the correlation between the center pixel of an observed vector and Z[p] is larger than that of the neighborhood pixels, and that \hat{c}' has already been normalized, for the general case the invention provides the prior for the estimate by setting c_{t+1} = 1.5. By normalizing \hat{c}' again jointly with c_{t+1}, the final estimate of the spatial linear combination weight vector c is obtained.
(5) Estimating the diffraction-image pixel value by space-time combination.
Having obtained the temporal and spatial combination weights {\kappa_{k,j}[p]} and c corresponding to position p, combine the space-time nearest-neighbor vectors sampled in step (2) to obtain the estimate of the pixel value at position p of the diffraction-blurred image:

\hat{Z}[p] = tr( \hat{c}^T V_M \Lambda )    (7)

where tr(·) denotes the matrix trace, i.e. the sum of the diagonal values of the matrix.
For each pixel of the latent diffraction-blurred image, repeat steps (1)-(5) to reconstruct a high-quality diffraction-blurred image.
3. Globally consistent blind deconvolution to remove diffraction blur.
The blur contained in the diffraction-blurred image is only the globally consistent diffraction blur. Its degradation model is:

Z = F \otimes h + \varepsilon    (8)

where F is the sharp image, h is the diffraction-blur point spread function, and \varepsilon is additive noise. To sharpen the result image further, globally consistent deconvolution is used to remove the diffraction blur. Its optimization objective is:

\langle \hat{F}, \hat{h} \rangle = \arg\min_{F,h} \| Z - h \otimes F \|^2 + \lambda_1 R_f(F) + \lambda_2 R_h(h)    (9)

where \| Z - h \otimes F \|^2 is the deconvolution error term derived from formula (8), R_f(F) and R_h(h) are the prior constraint terms on the sharp image F and the blur point spread function h, and \lambda_1 and \lambda_2 are the corresponding adjustable weighting parameters, set between 0.002-0.5 and 10-25 respectively. Specifically,

R_f(F) = \| \rho(F_x) + \rho(F_y) \|    (10)

\rho(\kappa) = -\theta_1 |\kappa|  for \kappa \le l_t;  \rho(\kappa) = -(\theta_2 \kappa^2 + \theta_3)  for \kappa > l_t    (11)

where F_x and F_y are the gradients of the sharp image F in the horizontal and vertical directions, and l_t, \theta_1, \theta_2 and \theta_3 are preset parameters, set to \theta_1 = 2.7, \theta_2 = 6.1 × 10^{-4}, \theta_3 = 5.0. l_t takes effect as follows: during implementation, the local standard deviation of the pixels in the image is computed to decide which branch of the prior function in formula (11) is used.
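With the embodiment's parameter values, the piecewise prior of formula (11) can be written directly; the threshold value l_t = 1.8 below is an illustrative placeholder (the patent only calls l_t a preset parameter), and applying the threshold to |\kappa| is an assumption:

```python
import numpy as np

def rho(k, theta1=2.7, theta2=6.1e-4, theta3=5.0, l_t=1.8):
    """Piecewise gradient prior of formula (11): a linear (sparsity-favoring)
    branch for small gradients, a quadratic branch above the threshold l_t.
    theta1/theta2/theta3 follow the embodiment; l_t = 1.8 is assumed."""
    k = np.asarray(k, dtype=float)
    return np.where(np.abs(k) <= l_t,          # threshold on |k| (assumption)
                    -theta1 * np.abs(k),
                    -(theta2 * k ** 2 + theta3))
```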
Solve formula (9) by alternating iterative optimization: in each iteration, fix F or h in turn to optimize h and F respectively. The solution \hat{F} is the final sharp image with the diffraction blur removed.

Claims (1)

1. A method for reconstructing atmospheric-turbulence-degraded images, characterized by comprising the following steps:
Step 1: use a B-spline-based non-rigid registration method to estimate the motion field of the pixels of each distorted image relative to the reference image, and register the image sequence; take the mean image of the sequence as the registration reference image R, select uniformly and equally spaced control points on the image grid, and represent the global distortion field of the distorted image through B-spline interpolation of the control-point motion parameters; estimate the distortion field of each frame's pixels using the bidirectional projection error, with optimization objective C(p):

C(p) = \sum_x |G(W(x, \vec{p})) - R(x)|^2 + \sum_x |R(W(x, \overleftarrow{p})) - G(x)|^2 + \gamma (\vec{p} + \overleftarrow{p})^T (\vec{p} + \overleftarrow{p})    (1)

where G is the distorted image, R is the reference image, \vec{p} and \overleftarrow{p} denote the control-point motion vectors from the degraded image to the reference image and in the reverse direction, respectively, and W(x, p) denotes the motion vector at any position x, obtained from the position x and the control-point motion vectors by B-spline interpolation; the constraint term \gamma (\vec{p} + \overleftarrow{p})^T (\vec{p} + \overleftarrow{p}) expresses that the forward and backward distortion motion fields are mutually inverse; the optimization target p is the combination of \vec{p} and \overleftarrow{p}; p is solved iteratively with the Gauss-Newton method, the part \vec{p} is then extracted to obtain the global motion field of G relative to R, and G is corrected by interpolation;
Step 2, use and eliminate the image sequence { R of turbulent distortion through registration alignment krebuild single frames and only comprise the image Z of diffraction blurring; The process of rebuilding is the value of each pixel estimating diffraction blurring image respectively; In every frame registering images, rebuild the pixel being positioned at p position, centered by p, intercept the large image block of L × L be used for territory Combining weights Time Created, and the scope that the time-space domain neighbour after limiting samples, obtain image block sequence;
(1) use the variance yields of image as the quantification of clear picture degree, ask for the variance { s of each image block in image block sequence k; Calculate { s kmedian η and mean absolute error σ, judge not meet | s k-η | the image block of > ζ σ is as exterior point, and wherein ζ is the exterior point rejecting coefficient of setting; Obtain rejecting the image block sequence { W centered by p after exterior point k[p] }, and choose s wherein kbe worth maximum image block as reference image block W ref[p];
(2) From each outlier-free L × L block, extract smaller l × l (l < L) blocks as the elements of the neighbour combination. Take the l × l image at the centre of W_ref[p] as the reference V_ref[p], and search within the L × L range of every frame of {W_k[p]} for the J l × l blocks most similar to V_ref[p]. Each retrieved l × l block is stretched column-wise into an l² × 1 vector, giving the vector sequence {V_{k,j}[p]}. Similarity is measured by a weighted combination of the Euclidean distance between the small blocks and the Euclidean distance between their gradients:

d_samp = α₁·||V_{k,j}[p] − V_ref[p]|| + α₂·||∇(V_{k,j}[p]) − ∇(V_ref[p])||    (2)

Wherein, ∇(·) denotes the gradient image;
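The similarity measure of formula (2) translates directly into code; the weights α₁ and α₂ are the patent's coefficients, with arbitrary example values assumed here:

```python
import numpy as np

def d_samp(v, v_ref, alpha1=1.0, alpha2=1.0):
    """Weighted combination of intensity and gradient Euclidean distances
    between two small l x l blocks, as in formula (2)."""
    gv = np.stack(np.gradient(v.astype(float)))       # gradient image of candidate
    gr = np.stack(np.gradient(v_ref.astype(float)))   # gradient image of reference
    return (alpha1 * np.linalg.norm(v - v_ref)
            + alpha2 * np.linalg.norm(gv - gr))
```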
(3) Compute the weight that each sampled neighbour vector in {V_{k,j}[p]} carries in the combination, i.e. the temporal combination relationship of samples drawn from different frames. For the vectors {V_{k,1}[p], …, V_{k,J}[p]} sampled from the k-th frame, the distance between the large blocks W_k[p] and W_ref[p] determines their weight: the smaller the distance, the higher the image quality in the local region of W_k[p] and the larger the weight. Using a Gaussian kernel as the distance measure between W_k[p] and W_ref[p], the weight of the vectors from frame k is:

κ_{k,j}[p] = exp{ −||W_k[p] − W_ref||² / σ² }    (3)
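Formula (3) is a standard Gaussian kernel on the distance between the large blocks; a minimal sketch (the bandwidth σ is an assumed example value):

```python
import numpy as np

def temporal_weight(W_k, W_ref, sigma=10.0):
    """Gaussian-kernel weight of formula (3): close to 1 when W_k[p] is
    similar to W_ref, decaying as the squared distance grows."""
    d2 = np.sum((W_k - W_ref) ** 2)
    return float(np.exp(-d2 / sigma ** 2))
```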
(4) The value Z[p] of the corresponding pixel in the latent high-quality diffraction-blurred image is obtained by linearly combining the centre elements of the vectors in {V_{k,j}[p]}, according to the linear combination model:

Z[p] = cᵀV[p]    (4)

Wherein, Z[p] is the latent sharp pixel corresponding to the centre of V[p], V[p] is the l² × 1 column vector of pixel values, and c is the spatial combination weight vector, which expresses the combination relationship between the latent diffraction-blurred pixel value and the observed image-domain pixels. Every sampled vector in {V_{k,j}[p]} is assumed to satisfy this linear combination model.
The first step is to solve for c′. Denote by {v} the set of pixel-value elements of V_{k,j}[p]; removing the centre element v^{t+1} from this set gives {v}_{/t+1}. Correspondingly, the vector sequence with the centre pixel removed is {V′_{k,j}[p]}, and c′ expresses the relationship between the centre pixel and its neighbourhood pixels, i.e. the contribution in formula (4) of all elements of the combination vector except the central one. c′ is obtained by solving the following optimization objective:

ĉ′ = argmin_{c′} Σ_{k=1..K} Σ_{j=1..J} κ_k[p]·( v_{k,j}^{t+1}[p] − c′ᵀV′_{k,j}[p] )²    (5)
Wherein, κ_k[p] is the temporal combination weight obtained in step (3). Formula (5) has the closed-form solution:

ĉ′ = ( V′_M Λ V′_Mᵀ )⁻¹ ( V′_M Λ v_c )    (6)
Wherein, V′_M is the (l² − 1) × (K × J) matrix obtained by rearranging {V′_{k,j}[p]}:

V′_M = [ V′_{1,1}[p], …, V′_{1,J}[p], …, V′_{K,1}[p], …, V′_{K,J}[p] ]
v_c is the (K × J) × 1 column vector formed by the vector centre pixels, where r_{k,j}[p] is the pixel value in the original registered frame corresponding to the centre pixel of each vector:

v_c = [ v_{1,1}^{t+1}[p], …, v_{1,J}^{t+1}[p], …, v_{K,1}^{t+1}[p], …, v_{K,J}^{t+1}[p] ] = [ r_{1,1}[p], …, r_{1,J}[p], …, r_{K,1}[p], …, r_{K,J}[p] ]
Λ is the KJ × KJ diagonal matrix:

Λ = diag( κ_1[p], …, κ_1[p], …, κ_K[p], …, κ_K[p] )
Each κ_k[p] appears in Λ J times, once for each of the J vectors of the corresponding frame. The solution ĉ′ is an (l² − 1) × 1 column vector; it is then normalized so that its elements lie between 0 and 1. After this normalization, ĉ′ is jointly normalized together with the centre weight c^{t+1} to obtain the final estimate of the spatial linear combination weight vector c.
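The closed-form solution (6) is an ordinary weighted least-squares solve and can be checked numerically. The sketch below omits the subsequent [0, 1] normalization and the joint normalization with the centre weight, whose exact schemes are not reproduced here:

```python
import numpy as np

def solve_spatial_weights(V_M_prime, v_c, kappa):
    """Closed-form solution of formula (6):
    c' = (V'_M Λ V'_M^T)^{-1} (V'_M Λ v_c).

    V_M_prime : (l*l - 1, K*J) matrix of neighbour vectors, centre pixel removed
    v_c       : (K*J,) vector of centre-pixel values
    kappa     : (K*J,) temporal weights, each frame's kappa repeated J times
    """
    Lam = np.diag(kappa)
    A = V_M_prime @ Lam @ V_M_prime.T       # V' Λ V'^T
    b = V_M_prime @ Lam @ v_c               # V' Λ v_c
    return np.linalg.solve(A, b)
```

When the samples exactly follow the linear model of formula (4), the solve recovers the generating weights.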
(5) Having obtained the temporal and spatial combination weights {κ_{k,j}[p]} and ĉ corresponding to position p, the estimate of the pixel value at position p in the diffraction-blurred image is obtained by combining the spatio-temporal neighbours sampled in step (2):

Ẑ[p] = tr( ĉᵀ V_M Λ )    (7)
Wherein, tr(·) denotes the trace of a matrix, i.e. the sum of its diagonal elements. Repeating steps (1)-(5) for every pixel of the latent diffraction-blurred image reconstructs the high-quality diffraction-blurred image.
Step 3: the diffraction-blurred image contains only globally uniform diffraction blur, whose degradation model is:

Z = F ⊗ h + ε    (8)
Wherein, F is the sharp image, h is the diffraction-blur point spread function, and ε is additive noise. To sharpen the result further, globally uniform blind deconvolution is used to remove the diffraction blur; its optimization objective is:

⟨F̂, ĥ⟩ = argmin_{F,h} ||Z − h ⊗ F||² + λ₁·R_f(F) + λ₂·R_h(h)    (9)
Wherein, ||Z − h ⊗ F||² is the deconvolution error term derived from formula (8), R_f(F) and R_h(h) are the prior constraint terms on the sharp image F and on the blur point spread function h respectively, and λ₁ and λ₂ are the corresponding adjustable weighting parameters. Specifically,

R_f(F) = ||ρ(F_x) + ρ(F_y)||    (10)
ρ(κ) = −θ₁|κ|  for κ ≤ l_t;  ρ(κ) = −(θ₂κ² + θ₃)  for κ > l_t    (11)

Wherein, F_x and F_y are the gradients of the sharp image F in the horizontal and vertical directions, and l_t, θ₁, θ₂ and θ₃ are preset parameters.
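The priors of formulas (10)-(11) can be evaluated as below. The parameter values of l_t, θ₁, θ₂ and θ₃ are illustrative assumptions (chosen so the two branches meet at the threshold), not the patent's presets, and the threshold is interpreted on |κ|, consistent with the absolute value in the first branch:

```python
import numpy as np

def rho(k, l_t=1.0, theta1=2.0, theta2=0.5, theta3=1.5):
    """Piecewise gradient prior of formula (11)."""
    k = np.asarray(k, dtype=float)
    return np.where(np.abs(k) <= l_t,
                    -theta1 * np.abs(k),             # linear branch
                    -(theta2 * k ** 2 + theta3))     # quadratic branch

def R_f(F):
    """Image prior of formula (10): norm of rho applied to the horizontal
    and vertical gradients of the latent sharp image F."""
    Fy, Fx = np.gradient(np.asarray(F, dtype=float))
    return float(np.linalg.norm(rho(Fx) + rho(Fy)))
```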
Formula (9) is solved by alternating iterative optimization: in each iteration, F is fixed to optimize h, and then h is fixed to optimize F. The resulting F̂ is the final sharp image with the diffraction blur removed.
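The alternating scheme can be sketched as below. This is a strongly simplified illustration: plain L2 (Tikhonov) regularizers stand in for R_f and R_h, and each sub-problem is solved with a Wiener-style Fourier-domain division; the patent's actual priors and update rules are not reproduced:

```python
import numpy as np

def blind_deconv(Z, psf_size=7, iters=10, lam=1e-2):
    """Alternating minimisation sketch for formula (9): fix h to update F,
    then fix F to update h, both via regularised Fourier-domain solves."""
    H, W = Z.shape
    F = Z.copy()                                  # initialise latent image
    h = np.zeros((psf_size, psf_size))
    h[psf_size // 2, psf_size // 2] = 1.0         # initialise kernel as delta
    Zf = np.fft.fft2(Z)
    for _ in range(iters):
        # update F with h fixed (Wiener-style solve)
        Hf = np.fft.fft2(h, s=(H, W))
        F = np.real(np.fft.ifft2(np.conj(Hf) * Zf / (np.abs(Hf) ** 2 + lam)))
        # update h with F fixed (same least-squares form, roles swapped)
        Ff = np.fft.fft2(F)
        h_full = np.real(np.fft.ifft2(np.conj(Ff) * Zf / (np.abs(Ff) ** 2 + lam)))
        h = np.clip(h_full[:psf_size, :psf_size], 0, None)  # crop support, h >= 0
        s = h.sum()
        if s > 0:
            h /= s                                # PSF normalised to sum 1
    return F, h
```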
CN201310219440.1A 2013-06-04 2013-06-04 Atmospheric turbulence degraded image method for reconstructing Active CN103310486B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310219440.1A CN103310486B (en) 2013-06-04 2013-06-04 Atmospheric turbulence degraded image method for reconstructing

Publications (2)

Publication Number Publication Date
CN103310486A CN103310486A (en) 2013-09-18
CN103310486B true CN103310486B (en) 2016-04-06

Family

ID=49135661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310219440.1A Active CN103310486B (en) 2013-06-04 2013-06-04 Atmospheric turbulence degraded image method for reconstructing

Country Status (1)

Country Link
CN (1) CN103310486B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105551006B (en) * 2015-12-01 2018-06-05 深圳大学 A kind of restorative procedure and system of depth image missing pixel
CN106355195B (en) * 2016-08-22 2021-04-23 中国科学院深圳先进技术研究院 System and method for measuring image definition value
CN106504209A (en) * 2016-10-27 2017-03-15 西安电子科技大学 The iteration of passive millimeter wave radar image weights blind deconvolution method again
CN110874827B (en) * 2020-01-19 2020-06-30 长沙超创电子科技有限公司 Turbulent image restoration method and device, terminal equipment and computer readable medium
CN112213339B (en) * 2020-09-30 2021-12-10 上海交通大学 Method, system and medium for correcting center and Euler angle of particle diffraction image pattern
CN115358953B (en) * 2022-10-21 2023-01-31 长沙超创电子科技有限公司 Turbulence removing method based on image registration and dynamic target fusion
CN117078538B (en) * 2023-07-19 2024-02-13 华中科技大学 Correction method of remote atmospheric turbulence image based on pixel motion statistics

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101504765A (en) * 2009-03-20 2009-08-12 东华大学 Motion blur image sequence restoration method employing gradient amalgamation technology
CN102354395A (en) * 2011-09-22 2012-02-15 西北工业大学 Sparse representation-based blind restoration method of broad image
US8350954B2 (en) * 2009-07-13 2013-01-08 Canon Kabushiki Kaisha Image processing apparatus and image processing method with deconvolution processing for image blur correction

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8451338B2 (en) * 2008-03-28 2013-05-28 Massachusetts Institute Of Technology Method and apparatus for motion invariant imaging


Non-Patent Citations (1)

Title
Research on restoration of atmospheric-turbulence-degraded images; Li Qingfei et al.; Journal of Hefei University of Technology; 2011-01-28; Vol. 34, No. 1; pp. 80-82, 127 *


Similar Documents

Publication Publication Date Title
CN103310486B (en) Atmospheric turbulence degraded image method for reconstructing
CN107025632B (en) Image super-resolution reconstruction method and system
CN106408524A (en) Two-dimensional image-assisted depth image enhancement method
CN106952228A (en) The super resolution ratio reconstruction method of single image based on the non local self-similarity of image
CN106227015A (en) A kind of hologram image super-resolution reconstruction method and system based on compressive sensing theory
CN109887021B (en) Cross-scale-based random walk stereo matching method
CN109472830A (en) A kind of monocular visual positioning method based on unsupervised learning
CN106023230B (en) A kind of dense matching method of suitable deformation pattern
CN105913392A (en) Degraded image overall quality improving method in complex environment
CN111739082A (en) Stereo vision unsupervised depth estimation method based on convolutional neural network
Zheng et al. Multi-contrast brain magnetic resonance image super-resolution using the local weight similarity
CN106600557A (en) PSF estimation method based on hybrid Gaussian model and sparse constraints
CN109801343A (en) Based on annular artifact bearing calibration, the CT control system for rebuilding front and back image
CN104700411A (en) Sparse reconstruction-based dual-time phase remote-sensing image change detecting method
Xiao et al. Deep blind super-resolution for satellite video
CN107292819A (en) A kind of infrared image super resolution ratio reconstruction method protected based on edge details
CN103971354A (en) Method for reconstructing low-resolution infrared image into high-resolution infrared image
CN105513033A (en) Super-resolution reconstruction method based on non-local simultaneous sparse representation
Majee et al. 4D X-ray CT reconstruction using multi-slice fusion
Yu et al. Remote sensing image fusion based on sparse representation
CN109658361A (en) A kind of moving scene super resolution ratio reconstruction method for taking motion estimation error into account
Lin et al. Polarimetric SAR image super-resolution VIA deep convolutional neural network
CN104504672A (en) NormLV feature based low-rank sparse neighborhood-embedding super-resolution method
CN104732480B (en) A kind of remote sensing images ultra-resolution method based on non local regularization model
CN103679662A (en) Super-resolution image restoration method based on category prior nonnegative sparse coding dictionary pair

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant