CN103617607B - Single-image super-resolution reconstruction method - Google Patents

Single-image super-resolution reconstruction method

Info

Publication number
CN103617607B
CN103617607B (application CN201310629075.1A)
Authority
CN
China
Prior art keywords
resolution
image
lambda
sparse
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310629075.1A
Other languages
Chinese (zh)
Other versions
CN103617607A (en)
Inventor
杨爱萍
钟腾飞
梁斌
田玉针
刘华平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201310629075.1A priority Critical patent/CN103617607B/en
Publication of CN103617607A publication Critical patent/CN103617607A/en
Application granted granted Critical
Publication of CN103617607B publication Critical patent/CN103617607B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a single-image super-resolution reconstruction method based on non-local similarity and classification-based semi-coupled dictionary learning, comprising a training stage and a reconstruction stage. Taking the semi-coupled dictionary learning algorithm as its framework, the method introduces a sparse-domain classification of training image patches based on mapping error, and adopts a heuristic strategy that alternates sparse-domain classification with semi-coupled dictionary learning. A non-local similarity constraint term in the sparse domain is introduced to exploit the spatial structure of the training patches and reconstruct more high-frequency detail, and the sparse representation algorithm is improved under the non-local constraint so that it fits the overall framework of the semi-coupled dictionary learning algorithm. In addition, an error compensation mechanism is introduced in the reconstruction stage to further improve super-resolution reconstruction quality. Compared with the prior art, the present invention achieves good results simultaneously in reconstructing texture detail and in suppressing artificial edges and jaggies, and its subjective visual quality is the best among the prior art.

Description

Single-image super-resolution reconstruction method
Technical field
The present invention relates to the field of computer image processing technology, and in particular to applications such as medical imaging diagnosis, remote sensing imaging and video surveillance.
Background art
Single-image super-resolution reconstruction is one of the research hotspots of digital image processing, with important application value in fields such as medical imaging diagnosis, satellite remote sensing imaging and video surveillance. At present, learning-based super-resolution algorithms have become one of the international research hotspots in the super-resolution field. Such methods jointly learn redundant high- and low-resolution dictionaries from a set of high- and low-resolution image patch pairs, so that each patch in the training set admits a sparse representation under the corresponding dictionary. During super-resolution reconstruction, the sparse representation coefficients of a low-resolution patch under the low-resolution dictionary are computed first; multiplying the high-resolution dictionary by these coefficients then yields an estimate of the high-resolution patch.
Wang et al. proposed the semi-coupled dictionary learning algorithm (Semi-Coupled Dictionary Learning, SCDL), whose basic idea is shown in Fig. 1. Let X = [x_1, x_2, ..., x_n] and Y = [y_1, y_2, ..., y_n] denote the high- and low-resolution image patch data matrices, where {x_i, y_i} are corresponding high-/low-resolution patch pairs, D_x and D_y denote the high- and low-resolution dictionaries, and S_x and S_y denote the sparse representation coefficient matrices of the patch data matrices under the corresponding dictionaries. Under the semi-coupled dictionary learning framework, the positions and magnitudes of the nonzero elements in the sparse representation vectors of a high-resolution patch and its low-resolution counterpart are not assumed to be equal, but are related by a mapping matrix:
S_x = W S_y    (1)
On the basis of the l1-norm regularized sparse representation model, a mapping error penalty term is added and the high- and low-resolution sparse representation problems are solved jointly, giving the semi-coupled dictionary learning model:
min_{D_x, D_y, W, S_x, S_y} ||X - D_x S_x||_F^2 + ||Y - D_y S_y||_F^2 + γ||S_x - W S_y||_F^2 + λ_x||S_x||_1 + λ_y||S_y||_1 + λ_w||W||_F^2
s.t. ||d_{x,i}||_2 ≤ 1, ||d_{y,i}||_2 ≤ 1, i = 1, 2, ..., k    (2)
In the SCDL algorithm of Wang et al., formula (2) is converted into three subproblems that are solved alternately, where the l1-norm regularized sparse representation subproblem uses the SPARS toolbox based on the LARS algorithm. This toolbox is suited to small- and medium-scale sparse representation problems; when the training data are large, its training accuracy is not high, and custom regularization terms cannot be added.
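The l1-regularized sparse representation subproblem referred to above, min_s ||x - Ds||_2^2 + λ||s||_1, can be illustrated with a minimal iterative shrinkage-thresholding (ISTA) sketch; this is a stand-in for illustration only, not the LARS-based solver of the toolbox, and all names here are invented.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=500):
    """Solve min_s ||x - D s||_2^2 + lam * ||s||_1 by ISTA.

    D: (m, k) dictionary with unit-norm atoms; x: (m,) signal.
    """
    # Step size from the Lipschitz constant of the data-term gradient.
    L = 2.0 * np.linalg.norm(D, 2) ** 2
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ s - x)          # gradient of ||x - Ds||^2
        z = s - grad / L                        # gradient step
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return s

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0)                 # enforce ||d_i||_2 = 1
s_true = np.zeros(40); s_true[[3, 17]] = [1.0, -0.5]
x = D @ s_true                                 # signal with a 2-sparse code
s = ista_sparse_code(D, x, lam=0.05)
print(np.count_nonzero(np.abs(s) > 1e-6), "nonzero coefficients")
```

Starting from s = 0 with step 1/L, each ISTA iteration is guaranteed not to increase the composite objective.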
Summary of the invention
To overcome the problems and shortcomings of the prior art described above, the invention provides a single-image super-resolution reconstruction method: a super-resolution reconstruction algorithm based on non-local similarity and classification-based semi-coupled dictionary learning. It is divided overall into a training stage and a reconstruction stage; the training stage is carried out offline, and the reconstruction stage performs super-resolution reconstruction using the multi-class semi-coupled dictionaries and sparse-domain mapping matrices obtained by offline learning. With the semi-coupled dictionary learning algorithm as the framework, a sparse-domain classification of training image patches based on mapping error is introduced, together with a heuristic strategy that alternates sparse-domain classification with semi-coupled dictionary learning. Besides the l1-norm constraint term of the original sparse representation problem, a non-local similarity constraint term in the sparse domain is introduced to exploit the spatial structure of the training patches in the sparse domain and reconstruct more high-frequency detail, and the sparse representation algorithm is improved under the non-local constraint so that it fits the overall framework of the semi-coupled dictionary learning algorithm. In addition, an error compensation mechanism is introduced in the reconstruction stage to further improve super-resolution reconstruction quality.
The present invention proposes a single-image super-resolution reconstruction method based on non-local similarity and classification-based semi-coupled dictionary learning, comprising a training stage and a reconstruction stage. The method comprises the following steps:
Step 1: each high-resolution image y_h^r in the training image set is filtered by a blur operator H and a down-sampling operator S, and Gaussian noise v is added, giving the corresponding low-resolution image z_l^r; this image is then enlarged to the original size by a bicubic interpolation operator A, giving the low-resolution image y_l^r used for feature extraction. The training data set is obtained as follows: N high-resolution image patches are randomly sampled from y_h^r; four filtered images of y_l^r are obtained using the horizontal and vertical first- and second-order gradient operators as filters, and low-resolution feature patches are sampled at the corresponding positions of said filtered images. After removing smooth patches whose variance is below a certain threshold, the training signals {X, Y} are initially classified into K classes; the classification method is K-means; the threshold here is 5%-10% of the variance over all image patches.
Step 2: let x_i and x_j be two data vectors whose sparse representation vectors under a semi-coupled dictionary D are s_i and s_j respectively. The "non-local similarity constraint in the sparse domain" mentioned above estimates s_i by the weighted average of the sparse representation coefficients of the patches similar to x_i, i.e. the estimation error
Σ_{i=1}^N ||s_i - Σ_{j∈Ω} b_{ji} s_j||^2 = ||S - SB||_F^2 = Tr(S(I - B)(I - B)^T S^T) = Tr(S M S^T)
where Ω is the index set of the P data vectors most similar to x_i, b_{ji} is a weight representing the degree of similarity between x_j and x_i, b_{ji} = (1/c_i)·exp(-||x_i - x_j||^2/h), h is a tunable parameter and c_i is a normalization factor. Letting the element in row j, column i of the weight matrix B be b_{ji}, i.e. B(j, i) = b_{ji}, where Tr(·) denotes the matrix trace, the matrix M is obtained from the data matrix X via the weight matrix B. This realizes the non-local similarity processing in the sparse domain.
Step 3: the semi-coupled dictionary learning model based on the non-local similarity constraint,
min_{D_x, D_y, W, S_x, S_y} ||X - D_x S_x||_F^2 + ||Y - D_y S_y||_F^2 + λ_w||W||_F^2 + γ||S_x - W S_y||_F^2 + λ_x||S_x||_1 + λ_y||S_y||_1 + α_x Tr(S_x M_x S_x^T) + α_y Tr(S_y M_y S_y^T)
s.t. ||d_{x,i}||_2 ≤ 1, ||d_{y,i}||_2 ≤ 1, i = 1, 2, ..., k    (3)
is decomposed into three subproblems, namely "dictionary update", "mapping matrix update" and "two-task sparse decomposition".
The dictionary update subproblem is expressed as:
min_{D_x, D_y} ||X - D_x S_x||_F^2 + ||Y - D_y S_y||_F^2  s.t. ||d_{x,i}||_2 ≤ 1, ||d_{y,i}||_2 ≤ 1, i = 1, 2, ..., k
The mapping matrix update subproblem is expressed as:
min_W ||S_x - W S_y||_F^2 + (λ_w/γ)||W||_F^2
The mathematical model of the two-task sparse decomposition subproblem is expressed as:
min_{S_x} ||X - D_x S_x||_F^2 + γ||S_y - W_{x→y} S_x||_F^2 + λ_x||S_x||_1 + α_x Tr(S_x M_x S_x^T)
min_{S_y} ||Y - D_y S_y||_F^2 + γ||S_x - W_{y→x} S_y||_F^2 + λ_y||S_y||_1 + α_y Tr(S_y M_y S_y^T)
Stacking the matrices above yields the two-task sparse representation problem under the non-local similarity constraint:
min_{S_x} ||X̃ - D̃_x S_x||_F^2 + λ_x||S_x||_1 + α_x Tr(S_x M_x S_x^T)
min_{S_y} ||Ỹ - D̃_y S_y||_F^2 + λ_y||S_y||_1 + α_y Tr(S_y M_y S_y^T)
Step 4: the sparse-domain reclassification optimization problem based on mapping error is expressed as:
arg min_c Σ_{i=1}^N ||s_{x,i} - W^(c(i)) s_{y,i}||
where c is the class label vector of the training signals, with element values from 1 to K.
After the above sparse-domain reclassification, the termination condition of the training stage is checked: whether the total mapping error Σ_{i=1}^N ||s_{x,i} - W^(c(i)) s_{y,i}|| is below a certain threshold δ_1. If so, the high- and low-resolution dictionaries, the sparse-domain mapping matrices and the sparse representation coefficient matrices of each class are output; otherwise, semi-coupled dictionary learning based on the non-local similarity constraint continues on the newly divided classes until the termination condition is met. That is, in the training stage, a heuristic strategy alternating sparse-domain classification with semi-coupled dictionary learning is adopted.
Step 5: in the reconstruction stage, the same alternating heuristic strategy is used; sparse representation with non-local similarity and class-wise semi-coupled dictionaries is combined with an error compensation mechanism to reconstruct the image. This step further comprises the following processes:
(1) Input a low-resolution image z_l and initialize the high-resolution image as the bicubic interpolation of z_l; sample overlapping patches at corresponding image positions and extract features, which after vectorization form the data matrix; classify by solving:
arg min_c Σ_{i=1}^M ||ŝ_{x,i} - W^(c(i)) s_{y,i}||
(2) Within each class, solve:
min_{S_x, S_y} ||X̂ - D_x S_x||_F^2 + ||Y - D_y S_y||_F^2 + λ_w||W||_F^2 + γ||Ŝ_x - W S_y||_F^2 + λ_x||S_x||_1 + λ_y||S_y||_1 + α_x Tr(Ŝ_x M̂_x Ŝ_x^T) + α_y Tr(S_y M_y S_y^T)
to obtain the sparse representation coefficient matrices of each class.
(3) Check whether the total mapping error Σ_{i=1}^M ||ŝ_{x,i} - W^(c(i)) s_{y,i}|| is below a certain threshold δ_2. If not, perform sparse-domain reclassification according to the formula of process (1) and return to process (2); if so, reconstruct each patch of the high-resolution image as:
x̂_i = D_x^(c(i)) · ŝ_{x,i}
After averaging the pixel values of the overlapping parts of adjacent patches, the estimate ŷ_h of the high-resolution image is obtained.
(4) Degrade ŷ_h to a low-resolution image ẑ_l, compute the residual image e between z_l and ẑ_l, and take the residual image e as the input image of process (1); repeating processes (1)-(3) gives the super-resolution image ŷ_{h,e} of the residual image e, and the final high-resolution image is then:
y_h = ŷ_h + ŷ_{h,e}
This realizes the error compensation, and the reconstruction algorithm ends.
Compared with the prior art, the method of the present invention achieves good results simultaneously in reconstructing texture detail and in suppressing artificial edges and jaggies, and its subjective visual quality is the best among the prior art.
Brief description of the drawings
Fig. 1 shows semi-coupled dictionary learning;
Fig. 2 shows the alternating heuristic learning framework;
Fig. 3 shows the super-resolution reconstruction results of each algorithm.
Detailed description of the invention
The detailed embodiments of the present invention are further described below with reference to the accompanying drawings and embodiments.
To address the limited training accuracy of semi-coupled dictionary learning super-resolution methods, an alternating heuristic learning framework for the training stage is proposed, as shown in Fig. 2. Since natural images contain texture patches and edge patches of various complexity, a single dictionary can hardly yield accurate sparse representations, so the training image patches are first given an initial classification. Within each class, semi-coupled dictionary learning is carried out to obtain the high- and low-resolution dictionaries and the sparse-domain mapping matrix; at the same time, the non-local similarity of image patches in the sparse domain is used to exploit the spatial structure of the training patches and reconstruct more high-frequency detail. Then, according to the mapping errors of the high- and low-resolution sparse representation coefficients under the mapping matrices of all classes, the patches are reclassified and semi-coupled dictionary learning is carried out again; this loop continues until the mapping error falls below a threshold and the algorithm terminates. Compared with supervised classification under a specific hand-crafted rule, the alternating heuristic learning framework can adaptively adjust the class assignment of the training signal set during learning; under the objective of minimizing the sparse-domain mapping error, the training signals of each class become more concentrated, so the dictionary learned for each class is more compact and the sparse representation accuracy is higher.
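The alternating loop described above can be sketched on synthetic data: coefficient pairs are assigned to the class whose mapping matrix gives the smallest mapping error, and each class's mapping matrix is then refit by ridge regression. This toy numpy sketch omits the per-class dictionary learning step, and all names and data are illustrative; it only demonstrates that the alternation drives the total mapping error down.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d, N = 3, 8, 300
# Synthetic coefficient pairs generated by K ground-truth linear maps.
W_true = [rng.standard_normal((d, d)) for _ in range(K)]
Sy = rng.standard_normal((d, N))
labels = rng.integers(0, K, N)
Sx = np.stack([W_true[labels[i]] @ Sy[:, i] for i in range(N)], axis=1)

def fit_W(Sx_c, Sy_c, lam=1e-3):
    # Ridge-regression update W = Sx Sy^T (Sy Sy^T + lam I)^{-1}.
    return Sx_c @ Sy_c.T @ np.linalg.inv(Sy_c @ Sy_c.T + lam * np.eye(d))

c = rng.integers(0, K, N)            # random initial class labels
history = []
for _ in range(10):
    Ws = [fit_W(Sx[:, c == k], Sy[:, c == k]) for k in range(K)]
    # Reclassify each pair to the class with the smallest mapping error.
    errs = np.stack([np.linalg.norm(Sx - Wk @ Sy, axis=0) for Wk in Ws])
    c = errs.argmin(axis=0)
    history.append(errs[c, np.arange(N)].sum())
print("total mapping error per iteration:", [round(e, 3) for e in history])
```

In the full algorithm each class would also relearn its dictionary pair between reclassifications; here only the mapping matrices are refit.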
Introducing the non-local similarity constraint terms into the semi-coupled dictionary learning of formula (2) gives the semi-coupled dictionary learning model based on the non-local similarity constraint:
min_{D_x, D_y, W, S_x, S_y} ||X - D_x S_x||_F^2 + ||Y - D_y S_y||_F^2 + λ_w||W||_F^2 + γ||S_x - W S_y||_F^2 + λ_x||S_x||_1 + λ_y||S_y||_1 + α_x Tr(S_x M_x S_x^T) + α_y Tr(S_y M_y S_y^T)
s.t. ||d_{x,i}||_2 ≤ 1, ||d_{y,i}||_2 ≤ 1, i = 1, 2, ..., k    (3)
I. Training stage
1) Training data acquisition and initial classification
Each high-resolution image y_h^r in the training image set is filtered by a blur operator H and a down-sampling operator S, and Gaussian noise v is added, giving the corresponding low-resolution image z_l^r; this image is then enlarged to the original size by the bicubic interpolation operator A, giving the low-resolution image y_l^r for feature extraction. This process can be expressed as:
z_l^r = S H y_h^r    (4)
y_l^r = A z_l^r    (5)
N high-resolution image patches are randomly sampled from y_h^r; low-resolution feature patches are sampled at the corresponding positions of the four filtered images of y_l^r (the filters being the horizontal and vertical first- and second-order gradient operators). After removing smooth patches whose variance is below a certain threshold (5%-10% of the variance over all image patches), the training signals {X, Y} are initially classified into K classes; the classification method is K-means.
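A rough sketch of this degradation and feature-extraction pipeline, under simplifying assumptions (nearest-neighbour upsampling stands in for the bicubic operator A, the noise term is omitted, and all names are illustrative):

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.6):
    """Separable Gaussian blur kernel, normalised to sum to 1."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def degrade(y_h, s=3, size=7, sigma=1.6):
    """z_l = S H y_h: Gaussian blur (H) followed by s-fold decimation (S)."""
    k = gaussian_kernel(size, sigma)
    p = size // 2
    yp = np.pad(y_h, p, mode='edge')
    blurred = np.empty_like(y_h, dtype=float)
    for i in range(y_h.shape[0]):
        for j in range(y_h.shape[1]):
            blurred[i, j] = (yp[i:i + size, j:j + size] * k).sum()
    return blurred[::s, ::s]

def upsample(z_l, s=3):
    # Nearest-neighbour enlargement as a stand-in for the bicubic operator A.
    return np.kron(z_l, np.ones((s, s)))

def grad_features(y_l):
    """First-order horizontal and vertical gradient feature maps of y_l."""
    fh = np.zeros_like(y_l); fv = np.zeros_like(y_l)
    fh[:, 1:-1] = y_l[:, 2:] - y_l[:, :-2]   # filter [-1, 0, 1]
    fv[1:-1, :] = y_l[2:, :] - y_l[:-2, :]   # its transpose
    return fh, fv

y_h = np.random.default_rng(2).random((30, 30))
z_l = degrade(y_h)          # corresponds to (4): z_l = S H y_h
y_l = upsample(z_l)         # corresponds to (5): y_l = A z_l
fh, fv = grad_features(y_l)
print(z_l.shape, y_l.shape)
```

Feature patches would then be sampled at corresponding positions of fh, fv (and their second-order analogues) and vectorized.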
2) Non-local similarity in the sparse domain
The distribution of data in space is far from uniform; for example, small patches in natural images exhibit certain distinctive structures and patterns, and recent studies show that the structural information of the data space helps improve sparse representation performance. Because of the instability of sparse decomposition, similar data may have very different sparse representation coefficients, leading to large reconstruction errors; exploiting the repetitive structures in natural images can therefore effectively improve the stability of sparse representation. Let x_i and x_j be two data vectors whose sparse representation vectors under a dictionary D are s_i and s_j respectively. If x_j is the k-th most similar data vector to x_i in the original data domain, then in the sparse domain s_j should also be the k-th most similar sparse representation vector to s_i. The "non-local similarity constraint in the sparse domain" above estimates s_i by the weighted average of the sparse representation coefficients of the patches similar to x_i, i.e. the estimation error
Σ_{i=1}^N ||s_i - Σ_{j∈Ω} b_{ji} s_j||^2    (6)
should be as small as possible, where Ω is the index set of the P data vectors most similar to x_i and b_{ji} is a weight representing the degree of similarity between x_j and x_i:
b_{ji} = (1/c_i) · exp(-||x_i - x_j||^2 / h)    (7)
where h is a tunable parameter and c_i is a normalization factor. Letting the element in row j, column i of the weight matrix B be b_{ji}, i.e. B(j, i) = b_{ji}, formula (6) can be written as:
Σ_{i=1}^N ||s_i - Σ_{j∈Ω} b_{ji} s_j||^2 = ||S - SB||_F^2 = Tr(S(I - B)(I - B)^T S^T) = Tr(S M S^T)    (8)
where Tr(·) is the matrix trace and the matrix M is obtained from the data matrix X via the weight matrix B.
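The identity (8) can be checked numerically; the sketch below builds B from pairwise distances as in (7), using the P nearest neighbours with normalised weights (all names are illustrative), and compares the two sides:

```python
import numpy as np

rng = np.random.default_rng(3)
d, N, P, h = 6, 50, 5, 1.0
S = rng.standard_normal((d, N))   # sparse coefficient matrix (stand-in)
X = rng.standard_normal((d, N))   # data matrix used to define similarity

# Column i of B holds the weights b_ji of the P nearest neighbours of x_i
# (excluding x_i itself), normalised so they sum to 1 (the factor c_i).
B = np.zeros((N, N))
for i in range(N):
    d2 = ((X - X[:, [i]]) ** 2).sum(axis=0)
    d2[i] = np.inf                     # exclude x_i itself
    nbrs = np.argsort(d2)[:P]          # the index set Omega
    w = np.exp(-d2[nbrs] / h)
    B[nbrs, i] = w / w.sum()

M = (np.eye(N) - B) @ (np.eye(N) - B).T
lhs = sum(np.linalg.norm(S[:, i] - S @ B[:, i]) ** 2 for i in range(N))
rhs = np.trace(S @ M @ S.T)
print(np.isclose(lhs, rhs))
```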
3) Semi-coupled dictionary learning with the non-local similarity constraint
Formula (3) is a convex optimization problem in each of the three groups of variables {D_x, D_y}, {W} and {S_x, S_y} when the other two groups are fixed, so following the alternating optimization approach of the literature, formula (3) is decomposed into three subproblems, corresponding to (9), (10) and (12), referred to as "dictionary update", "mapping matrix update" and "two-task sparse decomposition".
min_{D_x, D_y} ||X - D_x S_x||_F^2 + ||Y - D_y S_y||_F^2  s.t. ||d_{x,i}||_2 ≤ 1, ||d_{y,i}||_2 ≤ 1, i = 1, 2, ..., k    (9)
The two optimization variables of the dictionary update subproblem (9) are separable, so it can be decomposed into two quadratically constrained quadratic programming (QCQP) problems and solved separately with the Lagrange dual algorithm. The mapping matrix update subproblem can be expressed as:
min_W ||S_x - W S_y||_F^2 + (λ_w/γ)||W||_F^2    (10)
Note that formula (10) is in fact a ridge regression problem with the analytic solution:
W = S_x S_y^T (S_y S_y^T + (λ_w/γ) I)^{-1}    (11)
where I is the identity matrix.
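The closed form (11) can be verified by checking that the gradient of the ridge objective (10) vanishes there; a small numpy sketch with illustrative names and random data:

```python
import numpy as np

rng = np.random.default_rng(4)
d, N = 5, 60
Sx = rng.standard_normal((d, N))
Sy = rng.standard_normal((d, N))
lam_w, gamma = 0.01, 0.5
r = lam_w / gamma

# Closed-form ridge solution: W = Sx Sy^T (Sy Sy^T + (lam_w/gamma) I)^{-1}.
W = Sx @ Sy.T @ np.linalg.inv(Sy @ Sy.T + r * np.eye(d))

# Gradient of ||Sx - W Sy||_F^2 + r ||W||_F^2 must vanish at the optimum.
grad = -2 * (Sx - W @ Sy) @ Sy.T + 2 * r * W
print(np.abs(grad).max() < 1e-8)
```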
Since the sparse-domain mapping matrix W is linear, a bidirectional learning strategy can be used to jointly learn the two-way mapping relations W_{x→y} and W_{y→x} between S_x and S_y, so the mathematical model of the two-task sparse decomposition subproblem can be expressed as:
min_{S_x} ||X - D_x S_x||_F^2 + γ||S_y - W_{x→y} S_x||_F^2 + λ_x||S_x||_1 + α_x Tr(S_x M_x S_x^T)
min_{S_y} ||Y - D_y S_y||_F^2 + γ||S_x - W_{y→x} S_y||_F^2 + λ_y||S_y||_1 + α_y Tr(S_y M_y S_y^T)    (12)
In order to solve the above with the improved Feature-Sign algorithm, the matrices are stacked vertically, letting:
X̃ = [X; √γ·S_y], Ỹ = [Y; √γ·S_x], D̃_x = [D_x; √γ·W_{x→y}], D̃_y = [D_y; √γ·W_{y→x}]
Formula (12) can then be written as the two-task sparse representation problem under the non-local similarity constraint:
min_{S_x} ||X̃ - D̃_x S_x||_F^2 + λ_x||S_x||_1 + α_x Tr(S_x M_x S_x^T)
min_{S_y} ||Ỹ - D̃_y S_y||_F^2 + λ_y||S_y||_1 + α_y Tr(S_y M_y S_y^T)    (13)
The two subproblems of formula (13) can be optimized alternately with the improved Feature-Sign algorithm until a locally optimal solution is reached.
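The stacking step can be checked numerically: with the √γ factor, the stacked residual reproduces the sum of the data term and the mapping term exactly. A small sketch with random matrices (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
m, k, N, d = 10, 15, 20, 15
X = rng.standard_normal((m, N))       # data matrix
Dx = rng.standard_normal((m, k))      # dictionary
Sx = rng.standard_normal((k, N))      # candidate sparse codes
Sy = rng.standard_normal((d, N))      # codes of the other resolution
Wxy = rng.standard_normal((d, k))     # mapping W_{x->y}
gamma = 0.5

# Stacked data and dictionary; note the sqrt(gamma) factor on both parts.
Xt = np.vstack([X, np.sqrt(gamma) * Sy])
Dt = np.vstack([Dx, np.sqrt(gamma) * Wxy])

lhs = np.linalg.norm(Xt - Dt @ Sx, 'fro') ** 2
rhs = (np.linalg.norm(X - Dx @ Sx, 'fro') ** 2
       + gamma * np.linalg.norm(Sy - Wxy @ Sx, 'fro') ** 2)
print(np.isclose(lhs, rhs))
```

Because the stacked problem has the same form as a plain sparse coding problem, any l1 solver that accepts the extra trace term can be applied unchanged.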
4) Sparse-domain reclassification based on mapping error
Moving the classification from the original signal domain into the sparse domain substantially reduces the total mapping error Σ_{i=1}^N ||s_{x,i} - W^(c(i)) s_{y,i}||, so that the learned sparse-domain mapping matrices of all classes are more stable. The sparse-domain reclassification optimization problem based on mapping error can be expressed as:
arg min_c Σ_{i=1}^N ||s_{x,i} - W^(c(i)) s_{y,i}||    (14)
where c is the class label vector of the training signals, with element values from 1 to K. Sparse-domain reclassification based on mapping error in fact assigns each sparse representation coefficient pair {s_{x,i}, s_{y,i}} (corresponding to the original signal pair {x_i, y_i}) to the class whose mapping matrix minimizes its mapping error.
After the above sparse-domain reclassification, the termination condition of the training stage is checked: whether the total mapping error is below a certain threshold δ_1. If so, the high- and low-resolution dictionaries, the sparse-domain mapping matrices and the sparse representation coefficient matrices of each class are output; otherwise, semi-coupled dictionary learning based on the non-local similarity constraint continues on the newly divided classes until the termination condition is met.
II. Reconstruction stage
5) Super-resolution reconstruction
In the training stage, a heuristic strategy alternating sparse-domain classification with semi-coupled dictionary learning is adopted. In the reconstruction stage, the same alternating heuristic strategy is used: sparse representation with non-local similarity and class-wise semi-coupled dictionaries, combined with an error compensation mechanism, reconstructs the image. The main steps are as follows:
(1) Input a low-resolution image z_l and initialize the high-resolution image as the bicubic interpolation of z_l; sample overlapping patches at corresponding image positions and extract features, which after vectorization form the data matrix; classify by solving formula (15):
arg min_c Σ_{i=1}^M ||ŝ_{x,i} - W^(c(i)) s_{y,i}||    (15)
(2) Within each class, solve formula (16):
min_{S_x, S_y} ||X̂ - D_x S_x||_F^2 + ||Y - D_y S_y||_F^2 + λ_w||W||_F^2 + γ||Ŝ_x - W S_y||_F^2 + λ_x||S_x||_1 + λ_y||S_y||_1 + α_x Tr(Ŝ_x M̂_x Ŝ_x^T) + α_y Tr(S_y M_y S_y^T)    (16)
to obtain the sparse representation coefficient matrices of each class.
(3) Check whether the total mapping error is below a certain threshold δ_2. If not, perform sparse-domain reclassification according to formula (15) and return to step (2); if so, reconstruct each patch of the high-resolution image according to formula (17):
x̂_i = D_x^(c(i)) · ŝ_{x,i}    (17)
After averaging the pixel values of the overlapping parts of adjacent patches, the estimate ŷ_h of the high-resolution image is obtained.
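The averaging of overlapping patches can be sketched as follows; a toy numpy example (illustrative names; 5 × 5 patches with a 3-pixel overlap, i.e. a stride of 2, as in the experiments) that accumulates patches and divides by the per-pixel count:

```python
import numpy as np

def assemble_from_patches(patches, positions, patch, H, W):
    """Average overlapping reconstructed patches x_i into a full image.

    patches: list of (patch, patch) arrays; positions: top-left (row, col).
    """
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for p, (r, c) in zip(patches, positions):
        acc[r:r + patch, c:c + patch] += p
        cnt[r:r + patch, c:c + patch] += 1.0
    return acc / np.maximum(cnt, 1.0)   # average where patches overlap

H = W = 9; patch, stride = 5, 2
img = np.arange(H * W, dtype=float).reshape(H, W)
positions = [(r, c) for r in range(0, H - patch + 1, stride)
                    for c in range(0, W - patch + 1, stride)]
patches = [img[r:r + patch, c:c + patch] for r, c in positions]
out = assemble_from_patches(patches, positions, patch, H, W)
print(np.allclose(out, img))   # consistent patches reproduce the image
```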
(4) Error compensation: degrade ŷ_h to a low-resolution image ẑ_l according to formula (4), compute the residual image e, and take the residual image e as the input image of step (1); repeating steps (1)-(3) gives the super-resolution image ŷ_{h,e} of the residual image e, and the final high-resolution image is then:
y_h = ŷ_h + ŷ_{h,e}    (18)
The reconstruction algorithm ends.
Table 1. Average peak signal-to-noise ratio (PSNR, dB) of each algorithm
Algorithm:  Bicubic  SCSR   SISR   ASDS-Reg  NCSR   Proposed
PSNR:       28.40    28.82  29.09  29.99     30.08  30.60
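The PSNR figures in Table 1 follow the standard definition; a minimal sketch, assuming an 8-bit peak value of 255:

```python
import numpy as np

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and an estimate."""
    mse = np.mean((ref.astype(float) - est.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
est = np.full((8, 8), 10.0)            # uniform error of 10 grey levels
print(round(psnr(ref, est), 2))        # 10*log10(255^2 / 100) ≈ 28.13
```

The averages in Table 1 would be obtained by applying this to each of the 24 reconstructed test images and taking the mean.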
Several super-resolution reconstruction algorithms generally regarded as outstanding in recent years were selected for comparison: Bicubic interpolation; the sparse coding super-resolution algorithm (Sparse Coding Super Resolution, SCSR) of Yang et al.; the single-image super-resolution algorithm (Single Image Super Resolution, SISR) of Zeyde et al.; the adaptive sparse domain selection and adaptive regularization algorithm (Adaptive Sparse Domain Selection and Adaptive Regularization, ASDS-Reg) of Dong et al.; and the non-locally centralized sparse representation super-resolution algorithm [23] (Non-locally Centralized Sparse Representation, NCSR) of Dong et al. The training images of all the above methods are taken from the Berkeley Segmentation Data Set and Benchmarks 500 (BSDS500); the test images of all the above methods are taken from the Kodak Lossless True Color Image Suite. The experimental results are shown in Fig. 3. The common conditions and parameters are as follows: down-sampling factor s = 3; the blur operator is a 7 × 7 Gaussian blur operator with standard deviation σ = 1.6; the patch size is 5 × 5, with a 3-pixel overlap between adjacent patches; the parameter values in formula (3) are, in order: γ = 0.5, λ_x = λ_y = 0.1, λ_w = 0.01, α_x = α_y = 0.2.
As can be seen from Fig. 3, compared with Bicubic interpolation, the SCSR algorithm of Yang et al. and the SISR algorithm of Zeyde et al. reconstruct images with sharper edges, but in local regions, such as the texture at the upper left of the eyes, obvious artificial edges and noise remain. The ASDS-Reg algorithm of Dong et al. significantly reduces artificial edges and noise in the reconstructed image, but loses too much texture detail; for example, the texture and edges below the eyes are over-smoothed. The NCSR algorithm reconstructs more detail than ASDS-Reg, but jaggies remain in some textures. The algorithm of this disclosure achieves good results simultaneously in reconstructing texture detail and in suppressing artificial edges and jaggies, and its subjective visual quality is the best among the above methods.
For an objective evaluation of the above super-resolution reconstruction algorithms, each algorithm was applied to the super-resolution reconstruction of the 24 images of the Kodak lossless color image set; Table 1 gives the average PSNR (dB) of the images reconstructed by each algorithm. As can be seen from Table 1, the proposed super-resolution reconstruction algorithm based on non-local similarity and semi-coupled dictionary learning attains the highest peak signal-to-noise ratio and thus the best super-resolution performance.

Claims (1)

1. A single-image super-resolution reconstruction method based on non-local similarity and classification-based semi-coupled dictionary learning, comprising a training stage and a reconstruction stage, characterized in that the method comprises the following steps:
Step 1: each high-resolution image in the training image set is filtered by a blur operator H and a down-sampling operator S, and Gaussian noise v is added, giving the corresponding low-resolution image; this image is then enlarged to the original size by a bicubic interpolation operator A, giving the low-resolution image used for feature extraction. The training data set is obtained as follows: N high-resolution image patches are randomly sampled; four filtered images are obtained using the horizontal and vertical first- and second-order gradient operators as filters, and low-resolution feature patches are sampled at the corresponding positions of said filtered images. After removing smooth patches whose variance is below a certain threshold, the training signals {X, Y} are initially classified into K classes; the classification method is K-means; the threshold here is 5%-10% of the variance over all image patches;
Step 2: let x_i and x_j be two data vectors whose sparse representation vectors under a semi-coupled dictionary D are s_i and s_j respectively; estimate s_i by the weighted average of the sparse representation coefficients of the patches similar to x_i, i.e. the estimation error
Σ_{i=1}^N ||s_i - Σ_{j∈Ω} b_{ji} s_j||^2 = ||S - SB||_F^2 = Tr(S(I - B)(I - B)^T S^T) = Tr(S M S^T)
where Ω is the index set of the P data vectors most similar to x_i, b_{ji} is a weight representing the degree of similarity between x_j and x_i, b_{ji} = (1/c_i)·exp(-||x_i - x_j||^2/h), h is a tunable parameter and c_i is a normalization factor; let the element in row j, column i of the weight matrix B be b_{ji}, i.e. B(j, i) = b_{ji}, where Tr(·) is the matrix trace and the matrix M is obtained from the data matrix X via the weight matrix B, realizing the non-local similarity processing in the sparse domain; S is the sparse coefficient matrix;
Step 3: the semi-coupled dictionary learning model based on the non-local similarity constraint,
min_{D_x, D_y, W, S_x, S_y} ||X - D_x S_x||_F^2 + ||Y - D_y S_y||_F^2 + λ_w||W||_F^2 + γ||S_x - W S_y||_F^2 + λ_x||S_x||_1 + λ_y||S_y||_1 + α_x Tr(S_x M_x S_x^T) + α_y Tr(S_y M_y S_y^T)
s.t. ||d_{x,i}||_2 ≤ 1, ||d_{y,i}||_2 ≤ 1, i = 1, 2, ..., k
wherein X = [x_1, x_2, ..., x_n] and Y = [y_1, y_2, ..., y_n] denote the high- and low-resolution image patch data matrices, D_x and D_y denote the high- and low-resolution dictionaries, and S_x and S_y denote the sparse representation coefficient matrices of the patch data matrices under the corresponding dictionaries;
It is decomposed into three subproblems, is respectively " dictionary updating ", " mapping matrix renewal " and " double task Its Sparse Decomposition ";Dictionary is more New subproblem is expressed as
m i n { D x , D y } | | X - D x S x | | F 2 + | | Y - D y S y | | F 2 s . t . | | d x , i | | 2 ≤ 1 , | | d y , i | | 2 ≤ 1 , i = 1 , 2 , ... , k
Mapping matrix updates subproblem and is expressed as:
m i n W | | S x - WS y | | F 2 + λ w γ | | W | | F 2
The mathematical model of double task Its Sparse Decomposition subproblems is expressed as:
min S x | | X - D x S x | | F 2 + γ | | S y - W x → y S x | | + λ x | | S x | | 1 + α x T r ( S x M x S x T ) min S y | | Y - D y S y | | F 2 + γ | | S x - W y → x S y | | + λ y | | S y | | 1 + α y T r ( S y M y S y T )
Above-mentioned matrix simultaneous, solves the rarefaction representation problem based on non local similarity constraint obtaining double tasks:
\min_{S_x} \|\tilde{X}-\tilde{D}_xS_x\|_F^2+\lambda_x\|S_x\|_1+\alpha_x\,\mathrm{Tr}(S_xM_xS_x^T)
\min_{S_y} \|\tilde{Y}-\tilde{D}_yS_y\|_F^2+\lambda_y\|S_y\|_1+\alpha_y\,\mathrm{Tr}(S_yM_yS_y^T)
where:
\tilde{X}=\begin{bmatrix}X\\ \sqrt{\gamma}\,S_y\end{bmatrix},\quad \tilde{Y}=\begin{bmatrix}Y\\ \sqrt{\gamma}\,S_x\end{bmatrix},\quad \tilde{D}_x=\begin{bmatrix}D_x\\ \sqrt{\gamma}\,W_{x\to y}\end{bmatrix},\quad \tilde{D}_y=\begin{bmatrix}D_y\\ \sqrt{\gamma}\,W_{y\to x}\end{bmatrix}
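With these stacked quantities (the scale factor taken as √γ so that the stacked residual exactly reproduces the fidelity and mapping terms), both tasks reduce to the same template: ℓ1-regularized least squares with an extra quadratic trace penalty, which can be solved by iterative shrinkage/thresholding (ISTA). A sketch under that formulation; the step size and parameter values are illustrative:

```python
import numpy as np

def stack(X, Dx, Sy, W, gamma):
    """Augmented data/dictionary so that ||X_t - D_t S||_F^2
    = ||X - Dx S||_F^2 + gamma ||Sy - W S||_F^2."""
    X_t = np.vstack([X, np.sqrt(gamma) * Sy])
    D_t = np.vstack([Dx, np.sqrt(gamma) * W])
    return X_t, D_t

def ista(X_t, D_t, M, lam=0.05, alpha=0.01, n_iter=200):
    """Minimize ||X_t - D_t S||_F^2 + lam ||S||_1 + alpha Tr(S M S^T)
    by proximal gradient (soft-thresholding) steps."""
    k, n = D_t.shape[1], X_t.shape[1]
    S = np.zeros((k, n))
    # Lipschitz-safe step size from the spectral norms of the quadratic part
    L = 2 * (np.linalg.norm(D_t, 2) ** 2 + alpha * np.linalg.norm(M, 2))
    for _ in range(n_iter):
        grad = 2 * D_t.T @ (D_t @ S - X_t) + alpha * S @ (M + M.T)
        Z = S - grad / L
        S = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)  # prox of lam||.||_1
    return S
```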
Step 4: the sparse-domain reclassification optimization problem based on mapping error is expressed as:
\arg\min_{c}\sum_{i=1}^{N}\|s_{x,i}-W^{(c(i))}s_{y,i}\|
Wherein, c is the class label vector of the training signals, whose elements take values from 1 to k;
After the above sparse-domain reclassification, judge the termination condition of the training stage: whether the total mapping error \sum_{i=1}^{N}\|s_{x,i}-W^{(c(i))}s_{y,i}\| is less than a threshold δ_1. If so, output the high- and low-resolution dictionaries of all classes, the sparse-domain mapping matrices of all classes, and the sparse representation coefficient matrices of all classes; otherwise, continue to perform semi-coupled dictionary learning based on the non-local similarity constraint within each newly divided class, until the termination condition is met. That is, the training stage adopts a heuristic strategy that alternates between sparse-domain classification and semi-coupled dictionary learning;
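The alternation between sparse-domain classification and per-class learning can be sketched as follows; class labels are 0-based here for convenience (the text uses 1 to k), and only the classification and error-checking half of the loop is shown, the per-class semi-coupled learning itself being the procedure of Step 3:

```python
import numpy as np

def classify_by_mapping_error(Sx, Sy, Ws):
    """Assign each training pair (s_x,i, s_y,i) to the class whose mapping
    matrix W^(c) gives the smallest mapping error ||s_x,i - W^(c) s_y,i||."""
    # errs[c, i] = mapping error of pair i under class c's mapping matrix
    errs = np.stack([np.linalg.norm(Sx - W @ Sy, axis=0) for W in Ws])
    return np.argmin(errs, axis=0)          # class label vector c (0-based)

def total_mapping_error(Sx, Sy, Ws, c):
    """Total mapping error under the class assignment c (the delta_1 test)."""
    return sum(np.linalg.norm(Sx[:, i] - Ws[c[i]] @ Sy[:, i])
               for i in range(Sx.shape[1]))
```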
Step 5: in the reconstruction stage, use the same alternating heuristic strategy, reconstructing the image by means of non-local similarity and class-wise semi-coupled sparse representation combined with an error compensation mechanism. This step further includes the following processes:
(1) Input the low-resolution image z_l and initialize the high-resolution image as the bicubic interpolation of z_l; sample patches with overlap at corresponding positions of the images and extract features, which after vectorization constitute the data matrices; solve the following formula to classify:
\arg\min_{c}\sum_{i=1}^{M}\|\hat{s}_{x,i}-W^{(c(i))}s_{y,i}\|
(2) Within each class, solve the formula:
\min_{\{S_x,S_y\}} \|\hat{X}-D_xS_x\|_F^2+\|Y-D_yS_y\|_F^2+\lambda_w\|W\|_F^2+\gamma\|\hat{S}_x-WS_y\|_F^2+\lambda_x\|S_x\|_1+\lambda_y\|S_y\|_1+\alpha_x\,\mathrm{Tr}(\hat{S}_x\hat{M}_x\hat{S}_x^T)+\alpha_y\,\mathrm{Tr}(S_yM_yS_y^T)
wherein \hat{s}_{x,i} and s_{y,i} denote the sparse representation vectors of \hat{x}_i and y_i under the high- and low-resolution dictionaries, and M is the number of sampled patches; the sparse representation coefficient matrices of all classes are thus obtained;
(3) Judge whether the total mapping error \sum_{i=1}^{M}\|\hat{s}_{x,i}-W^{(c(i))}s_{y,i}\| is less than a threshold δ_2; if not, perform the sparse-domain reclassification of process (1) and return to process (2); if so, reconstruct each patch of the high-resolution image according to the following formula:
\hat{x}_i=D_x^{(c(i))}\cdot\hat{s}_{x,i}
After averaging the pixel values where adjacent image patches overlap, the estimate \hat{y}_h of the high-resolution image is obtained;
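The overlap-averaging step can be sketched as follows, assuming square patches sampled on a known grid of top-left positions (function and argument names are illustrative):

```python
import numpy as np

def assemble_patches(patches, positions, patch_size, img_shape):
    """Place each reconstructed patch x_i at its position (top-left corner)
    and average the pixel values where adjacent patches overlap."""
    acc = np.zeros(img_shape)   # accumulated pixel values
    cnt = np.zeros(img_shape)   # how many patches cover each pixel
    p = patch_size
    for patch, (r, c) in zip(patches, positions):
        acc[r:r+p, c:c+p] += patch.reshape(p, p)
        cnt[r:r+p, c:c+p] += 1
    return acc / np.maximum(cnt, 1)   # avoid division by zero off-grid
```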
(4) Degrade \hat{y}_h to a low-resolution image \hat{z}_l and compute the residual image e = z_l - \hat{z}_l; taking the residual image e as the input image of process (1), repeat processes (1)-(3) to obtain the super-resolution image \hat{y}_{h,e} of the residual image e, and then obtain the final high-resolution image:
y_h=\hat{y}_h+\hat{y}_{h,e}
This realizes error compensation, and the reconstruction algorithm terminates.
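The error compensation of process (4) can be sketched end to end. The degradation operator here (plain 2×2 block averaging) and the stand-in super-resolver passed as a callback are illustrative assumptions, not the patent's exact operators:

```python
import numpy as np

def degrade(img, s=2):
    """Simulate the LR degradation: s x s block averaging (illustrative)."""
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def compensate(z_l, y_hat, super_resolve, s=2):
    """Error compensation of process (4): degrade the current HR estimate,
    super-resolve the LR residual, and add the result back."""
    e = z_l - degrade(y_hat, s)   # residual image in the LR domain
    y_hat_e = super_resolve(e)    # SR of the residual (processes (1)-(3))
    return y_hat + y_hat_e        # y_h = y_hat + y_hat_e
```

With a linear upsampler as the stand-in super-resolver, the compensated estimate is exactly consistent with the LR input, i.e. degrading it reproduces z_l.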
CN201310629075.1A 2013-11-28 A kind of single image super resolution ratio reconstruction method Expired - Fee Related CN103617607B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310629075.1A CN103617607B (en) 2013-11-28 A kind of single image super resolution ratio reconstruction method

Publications (2)

Publication Number Publication Date
CN103617607A CN103617607A (en) 2014-03-05
CN103617607B true CN103617607B (en) 2016-11-30

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077511A (en) * 2013-01-25 2013-05-01 西安电子科技大学 Image super-resolution reconstruction method based on dictionary learning and structure similarity
CN103295196A (en) * 2013-05-21 2013-09-11 西安电子科技大学 Super-resolution image reconstruction method based on non-local dictionary learning and biregular terms

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Nonlocally Centralized Sparse Representation for Image Restoration; Weisheng Dong et al.; IEEE Transactions on Image Processing; Vol. 22, No. 4; 1620-1630 *
Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch synthesis; Shenlong Wang et al.; 2012 IEEE Conference on Computer Vision and Pattern Recognition; 2216-2223 *

Similar Documents

Publication Publication Date Title
CN102354397B (en) Method for reconstructing human facial image super-resolution based on similarity of facial characteristic organs
DE69937897T2 (en) SYSTEM AND METHOD FOR 4D RECONSTRUCTION AND PRESENTATION
CN106934766A (en) A kind of infrared image super resolution ratio reconstruction method based on rarefaction representation
CN104050653B (en) Hyperspectral image super-resolution method based on non-negative structure sparse
CN109886986A (en) A kind of skin lens image dividing method based on multiple-limb convolutional neural networks
CN103985105B (en) Contourlet territory based on statistical modeling multimode medical image fusion method
CN106952228A (en) The super resolution ratio reconstruction method of single image based on the non local self-similarity of image
CN106780342A (en) Single-frame image super-resolution reconstruction method and device based on the reconstruct of sparse domain
CN107633486A (en) Structure Magnetic Resonance Image Denoising based on three-dimensional full convolutional neural networks
CN106204447A (en) The super resolution ratio reconstruction method with convolutional neural networks is divided based on total variance
CN106204449A (en) A kind of single image super resolution ratio reconstruction method based on symmetrical degree of depth network
CN106228512A (en) Based on learning rate adaptive convolutional neural networks image super-resolution rebuilding method
CN112465827A (en) Contour perception multi-organ segmentation network construction method based on class-by-class convolution operation
CN109360152A (en) 3 d medical images super resolution ratio reconstruction method based on dense convolutional neural networks
CN107341765A (en) A kind of image super-resolution rebuilding method decomposed based on cartoon texture
CN104156994A (en) Compressed sensing magnetic resonance imaging reconstruction method
CN106600533B (en) Single image super resolution ratio reconstruction method
CN106157249A (en) Based on the embedded single image super-resolution rebuilding algorithm of optical flow method and sparse neighborhood
CN104008537A (en) Novel noise image fusion method based on CS-CT-CHMM
CN105654425A (en) Single-image super-resolution reconstruction method applied to medical X-ray image
DE112020005584T5 (en) Occlusion-aware interior scene analysis
Zhang et al. FDGNet: A pair feature difference guided network for multimodal medical image fusion
CN104299201B (en) Image reconstruction method based on heredity sparse optimization
CN109559278B (en) Super resolution image reconstruction method and system based on multiple features study
Xia et al. Deep residual neural network based image enhancement algorithm for low dose CT images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20161130

Termination date: 20201128