CN102629374A - Image super resolution (SR) reconstruction method based on subspace projection and neighborhood embedding - Google Patents

Image super resolution (SR) reconstruction method based on subspace projection and neighborhood embedding

Info

Publication number
CN102629374A
Authority
CN
China
Prior art keywords
resolution
training
feature vector
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012100498041A
Other languages
Chinese (zh)
Other versions
CN102629374B (en)
Inventor
李小燕
和红杰
尹忠科
陈帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN201210049804.1A priority Critical patent/CN102629374B/en
Publication of CN102629374A publication Critical patent/CN102629374A/en
Application granted granted Critical
Publication of CN102629374B publication Critical patent/CN102629374B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an image super-resolution (SR) reconstruction method based on subspace projection and neighborhood embedding. First- and second-level subspace projections map the original high-dimensional data onto a low-dimensional space, and the resulting dimension-reduced feature vectors represent the features of the low-resolution image blocks while preserving both the global and the local structure of the original data. Euclidean distances between the dimension-reduced feature vectors are compared in the low-dimensional space to find the neighborhood blocks that best match the low-resolution image block to be reconstructed, which increases both search speed and matching precision. Similarities and scale factors between the feature vectors are then constructed to compute accurate embedding weight coefficients, so that more high-frequency information can be obtained from the training database. Finally, the high-resolution image block is estimated with high precision from the weight coefficients and the neighborhood blocks, and the reconstructed image closely resembles the real object, which benefits subsequent object-recognition processing.

Description

Image super-resolution reconstruction method based on subspace projection and neighborhood embedding
Technical field
The present invention relates to image processing methods, and in particular to an image super-resolution reconstruction method based on subspace projection and neighborhood embedding.
Background technology
Image super-resolution (Super Resolution, SR) reconstruction refers to reconstructing a high-resolution (High Resolution, HR) image by digital image processing from the mutual information contained in one or more low-resolution (Low Resolution, LR) images of the same scene captured at different viewing angles, at different times, or by different sensors. Under hardware constraints such as those of a charge-coupled device (Charge Coupled Device, CCD), this technique estimates the lost high-frequency information from one or more low-resolution images. In fields such as remote sensing, military reconnaissance, medical imaging, security surveillance and traffic management, image super-resolution reconstruction not only has important research value but also a wide range of applications.
The essence of image super-resolution reconstruction is the accurate estimation of the lost high-frequency information, a classic one-to-many ill-posed problem. Both the theory and its applications are still at an exploratory stage, and the time efficiency of reconstruction methods and the quality of the reconstructed images remain to be improved. Existing methods fall into three main categories: interpolation, reconstruction-based and learning-based. Interpolation methods estimate the value at an interpolation point from known neighboring points and have difficulty recovering lost high-frequency information. Reconstruction-based methods estimate high-frequency information from the mutual information among several low-resolution images and perform better than interpolation, but they require several accurately registered low-resolution images, and the limited precision of most current registration algorithms degrades the quality of the reconstructed image. Learning-based methods first learn the correspondence between low-resolution and high-resolution image blocks in a training database and then use this relation to estimate the high-resolution block corresponding to the low-resolution block to be reconstructed. They can perform single-image super-resolution without registration and can obtain more high-frequency information from the training database, and therefore achieve better reconstruction; learning-based methods are currently the focus of research on image super-resolution reconstruction.
Super-resolution reconstruction based on neighborhood embedding is an important branch of the learning-based approach and has achieved good results in recent years. Chang et al. (document 1: H. Chang, D.-Y. Yeung, and Y. Xiong, "Super-resolution through neighbor embedding", IEEE Conference on Computer Vision and Pattern Recognition, pp. 275-282, 2004) first proposed using neighborhood embedding for image super-resolution. Under the assumption that low-resolution image blocks and their corresponding high-resolution blocks share similar local geometric structure, this method uses the locally linear embedding (Locally Linear Embedding, LLE) algorithm to compute, for each low-resolution block to be reconstructed, the embedding weight coefficients of several training low-resolution blocks similar to it; each similar training block serves as a neighborhood block, and the weight coefficients and the corresponding neighborhood blocks are combined to estimate the reconstructed high-resolution block. However, this method does not consider the influence of the image-block type or the number of neighborhood blocks on the reconstruction result. Chan et al. (document 2: T.-M. Chan, J. Zhang, J. Pu, and H. Huang, "Neighbor embedding based super-resolution algorithm through edge detection and feature selection", Pattern Recognition Letters, vol. 30, pp. 494-502, 2009) subsequently proposed a neighborhood-embedding algorithm based on edge detection and feature selection, which chooses the number of neighborhood blocks according to the block type (edge or non-edge); however, when the input low-resolution image is severely blurred, the block type is hard to determine and the reconstruction quality degrades. Zhang et al. (document 3: K. Zhang, X. Gao, X. Li, and D. Tao, "Partially supervised neighbor embedding for example-based image super-resolution", IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 2, pp. 230-239, 2011) used class-label information to construct a semi-supervised distance function and select, from the training database, neighborhood blocks that better match the block to be reconstructed; the reconstruction quality of this method depends to some extent on the recognition accuracy of the classifier. The algorithms of documents 1-3 all search for neighborhood blocks in the original high-dimensional feature space; because the dimension of the original feature vectors is very large, relying only on a distance function yields neighborhood blocks whose similarity to the block to be reconstructed is not high. Feature dimensionality reduction is a feasible way to address this problem. Following this idea, Gao et al. (document 4: X. Gao, K. Zhang, D. Tao, and X. Li, "Joint learning for single image super-resolution via coupled constraint", IEEE Transactions on Image Processing, 2011) proposed a joint learning method that projects the training low-resolution and high-resolution feature vectors simultaneously onto a low-dimensional feature subspace and then searches, in this subspace, for neighborhood blocks matching the block to be reconstructed. Although document 4 uses feature dimensionality reduction, projecting the original low- and high-resolution feature vectors simultaneously onto a single unified low-dimensional subspace inevitably damages useful information in the original high-resolution feature vectors, whose dimension is larger than that of the low-resolution ones, and this harms the search for matching neighborhood blocks. In addition, the algorithms of documents 1-4 all use the locally linear embedding algorithm to compute the embedding weight coefficients; the computed coefficients may be negative, which affects how much high-frequency information the neighboring training high-resolution blocks can provide, and the algorithm has high computational complexity and slow speed.
Summary of the invention
The object of the present invention is to provide an image super-resolution reconstruction method based on subspace projection and neighborhood embedding that achieves more effective feature dimensionality reduction, estimates the reconstructed high-resolution image blocks more accurately, and produces reconstructed images with higher similarity to the real object, i.e. a better super-resolution reconstruction effect.
The technical solution adopted by the present invention to solve this technical problem is as follows:
A. Training:
Take L high-resolution images of identical resolution and identical size as training high-resolution images, l = 1, 2, ..., L, L = 3~80. Partition each training high-resolution image into overlapping blocks to obtain N_1 training high-resolution image blocks of size z*z, N_1 = 1000~7000, z = 6, 9, 12, 15, giving N = L*N_1 training high-resolution image blocks in total. Extract the normalized luminance of each training high-resolution image block as a training high-resolution normalized-luminance image block, and convert the i-th such block, column by column, into the i-th training high-resolution feature vector $y_i^s$, i = 1, 2, ..., N. Each training high-resolution feature vector $y_i^s$ has dimension d_1 = z^2; all training high-resolution feature vectors $y_i^s$ (1 ≤ i ≤ N) form the training high-resolution feature matrix $Y^s = [y_1^s, y_2^s, \ldots, y_N^s]$.
Down-sample the l-th (1 ≤ l ≤ L) training high-resolution image by a factor a, a = 2, 3, 4, 5, to obtain the corresponding l-th training low-resolution image, and up-sample every training low-resolution image by the factor a to obtain the corresponding training interpolated image. Extract the first-order horizontal, first-order vertical, second-order horizontal and second-order vertical gradient features of every training interpolated image and partition these four gradient-feature images into overlapping blocks; each training interpolated image yields 4*N_1 training low-resolution gradient-feature image blocks of size z*z, giving 4*N training low-resolution gradient-feature image blocks in total. Every four training low-resolution gradient-feature image blocks form one training low-resolution gradient-feature image block group; convert the i-th group, column by column, into the i-th training low-resolution feature vector $x_i^s$. Each training low-resolution feature vector $x_i^s$ has dimension d_2 = 4*z^2; all training low-resolution feature vectors $x_i^s$ (1 ≤ i ≤ N) form the training low-resolution feature matrix $X^s = [x_1^s, x_2^s, \ldots, x_N^s]$.
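As an illustration of step A, the sketch below shows one way the block extraction and gradient-feature construction could be implemented in Python. It is a minimal sketch, not the patented implementation: the helper names (extract_patches, gradient_features, lr_feature_matrix, hr_feature_matrix), the bicubic resampling, the block step, and the interpretation of "normalized luminance" as mean-removed luminance are assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def extract_patches(img, z, step):
    """Collect overlapping z*z blocks (raster scan) as columns, flattened column by column."""
    H, W = img.shape
    cols = []
    for r in range(0, H - z + 1, step):
        for c in range(0, W - z + 1, step):
            cols.append(img[r:r + z, c:c + z].flatten(order="F"))
    return np.stack(cols, axis=1)                       # shape (z*z, number of blocks)

def gradient_features(img):
    """First- and second-order horizontal and vertical gradients of an interpolated image."""
    g1y, g1x = np.gradient(img)                         # first-order vertical, horizontal
    g2y = np.gradient(g1y, axis=0)                      # second-order vertical
    g2x = np.gradient(g1x, axis=1)                      # second-order horizontal
    return g1x, g1y, g2x, g2y

def lr_feature_matrix(hr_img, a=3, z=9, step=3):
    """Training LR feature vectors x_i^s (dimension 4*z*z) for one training HR image."""
    lr = zoom(hr_img, 1.0 / a, order=3)                 # a-fold down-sampling
    fy = hr_img.shape[0] / lr.shape[0]
    fx = hr_img.shape[1] / lr.shape[1]
    interp = zoom(lr, (fy, fx), order=3)                # a-fold up-sampling -> interpolated image
    return np.vstack([extract_patches(g, z, step) for g in gradient_features(interp)])

def hr_feature_matrix(hr_img, z=9, step=3):
    """Training HR feature vectors y_i^s: per-block mean-removed luminance."""
    P = extract_patches(hr_img, z, step)
    return P - P.mean(axis=0, keepdims=True)            # subtract each block's luminance mean
```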
B. Pre-processing:
Input the low-resolution image R_d to be reconstructed, whose resolution is the same as that of every training low-resolution image (1 ≤ l ≤ L). Up-sample R_d by the factor a to obtain the interpolated image R_c to be reconstructed, extract the first-order horizontal, first-order vertical, second-order horizontal and second-order vertical gradient features of R_c, and partition these four gradient-feature images into overlapping blocks, obtaining 4*N_2 to-be-reconstructed low-resolution gradient-feature image blocks of size z*z in total, N_2 = 1000~7000. Every four to-be-reconstructed low-resolution gradient-feature image blocks form one to-be-reconstructed low-resolution gradient-feature image block group; convert the j-th group, column by column, into the j-th to-be-reconstructed low-resolution feature vector $x_j^t$, j = 1, 2, ..., N_2. Each to-be-reconstructed low-resolution feature vector $x_j^t$ has the same dimension as the training low-resolution feature vectors $x_i^s$ (1 ≤ i ≤ N), namely d_2 = 4*z^2; all to-be-reconstructed low-resolution feature vectors $x_j^t$ (1 ≤ j ≤ N_2) form the to-be-reconstructed low-resolution feature matrix $X^t = [x_1^t, x_2^t, \ldots, x_{N_2}^t]$.
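Step B mirrors the gradient-feature pipeline of step A for the input image. A minimal sketch, reusing the assumed helpers extract_patches and gradient_features from the step-A sketch, is:

```python
import numpy as np
from scipy.ndimage import zoom

def test_feature_matrix(lr_img, a=3, z=9, step=3):
    """To-be-reconstructed LR feature vectors x_j^t (columns of X^t) and per-block
    luminance means of the interpolated image R_c, needed again in step C4."""
    interp = zoom(lr_img, a, order=3)                   # a-fold up-sampling of R_d -> R_c
    X_t = np.vstack([extract_patches(g, z, step) for g in gradient_features(interp)])
    means = extract_patches(interp, z, step).mean(axis=0)
    return X_t, means
```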
C. Super-resolution reconstruction:
C1. Sub-feature matrix generation: for each to-be-reconstructed low-resolution feature vector $x_j^t$ (1 ≤ j ≤ N_2) obtained in step B, search the training low-resolution feature matrix $X^s$ obtained in step A for the n-1 = 100~300 training low-resolution feature vectors with the smallest Euclidean distance to $x_j^t$. Take $x_j^t$ itself as the sub-feature vector $x_{1j}$, and take the n-1 found training low-resolution feature vectors, in order of increasing Euclidean distance to $x_j^t$, as the sub-feature vectors $x_{2j}, x_{3j}, \ldots, x_{nj}$, thereby forming the j-th sub-feature matrix $X_j = \{x_{i'j}\}_{i'=1}^{n}$.
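A sketch of the nearest-neighbor search of step C1, assuming the feature matrices are stored with one feature vector per column, could look like this:

```python
import numpy as np

def sub_feature_matrix(x_t_j, X_s, n=101):
    """Step C1 sketch: x_j^t becomes the first column, followed by its n-1 nearest
    training LR feature vectors in ascending order of Euclidean distance."""
    dist = np.linalg.norm(X_s - x_t_j[:, None], axis=0)   # distance to every training vector
    idx = np.argsort(dist)[: n - 1]                       # indices of the n-1 closest vectors
    X_j = np.column_stack([x_t_j, X_s[:, idx]])           # sub-feature matrix X_j (d_2 x n)
    return X_j, idx                                       # idx maps columns 2..n back into X^s
```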
C2. Dimension-reduced feature matrix generation: apply the first- and second-level subspace projection method to the j-th sub-feature matrix $X_j$ to perform feature dimensionality reduction: each sub-feature vector $x_{i'j}$ (1 ≤ i' ≤ n) is projected onto a low-dimensional subspace, giving n dimension-reduced feature vectors $x_{i'j}^m$ in total, each of dimension m < d_2. Each sub-feature vector $x_{i'j}$ is in one-to-one correspondence with its dimension-reduced feature vector $x_{i'j}^m$: after the first- and second-level subspace projection, the sub-feature vector $x_{i'j}$ of dimension d_2 is converted into the dimension-reduced feature vector $x_{i'j}^m$ of dimension m, realizing feature dimensionality reduction. All dimension-reduced feature vectors $x_{i'j}^m$ (1 ≤ i' ≤ n) form the j-th dimension-reduced feature matrix $X_j^m = \{x_{i'j}^m\}_{i'=1}^{n}$.
C3. Weight coefficient calculation: in the set of dimension-reduced feature vectors $\{x_{i'j}^m\}_{i'=2}^{n}$ obtained in step C2, search for the k dimension-reduced feature vectors with the smallest Euclidean distance to the dimension-reduced feature vector $x_{1j}^m$, k = 5~10. Using the one-to-one correspondence between the sub-feature vectors $x_{i'j}$ of step C1 and the dimension-reduced feature vectors $x_{i'j}^m$ of step C2 together with the index numbers of the k found dimension-reduced feature vectors, locate the corresponding k training low-resolution and high-resolution feature vectors in the training low-resolution and high-resolution feature matrices $X^s$ and $Y^s$ obtained in step A; these k training low-resolution and high-resolution feature vectors form the j-th neighbor low-resolution feature matrix and the j-th neighbor high-resolution feature matrix respectively. From the j-th neighbor low-resolution feature matrix and the j-th to-be-reconstructed low-resolution feature vector $x_j^t$, calculate the j-th similarity group and the j-th scale-factor group $\{c_{i''j}\}_{i''=1}^{k}$, and combine them to calculate the j-th normalized embedding weight coefficient group $\{\hat{w}_{i''j}\}_{i''=1}^{k}$.
C4. Reconstruction of the high-resolution image block: according to the linear weighted combination of the j-th normalized embedding weight coefficient group $\{\hat{w}_{i''j}\}_{i''=1}^{k}$ and the j-th neighbor high-resolution feature matrix, estimate the j-th reconstructed high-resolution feature vector; then add the luminance mean of the j-th to-be-reconstructed low-resolution image block and convert the result back into image-block form to obtain the j-th reconstructed high-resolution image block.
Repeat steps C1 to C4 to obtain N_2 reconstructed high-resolution image blocks of size z*z in total; stitching the overlapping blocks yields the preliminary super-resolution image H_0.
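A sketch of the block estimation and overlap stitching of step C4 follows. Averaging the overlapped pixels is an assumption; the text only states that the reconstructed blocks are stitched with overlap.

```python
import numpy as np

def reconstruct_block(w_hat, Y_neigh, block_mean, z=9):
    """Step C4 sketch: weighted combination of the k neighbor HR feature vectors plus
    the luminance mean of the to-be-reconstructed LR block, reshaped to a z*z block."""
    y_hat = Y_neigh @ w_hat + block_mean                  # reconstructed HR feature vector
    return y_hat.reshape(z, z, order="F")                 # back to image-block form

def stitch_blocks(blocks, img_shape, z=9, step=3):
    """Overlap-average the reconstructed blocks (raster order) into the preliminary image H_0."""
    acc = np.zeros(img_shape)
    cnt = np.zeros(img_shape)
    k = 0
    for r in range(0, img_shape[0] - z + 1, step):
        for c in range(0, img_shape[1] - z + 1, step):
            acc[r:r + z, c:c + z] += blocks[k]
            cnt[r:r + z, c:c + z] += 1
            k += 1
    return acc / np.maximum(cnt, 1)
```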
D. Post-processing:
Apply the iterative back-projection algorithm to the preliminary super-resolution image H_0 for Q iterations, Q = 10~30, to obtain the final super-resolution image H*.
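Step D uses the classic iterative back-projection scheme. The sketch below is one common formulation; the Gaussian blur model, the step size lam and the bicubic resampling are assumptions, since the text only specifies the algorithm name and the iteration count Q.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def iterative_back_projection(H0, R_d, a=3, Q=20, sigma=1.0, lam=0.1):
    """Step D sketch: repeatedly simulate the LR image from the current estimate,
    up-sample the residual against the observed LR image and add it back."""
    H = H0.copy()
    fy = R_d.shape[0] / H0.shape[0]
    fx = R_d.shape[1] / H0.shape[1]
    for _ in range(Q):
        sim_lr = zoom(gaussian_filter(H, sigma), (fy, fx), order=3)
        h = min(sim_lr.shape[0], R_d.shape[0]); w = min(sim_lr.shape[1], R_d.shape[1])
        residual = R_d[:h, :w] - sim_lr[:h, :w]
        up = zoom(residual, (1 / fy, 1 / fx), order=3)
        h2 = min(up.shape[0], H.shape[0]); w2 = min(up.shape[1], H.shape[1])
        H[:h2, :w2] += lam * up[:h2, :w2]                 # back-project the residual
    return H
```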
Compared with the prior art, the beneficial effects of the invention are as follows:
1. The invention uses the first- and second-level subspace projection method to project the sub-feature matrix onto a low-dimensional space and obtain the corresponding dimension-reduced feature matrix, which reduces the dimension of the original low-resolution feature vectors as far as possible and represents the features of the low-resolution image blocks more effectively. The time and space complexity of the method therefore decrease and its speed increases.
2. The invention compares Euclidean distances between dimension-reduced feature vectors directly in the low-dimensional space rather than in the original high-dimensional space. Because the dimension-reduced feature matrix preserves both the global and the local structure of the original data, the neighborhood blocks that best match the block to be reconstructed can be found more efficiently from the dimension-reduced feature vectors. The invention therefore achieves higher matching precision in the neighborhood-block search.
3. The invention exploits the approximately proportional relation between the weight coefficient and the similarity of two feature vectors: it constructs the similarities and corresponding scale factors between the to-be-reconstructed low-resolution feature vector and the neighbor low-resolution feature vectors and computes more accurate weight coefficients, which helps estimate the high-resolution image block. The invention therefore achieves higher estimation accuracy and stronger adaptability.
In summary, the method projects the original high-dimensional data onto a low-dimensional space by first- and second-level subspace projection, making the feature representation of the low-resolution image blocks more compact and efficient; it finds better-matching neighborhood blocks and constructs more accurate weight coefficients through the similarities and scale factors between feature vectors, so the reconstructed high-resolution blocks are estimated more accurately, the super-resolution reconstruction effect is better, the reconstructed image is more similar to the real object, and subsequent object-recognition processing is facilitated.
In step C2 described above, the first- and second-level subspace projection method reduces the j-th sub-feature matrix $X_j$ to the j-th dimension-reduced feature matrix $X_j^m$ as follows:
First-level subspace projection:
Compute the j-th radial basis kernel matrix corresponding to the j-th sub-feature matrix $X_j$, $K_j = \{K_{i'j'}^{j}\}_{i',j'=1}^{n} = \{\exp(-\|x_{i'j} - x_{j'j}\|^2/\beta_j)\}_{i',j'=1}^{n}$, where $\beta_j$ is the parameter of the j-th radial basis kernel. Then center the j-th radial basis kernel matrix $K_j$ to obtain the j-th centered kernel matrix $\bar{K}_j = G K_j G$, where $G = I - \frac{1}{n}\mathbf{1}$, I is the n*n identity matrix and $\mathbf{1}$ is an n*n matrix all of whose entries are 1. Solve the descending eigenvalue decomposition of the j-th centered kernel matrix, $\bar{K}_j \alpha_{i'j} = \lambda_{i'j} \alpha_{i'j}$, i' = 1, 2, 3, ..., n, j = 1, 2, 3, ..., N_2, where $\lambda_{i'j}$ is the i'-th eigenvalue after sorting in descending order and $\alpha_{i'j}$ is the eigenvector corresponding to the i'-th eigenvalue $\lambda_{i'j}$. Orthogonalize each eigenvector $\alpha_{i'j}$ to obtain the orthogonalized eigenvectors, and select the orthogonalized eigenvectors corresponding to the first r largest eigenvalues $\lambda_{i'j}$, r = 50~n; the r selected orthogonalized eigenvectors form the j-th first-level subspace projection matrix, and the sum of the first r largest eigenvalues accounts for more than 99% of the sum of all eigenvalues. Applying the transposed j-th first-level subspace projection matrix (T denotes the matrix transpose) to the centered kernel matrix yields the j-th first-level mapping matrix $X_j^r$; every first-level mapping vector $x_{i'j}^r$ (1 ≤ i' ≤ n) has dimension r.
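The first-level projection is, in effect, a kernel principal component analysis of the sub-feature matrix with a radial basis kernel. The sketch below follows that reading; the choice of the kernel parameter beta, the standard kernel-PCA normalization of the eigenvectors, and the use of the 99% energy rule to pick r are assumptions where the source text leaves details to equation images.

```python
import numpy as np

def first_level_projection(X_j, beta, energy=0.99):
    """First-level subspace projection sketch: RBF-kernel PCA on the n columns of X_j.
    Returns the first-level mapping matrix X_j^r of shape (r, n)."""
    n = X_j.shape[1]
    sq = np.sum(X_j ** 2, axis=0)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X_j.T @ X_j     # pairwise squared distances
    K = np.exp(-D2 / beta)                                 # radial basis kernel matrix K_j
    G = np.eye(n) - np.ones((n, n)) / n
    Kc = G @ K @ G                                         # centered kernel matrix
    lam, A = np.linalg.eigh(Kc)                            # eigenvalues in ascending order
    lam, A = np.clip(lam[::-1], 0, None), A[:, ::-1]       # re-order to descending
    r = int(np.searchsorted(np.cumsum(lam) / lam.sum(), energy)) + 1   # keep >= 99% energy
    A_r = A[:, :r] / np.sqrt(lam[:r] + 1e-12)              # orthonormalized eigenvectors
    return A_r.T @ Kc                                      # first-level mapping matrix X_j^r
```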
Second-level subspace projection:
Compute the j-th kernel distance matrix corresponding to the j-th first-level mapping matrix $X_j^r$, $D_j^r = \{D_{i'j'}^{jr}\}_{i',j'=1}^{n} = \{2 - 2\exp(-\|x_{i'j}^r - x_{j'j}^r\|^2/\gamma_j)\}_{i',j'=1}^{n}$, where $\gamma_j$ is the parameter of the j-th kernel distance function. Then construct the j-th adjacency matrix $S_j = \{S_{i'j'}^{j}\}_{i',j'=1}^{n}$: when $i' \in \Lambda_{j'j}$ or $j' \in \Lambda_{i'j}$, $S_{i'j'}^{j}$ takes the corresponding kernel similarity value; otherwise $S_{i'j'}^{j} = 0$. Here $\Lambda_{j'j}$ and $\Lambda_{i'j}$ denote the neighborhood label sets of $x_{j'j}^r$ and $x_{i'j}^r$ respectively, and each neighborhood label set contains b = 5~10 elements. Compute the j-th Laplacian matrix $F_j = \mathrm{diag}(S_j \mathbf{1}_n) - S_j$ corresponding to the j-th adjacency matrix $S_j$, where $\mathbf{1}_n$ denotes an n*1 column vector all of whose entries are 1 and diag(·) denotes the matrix diagonalization operation. Then compute the j-th symmetric matrix $U_j = X_j^r F_j (X_j^r)^T$ and the j-th matrix $V_j = X_j^r \,\mathrm{diag}(S_j \mathbf{1}_n)\, (X_j^r)^T$; both $U_j$ and $V_j$ are r*r matrices. Jointly solve the generalized ascending eigenvalue decomposition of $U_j$ and $V_j$, where $p_{i'j}$ is the eigenvector corresponding to the i'-th eigenvalue after sorting in ascending order. Select the eigenvectors $p_{i'j}$ corresponding to the first m smallest eigenvalues, m = 10~r; the m selected eigenvectors form the j-th second-level subspace projection matrix $P_j = [p_{1j}, p_{2j}, \ldots, p_{mj}]$, and projecting the first-level mapping matrix with it, $X_j^m = P_j^T X_j^r$, yields the j-th dimension-reduced feature matrix $X_j^m$. Every dimension-reduced feature vector $x_{i'j}^m$ (1 ≤ i' ≤ n) has dimension m.
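The second-level projection is a locality-preserving step on the first-level mapping vectors. The sketch below solves the generalized eigenvalue problem of U_j and V_j with scipy; using the RBF similarity as the adjacency weight and adding a small ridge to V_j for numerical stability are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def second_level_projection(X_r, gamma, b=5, m=22):
    """Second-level subspace projection sketch on the columns of X_j^r (shape r x n).
    Returns the dimension-reduced feature matrix X_j^m of shape (m, n)."""
    n = X_r.shape[1]
    sq = np.sum(X_r ** 2, axis=0)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X_r.T @ X_r
    W = np.exp(-D2 / gamma)                                # kernel similarity (distance = 2 - 2W)
    nb = np.argsort(D2, axis=1)[:, 1:b + 1]                # b nearest neighbors of each vector
    mask = np.zeros((n, n), dtype=bool)
    mask[np.repeat(np.arange(n), b), nb.ravel()] = True
    mask |= mask.T                                         # i' in Lambda_{j'j} or j' in Lambda_{i'j}
    S = np.where(mask, W, 0.0)                             # adjacency matrix S_j
    Dg = np.diag(S.sum(axis=1))
    F = Dg - S                                             # Laplacian matrix F_j
    U = X_r @ F @ X_r.T                                    # symmetric matrix U_j (r x r)
    V = X_r @ Dg @ X_r.T                                   # matrix V_j (r x r)
    vals, P = eigh(U, V + 1e-8 * np.eye(V.shape[0]))       # generalized eigenproblem, ascending
    return P[:, :m].T @ X_r                                # dimension-reduced matrix X_j^m
```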
After the first-level subspace projection described above, the first-level mapping matrix contains the global structure information of the original high-dimensional data, and the dimension-reduced feature matrix obtained after the second-level subspace projection retains more of the local structure information. With the first- and second-level subspace projection method, the dimension-reduced feature matrix preserves both the global and the local structure of the original data while realizing feature dimensionality reduction, so the features of the low-resolution image blocks are expressed more effectively and super-resolution reconstruction becomes compact and efficient.
The specific procedure in step C3 above for calculating the j-th normalized embedding weight coefficient group $\{\hat{w}_{i''j}\}_{i''=1}^{k}$ from the j-th similarity group and the j-th scale-factor group is:
For each of the k training low-resolution feature vectors (1 ≤ i'' ≤ k) in the j-th neighbor low-resolution feature matrix obtained in step C3, compute its similarity with the j-th to-be-reconstructed low-resolution feature vector $x_j^t$ and the corresponding scale factor $c_{i''j}$; the similarity and scale factor are computed from the u-th luminance values of the training low-resolution feature vector and of the j-th to-be-reconstructed low-resolution feature vector $x_j^t$, and ε is a positive number between 1*10^-4 and 1*10^-2. The k computed similarities and scale factors $c_{i''j}$ form the j-th similarity group and the j-th scale-factor group $\{c_{i''j}\}_{i''=1}^{k}$ respectively. Combining the j-th similarity group and the j-th scale-factor group gives the j-th embedding weight coefficient group $\{w_{i''j}\}_{i''=1}^{k}$, which is normalized to obtain the j-th normalized embedding weight coefficient group $\{\hat{w}_{i''j}\}_{i''=1}^{k} = \{w_{i''j}/\sum_{i''=1}^{k} w_{i''j}\}_{i''=1}^{k}$.
The similarities and scale factors between the to-be-reconstructed low-resolution feature vector and its best-matching training low-resolution feature vectors constructed in this way yield more accurate weight coefficients that satisfy the requirement of being approximately proportional to the similarities. More high-frequency information can thus be obtained from the training database, which improves the estimation accuracy of the reconstructed high-resolution image blocks and leads to a better super-resolution reconstruction result.
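The exact similarity and scale-factor formulas appear only as equation images in the source text, so the sketch below shows the surrounding computation only, with placeholder definitions (an RBF-style similarity and a ratio of mean luminances) standing in for the patent's own formulas.

```python
import numpy as np

def embedding_weights(X_neigh, x_t, eps=1e-3):
    """Step C3 weight sketch: one similarity and one scale factor per neighbor,
    combined and normalized. The per-neighbor formulas here are placeholders."""
    k = X_neigh.shape[1]
    w = np.empty(k)
    for i in range(k):
        sim = np.exp(-np.mean((X_neigh[:, i] - x_t) ** 2))               # placeholder similarity
        scale = (np.mean(x_t) + eps) / (np.mean(X_neigh[:, i]) + eps)    # placeholder scale factor
        w[i] = sim * scale
    return w / w.sum()                                                   # normalized weights w_hat
```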
The invention is described in further detail below with reference to the accompanying drawings and an embodiment.
Description of the drawings
Fig. 1(a)-Fig. 1(f) show the training images and test images used in the embodiment of the invention.
Fig. 2(A)-Fig. 2(E) show the simulation results of image super-resolution reconstruction with different algorithms. In Fig. 2(A)-Fig. 2(E), the rectangle in the middle marks a local region of the reconstructed image and the box in the corner shows this region magnified 2 times; Fig. 2(A) is the result of the NESR algorithm (document 1), Fig. 2(B) of the NeedFS algorithm (document 2), Fig. 2(C) of the CSNE algorithm (document 3), Fig. 2(D) of the JLNE algorithm (document 4), and Fig. 2(E) of the method of the invention.
Fig. 3 shows histograms of the block root-mean-square error $E_p$ obtained when the four existing methods and the method of the invention perform super-resolution reconstruction on the six images (a)-(f) of Fig. 1.
Fig. 4 shows histograms of the block structural similarity $S_p$ obtained when the four existing methods and the method of the invention perform super-resolution reconstruction on the six images (a)-(f) of Fig. 1.
Detailed description of the embodiments
Embodiment
An image super-resolution reconstruction method based on subspace projection and neighborhood embedding comprises the following steps:
A. Training:
Take L high-resolution images of identical resolution and identical size as training high-resolution images, l = 1, 2, ..., L, L = 3~80. Partition each training high-resolution image into overlapping blocks to obtain N_1 training high-resolution image blocks of size z*z, N_1 = 1000~7000, z = 6, 9, 12, 15, giving N = L*N_1 training high-resolution image blocks in total. Extract the normalized luminance of each training high-resolution image block as a training high-resolution normalized-luminance image block, and convert the i-th such block, column by column, into the i-th training high-resolution feature vector $y_i^s$, i = 1, 2, ..., N. Each training high-resolution feature vector $y_i^s$ has dimension d_1 = z^2; all training high-resolution feature vectors $y_i^s$ (1 ≤ i ≤ N) form the training high-resolution feature matrix $Y^s = [y_1^s, y_2^s, \ldots, y_N^s]$.
Down-sample the l-th (1 ≤ l ≤ L) training high-resolution image by a factor a, a = 2, 3, 4, 5, to obtain the corresponding l-th training low-resolution image, and up-sample every training low-resolution image by the factor a to obtain the corresponding training interpolated image. Extract the first-order horizontal, first-order vertical, second-order horizontal and second-order vertical gradient features of every training interpolated image and partition these four gradient-feature images into overlapping blocks; each training interpolated image yields 4*N_1 training low-resolution gradient-feature image blocks of size z*z, giving 4*N training low-resolution gradient-feature image blocks in total. Every four training low-resolution gradient-feature image blocks form one training low-resolution gradient-feature image block group; convert the i-th group, column by column, into the i-th training low-resolution feature vector $x_i^s$. Each training low-resolution feature vector $x_i^s$ has dimension d_2 = 4*z^2; all training low-resolution feature vectors $x_i^s$ (1 ≤ i ≤ N) form the training low-resolution feature matrix $X^s = [x_1^s, x_2^s, \ldots, x_N^s]$.
B. Pre-processing:
Input the low-resolution image R_d to be reconstructed, whose resolution is the same as that of every training low-resolution image (1 ≤ l ≤ L). Up-sample R_d by the factor a to obtain the interpolated image R_c to be reconstructed, extract the first-order horizontal, first-order vertical, second-order horizontal and second-order vertical gradient features of R_c, and partition these four gradient-feature images into overlapping blocks, obtaining 4*N_2 to-be-reconstructed low-resolution gradient-feature image blocks of size z*z in total, N_2 = 1000~7000. Every four to-be-reconstructed low-resolution gradient-feature image blocks form one to-be-reconstructed low-resolution gradient-feature image block group; convert the j-th group, column by column, into the j-th to-be-reconstructed low-resolution feature vector $x_j^t$, j = 1, 2, ..., N_2. Each to-be-reconstructed low-resolution feature vector $x_j^t$ has the same dimension as the training low-resolution feature vectors $x_i^s$ (1 ≤ i ≤ N), namely d_2 = 4*z^2; all to-be-reconstructed low-resolution feature vectors $x_j^t$ (1 ≤ j ≤ N_2) form the to-be-reconstructed low-resolution feature matrix $X^t = [x_1^t, x_2^t, \ldots, x_{N_2}^t]$.
C. Super-resolution reconstruction:
C1. Sub-feature matrix generation: for each to-be-reconstructed low-resolution feature vector $x_j^t$ (1 ≤ j ≤ N_2) obtained in step B, search the training low-resolution feature matrix $X^s$ obtained in step A for the n-1 = 100~300 training low-resolution feature vectors with the smallest Euclidean distance to $x_j^t$. Take $x_j^t$ itself as the sub-feature vector $x_{1j}$, and take the n-1 found training low-resolution feature vectors, in order of increasing Euclidean distance to $x_j^t$, as the sub-feature vectors $x_{2j}, x_{3j}, \ldots, x_{nj}$, thereby forming the j-th sub-feature matrix $X_j = \{x_{i'j}\}_{i'=1}^{n}$.
C2. Dimension-reduced feature matrix generation: apply the first- and second-level subspace projection method to the j-th sub-feature matrix $X_j$ to perform feature dimensionality reduction: each sub-feature vector $x_{i'j}$ (1 ≤ i' ≤ n) is projected onto a low-dimensional subspace, giving n dimension-reduced feature vectors $x_{i'j}^m$ in total, each of dimension m < d_2. Each sub-feature vector $x_{i'j}$ is in one-to-one correspondence with its dimension-reduced feature vector $x_{i'j}^m$: after the first- and second-level subspace projection, the sub-feature vector $x_{i'j}$ of dimension d_2 is converted into the dimension-reduced feature vector $x_{i'j}^m$ of dimension m, realizing feature dimensionality reduction. All dimension-reduced feature vectors $x_{i'j}^m$ (1 ≤ i' ≤ n) form the j-th dimension-reduced feature matrix $X_j^m = \{x_{i'j}^m\}_{i'=1}^{n}$.
The specific procedure by which the first- and second-level subspace projection method above reduces the j-th sub-feature matrix $X_j$ to the j-th dimension-reduced feature matrix $X_j^m$ is:
First-level subspace projection:
Compute the j-th radial basis kernel matrix corresponding to the j-th sub-feature matrix $X_j$, $K_j = \{K_{i'j'}^{j}\}_{i',j'=1}^{n} = \{\exp(-\|x_{i'j} - x_{j'j}\|^2/\beta_j)\}_{i',j'=1}^{n}$, where $\beta_j$ is the parameter of the j-th radial basis kernel. Then center the j-th radial basis kernel matrix $K_j$ to obtain the j-th centered kernel matrix $\bar{K}_j = G K_j G$, where $G = I - \frac{1}{n}\mathbf{1}$, I is the n*n identity matrix and $\mathbf{1}$ is an n*n matrix all of whose entries are 1. Solve the descending eigenvalue decomposition of the j-th centered kernel matrix, $\bar{K}_j \alpha_{i'j} = \lambda_{i'j} \alpha_{i'j}$, i' = 1, 2, 3, ..., n, j = 1, 2, 3, ..., N_2, where $\lambda_{i'j}$ is the i'-th eigenvalue after sorting in descending order and $\alpha_{i'j}$ is the eigenvector corresponding to the i'-th eigenvalue $\lambda_{i'j}$. Orthogonalize each eigenvector $\alpha_{i'j}$ to obtain the orthogonalized eigenvectors, and select the orthogonalized eigenvectors corresponding to the first r largest eigenvalues $\lambda_{i'j}$, r = 50~n; the r selected orthogonalized eigenvectors form the j-th first-level subspace projection matrix, and the sum of the first r largest eigenvalues accounts for more than 99% of the sum of all eigenvalues. Applying the transposed j-th first-level subspace projection matrix (T denotes the matrix transpose) to the centered kernel matrix yields the j-th first-level mapping matrix $X_j^r$; every first-level mapping vector $x_{i'j}^r$ (1 ≤ i' ≤ n) has dimension r.
Second-level subspace projection:
Compute the j-th kernel distance matrix corresponding to the j-th first-level mapping matrix $X_j^r$, $D_j^r = \{D_{i'j'}^{jr}\}_{i',j'=1}^{n} = \{2 - 2\exp(-\|x_{i'j}^r - x_{j'j}^r\|^2/\gamma_j)\}_{i',j'=1}^{n}$, where $\gamma_j$ is the parameter of the j-th kernel distance function. Then construct the j-th adjacency matrix $S_j = \{S_{i'j'}^{j}\}_{i',j'=1}^{n}$: when $i' \in \Lambda_{j'j}$ or $j' \in \Lambda_{i'j}$, $S_{i'j'}^{j}$ takes the corresponding kernel similarity value; otherwise $S_{i'j'}^{j} = 0$. Here $\Lambda_{j'j}$ and $\Lambda_{i'j}$ denote the neighborhood label sets of $x_{j'j}^r$ and $x_{i'j}^r$ respectively, and each neighborhood label set contains b = 5~10 elements. Compute the j-th Laplacian matrix $F_j = \mathrm{diag}(S_j \mathbf{1}_n) - S_j$ corresponding to the j-th adjacency matrix $S_j$, where $\mathbf{1}_n$ denotes an n*1 column vector all of whose entries are 1 and diag(·) denotes the matrix diagonalization operation. Then compute the j-th symmetric matrix $U_j = X_j^r F_j (X_j^r)^T$ and the j-th matrix $V_j = X_j^r \,\mathrm{diag}(S_j \mathbf{1}_n)\, (X_j^r)^T$; both $U_j$ and $V_j$ are r*r matrices. Jointly solve the generalized ascending eigenvalue decomposition of $U_j$ and $V_j$, where $p_{i'j}$ is the eigenvector corresponding to the i'-th eigenvalue after sorting in ascending order. Select the eigenvectors $p_{i'j}$ corresponding to the first m smallest eigenvalues, m = 10~r; the m selected eigenvectors form the j-th second-level subspace projection matrix $P_j = [p_{1j}, p_{2j}, \ldots, p_{mj}]$, and projecting the first-level mapping matrix with it, $X_j^m = P_j^T X_j^r$, yields the j-th dimension-reduced feature matrix $X_j^m$. Every dimension-reduced feature vector $x_{i'j}^m$ (1 ≤ i' ≤ n) has dimension m.
C3. Weight coefficient calculation: in the set of dimension-reduced feature vectors $\{x_{i'j}^m\}_{i'=2}^{n}$ obtained in step C2, search for the k dimension-reduced feature vectors with the smallest Euclidean distance to the dimension-reduced feature vector $x_{1j}^m$, k = 5~10. Using the one-to-one correspondence between the sub-feature vectors $x_{i'j}$ of step C1 and the dimension-reduced feature vectors $x_{i'j}^m$ of step C2 together with the index numbers of the k found dimension-reduced feature vectors, locate the corresponding k training low-resolution and high-resolution feature vectors in the training low-resolution and high-resolution feature matrices $X^s$ and $Y^s$ obtained in step A; these k training low-resolution and high-resolution feature vectors form the j-th neighbor low-resolution feature matrix and the j-th neighbor high-resolution feature matrix respectively. From the j-th neighbor low-resolution feature matrix and the j-th to-be-reconstructed low-resolution feature vector $x_j^t$, calculate the j-th similarity group and the j-th scale-factor group $\{c_{i''j}\}_{i''=1}^{k}$, and combine them to calculate the j-th normalized embedding weight coefficient group $\{\hat{w}_{i''j}\}_{i''=1}^{k}$.
The specific procedure for calculating the j-th normalized embedding weight coefficient group $\{\hat{w}_{i''j}\}_{i''=1}^{k}$ from the j-th similarity group and the j-th scale-factor group above is:
For each of the k training low-resolution feature vectors (1 ≤ i'' ≤ k) in the j-th neighbor low-resolution feature matrix obtained in step C3, compute its similarity with the j-th to-be-reconstructed low-resolution feature vector $x_j^t$ and the corresponding scale factor $c_{i''j}$; the similarity and scale factor are computed from the u-th luminance values of the training low-resolution feature vector and of the j-th to-be-reconstructed low-resolution feature vector $x_j^t$, and ε is a positive number between 1*10^-4 and 1*10^-2. The k computed similarities and scale factors $c_{i''j}$ form the j-th similarity group and the j-th scale-factor group $\{c_{i''j}\}_{i''=1}^{k}$ respectively. Combining the j-th similarity group and the j-th scale-factor group gives the j-th embedding weight coefficient group $\{w_{i''j}\}_{i''=1}^{k}$, which is normalized to obtain the j-th normalized embedding weight coefficient group $\{\hat{w}_{i''j}\}_{i''=1}^{k} = \{w_{i''j}/\sum_{i''=1}^{k} w_{i''j}\}_{i''=1}^{k}$.
C4. Reconstruction of the high-resolution image block: according to the linear weighted combination of the j-th normalized embedding weight coefficient group $\{\hat{w}_{i''j}\}_{i''=1}^{k}$ and the j-th neighbor high-resolution feature matrix, estimate the j-th reconstructed high-resolution feature vector; then add the luminance mean of the j-th to-be-reconstructed low-resolution image block and convert the result back into image-block form to obtain the j-th reconstructed high-resolution image block.
Repeat steps C1 to C4 to obtain N_2 reconstructed high-resolution image blocks of size z*z in total; stitching the overlapping blocks yields the preliminary super-resolution image H_0.
D. Post-processing:
Apply the iterative back-projection algorithm to the preliminary super-resolution image H_0 for Q iterations, Q = 10~30, to obtain the final super-resolution image H*.
Simulation experiments:
The conditions and concrete parameters of the simulation experiments are as follows:
From Fig. 1, L = 5 natural images of size 384*510 are chosen as training high-resolution images, and the remaining image serves as the test reference image; rotating the roles in turn, six simulation experiments are carried out in total.
The sampling factor of the training high-resolution images is a = 3 and the image block size is z*z = 9*9; overlapping partition of the images yields N = L*N_1 = 5*5440 = 27200 image blocks in total, the dimension of the training high-resolution feature vectors is d_1 = z^2 = 9^2 = 81, and the dimension of the training low-resolution feature vectors is d_2 = 4*z^2 = 4*9^2 = 324.
The test reference image down-sampled by a factor of 3, of size 128*170, serves as the low-resolution image to be reconstructed. The interpolated image obtained by up-sampling it by a factor of 3 is partitioned into overlapping blocks, giving N_2 = 5440 low-resolution image blocks to be reconstructed, and the dimension of the to-be-reconstructed low-resolution feature vectors is d_2 = 324. Each sub-feature matrix $X_j$ (1 ≤ j ≤ 5440) contains n = 101 vectors, the dimension of the first-level mapping vectors is r = 100, the neighborhood label sets in the second-level subspace projection contain b = 5 elements, the dimension of the dimension-reduced feature vectors is m = 22, the number of dimension-reduced feature vectors searched is k = 9, ε = 1*10^-3, and the number of iterations used in post-processing is Q = 20.
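For reference, the concrete parameter values of this simulation experiment can be gathered into a single configuration; the key names below are informal shorthand, not notation from the patent.

```python
params = dict(
    L=5,        # training HR images of size 384*510
    a=3,        # sampling factor
    z=9,        # block size z*z = 9*9
    N1=5440,    # blocks per image; N = L*N1 = 27200
    d1=81,      # HR feature dimension z^2
    d2=324,     # LR feature dimension 4*z^2
    N2=5440,    # blocks to be reconstructed
    n=101,      # vectors per sub-feature matrix X_j
    r=100,      # first-level mapping dimension
    b=5,        # neighborhood label set size
    m=22,       # dimension-reduced feature dimension
    k=9,        # neighbors searched in the reduced space
    eps=1e-3,   # epsilon in the weight computation
    Q=20,       # iterative back-projection iterations
)
```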
Four existing methods, namely the NESR algorithm (document 1), the NeedFS algorithm (document 2), the CSNE algorithm (document 3) and the JLNE algorithm (document 4), and the method of the invention are used to perform super-resolution reconstruction of the six images (a)-(f) of Fig. 1 under the same simulation conditions.
The subjective visual comparison is shown in Fig. 2, which gives the simulation results only for the three images (b), (c) and (d) of Fig. 1. Fig. 2(A)-Fig. 2(D) are the results of documents 1-4 respectively, and Fig. 2(E) is the result of the method of the invention. In Fig. 2(A)-Fig. 2(E), the rectangle in the middle marks a local region of the reconstructed image and the box in the corner shows this region magnified 2 times. Comparing the magnified regions shows that the images reconstructed by the NESR, NeedFS and CSNE algorithms exhibit blurring and texture aliasing, that the texture edges in the image reconstructed by the JLNE algorithm are not smooth enough, and that the details in the image reconstructed by the method of the invention are clearer and its edges smoother. The method of the invention is thus superior to the four existing methods in subjective visual quality.
To evaluate the methods more objectively, the block root-mean-square error $E_p$ and the block structural similarity $S_p$ are used as objective indicators of reconstruction quality, computed as

$E_p = \frac{1}{N_2}\sum_{j=1}^{N_2}\sqrt{\frac{1}{d_1}\sum_{v=1}^{d_1}\left(y_{jv}-y_{jv}^{t}\right)^2}$

$S_p = \frac{1}{N_2}\sum_{j=1}^{N_2}\frac{(2\mu_{1j}\mu_{2j}+\epsilon_1)(2\sigma_{12j}+\epsilon_2)}{(\mu_{1j}^2+\mu_{2j}^2+\epsilon_1)(\sigma_{1j}^2+\sigma_{2j}^2+\epsilon_2)}$

In the first formula, $y_{jv}$ is the v-th luminance value of the j-th block of the original high-resolution image, $y_{jv}^{t}$ is the v-th luminance value of the j-th block of the super-resolution reconstructed image, $d_1$ is the number of luminance values in each block, equal to the dimension of the training high-resolution feature vectors, i.e. d_1 = 81, and N_2 is the number of reconstructed high-resolution image blocks.
In the second formula, $\mu_{1j}$ and $\mu_{2j}$ are the luminance means of the j-th blocks of the original high-resolution image and of the super-resolution reconstructed image, $\sigma_{1j}$ and $\sigma_{2j}$ are the corresponding standard deviations, $\sigma_{12j}$ is the covariance between the j-th block of the original high-resolution image and the j-th block of the super-resolution reconstructed image, and $\epsilon_1 = 6.5$, $\epsilon_2 = 58.5$.
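A sketch of how E_p and S_p could be computed from corresponding blocks of the original and reconstructed images follows; taking the square root per block and using the population (rather than sample) standard deviation and covariance are assumptions.

```python
import numpy as np

def block_metrics(orig_blocks, recon_blocks, eps1=6.5, eps2=58.5):
    """orig_blocks, recon_blocks: arrays of shape (N_2, d_1) with the luminance values
    of corresponding blocks of the original HR image and the reconstructed image."""
    diff = orig_blocks - recon_blocks
    E_p = np.mean(np.sqrt(np.mean(diff ** 2, axis=1)))                   # block RMSE, averaged

    mu1, mu2 = orig_blocks.mean(axis=1), recon_blocks.mean(axis=1)
    s1, s2 = orig_blocks.std(axis=1), recon_blocks.std(axis=1)
    cov = np.mean((orig_blocks - mu1[:, None]) * (recon_blocks - mu2[:, None]), axis=1)
    S_p = np.mean((2 * mu1 * mu2 + eps1) * (2 * cov + eps2)
                  / ((mu1 ** 2 + mu2 ** 2 + eps1) * (s1 ** 2 + s2 ** 2 + eps2)))
    return E_p, S_p
```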
Fig. 3 shows histograms of the block root-mean-square error $E_p$ obtained when the four existing methods and the method of the invention reconstruct the six images (a)-(f) of Fig. 1. The $E_p$ values of the six images reconstructed by the method of the invention are all lower than those of the four existing methods, showing that its reconstructed images differ least from the original high-resolution images and that its reconstruction effect is the best.
Fig. 4 shows histograms of the block structural similarity $S_p$ for the same experiments. The $S_p$ values of the six images reconstructed by the method of the invention are all higher than those of the four existing methods, which again shows that the method of the invention reconstructs more high-frequency information and comes closer to the original high-resolution image.
These simulation results show that, in terms of both subjective visual quality and objective evaluation indicators, the method of the invention outperforms the four existing methods and is feasible and applicable to image super-resolution reconstruction.

Claims (3)

1. image super-resolution rebuilding method that embeds based on subspace projection and neighborhood may further comprise the steps:
A, training:
L width of cloth resolution is identical, that size is identical high-definition picture is as the training high-definition picture
Figure FDA0000139605950000011
L=1,2 ..., L, L=3~80; To every width of cloth training high-definition picture
Figure FDA0000139605950000012
Carry out obtaining N behind the overlapping piecemeal 1Individual size is the training high-definition picture piece of z*z, N 1=1000~7000, z=6,9,12,15, obtain N=L*N altogether 1Individual training high-definition picture piece; The standardization brightness that extracts each training high-definition picture piece is as a training high resolving power standardization brightness image block, converts i training high resolving power standardization brightness image block into i by the order that be listed as and trains high-resolution features vectorial
Figure FDA0000139605950000013
I=1,2 ..., N, each training high-resolution features vector Dimension be d 1=z 2, all training high-resolution features vectors (1≤i≤N) form to train the high-resolution features matrix
Figure FDA0000139605950000016
(1≤l≤L) width of cloth is trained high-definition picture to l Do a times of down-sampling and handle, a=2,3,4,5, obtain corresponding l width of cloth training low-resolution image
Figure FDA0000139605950000018
Again to every width of cloth training low-resolution image
Figure FDA0000139605950000019
Do and obtain corresponding training interpolation image after a times of up-sampling handled
Figure FDA00001396059500000110
Extract every width of cloth training interpolation image
Figure FDA00001396059500000111
The vertical and second order level of single order level, single order, second order VG (vertical gradient) characteristic, simultaneously this four width of cloth gradient characteristic image is carried out overlapping piecemeal, every width of cloth training interpolation image
Figure FDA00001396059500000112
Obtain 4*N behind the overlapping piecemeal 1Individual size is the training low resolution gradient characteristic image piece of z*z; Obtain 4*N training low resolution gradient characteristic image piece altogether; Per four training low resolution gradient characteristic image pieces are as a training low resolution gradient characteristic image piece group, convert i training low resolution gradient characteristic image piece group into i by the order that is listed as and train the low resolution proper vector
Figure FDA00001396059500000113
Each training low resolution proper vector
Figure FDA00001396059500000114
Dimension be d 2=4*z 2, all training low resolution proper vectors
Figure FDA00001396059500000115
(1≤i≤N) form to train the low resolution eigenmatrix X s = [ x 1 s , x 2 s , · · · , x N s ] ;
B, pre-service:
Input waits to rebuild low-resolution image R d, its resolution and all training low-resolution images (resolution of 1≤l≤L) is identical, will wait to rebuild low-resolution image R dCarry out obtaining waiting to rebuild interpolation image R after a times of up-sampling handled c, extract and wait to rebuild interpolation image R cThe vertical and second order level of single order level, single order, second order VG (vertical gradient) characteristic, simultaneously this four width of cloth gradient characteristic image is carried out obtaining 4*N altogether behind the overlapping piecemeal 2Individual size be z*z wait to rebuild low resolution gradient characteristic image piece, N 2=1000~7000; Wait to rebuild low resolution gradient characteristic image piece and wait to rebuild low resolution gradient characteristic image piece group for per four, wait to rebuild low resolution gradient characteristic image piece group with j and convert by the order that be listed as that j is individual to wait to rebuild the low resolution proper vector into as one
Figure FDA0000139605950000022
J=1,2 ..., N 2, each waits to rebuild the low resolution proper vector
Figure FDA0000139605950000023
Dimension and all training low resolution proper vector
Figure FDA0000139605950000024
(dimension of 1≤i≤N) is identical, is d 2=4*z 2, institute remains to be rebuild the low resolution proper vector
Figure FDA0000139605950000025
(1≤j≤N 2) form one and wait to rebuild the low resolution eigenmatrix X t = [ x 1 t , x 2 t , · · · , x N 2 t ] ;
C, super-resolution rebuilding:
C1, subcharacter matrix generate: each that obtains among the step B waited to rebuild the low resolution proper vector
Figure FDA0000139605950000027
(1≤j≤N 2), the training low resolution eigenmatrix that obtains in steps A successively
Figure FDA0000139605950000028
In seek out with j and wait to rebuild the low resolution proper vector
Figure FDA0000139605950000029
Between the minimum n-1=100~300 training low resolution proper vector of Euclidean distance, wait to rebuild the low resolution proper vector with j As the subcharacter vector x 1j, and with the n-1 that searches out a training low resolution proper vector by with j wait to rebuild the low resolution proper vector
Figure FDA00001396059500000211
Between Euclidean distance from small to large successively as the subcharacter vector x 2j, x 3j..., x Nj, form j sub-eigenmatrix thus
C2. Dimension-reduced feature matrix generation: apply the first- and second-level subspace projection method to perform feature dimensionality reduction on the j-th sub-feature matrix X_j; each sub-feature vector x_{i'j} (1 ≤ i' ≤ n) is projected onto a low-dimensional subspace, giving n dimension-reduced feature vectors x_{i'j}^m in total; each dimension-reduced feature vector x_{i'j}^m has dimension m < d_2; each sub-feature vector x_{i'j} and its corresponding dimension-reduced feature vector x_{i'j}^m form a one-to-one mapping; after the first- and second-level subspace projection, the sub-feature vector x_{i'j} of dimension d_2 is converted into the dimension-reduced feature vector x_{i'j}^m of dimension m, realizing the feature dimensionality reduction; all dimension-reduced feature vectors x_{i'j}^m (1 ≤ i' ≤ n) form the j-th dimension-reduced feature matrix X_j^m = {x_{i'j}^m}_{i'=1}^n;
C3. Weight coefficient calculation: in the set of dimension-reduced feature vectors {x_{i'j}^m}_{i'=1}^n obtained in step C2, find the k dimension-reduced feature vectors with the smallest Euclidean distance to the dimension-reduced feature vector corresponding to the j-th to-be-reconstructed low-resolution feature vector, k = 5~10; according to the one-to-one mapping between the sub-feature vectors x_{i'j} of step C1 and the dimension-reduced feature vectors x_{i'j}^m of step C2, and the index numbers of the k dimension-reduced feature vectors found, successively search the training low-resolution and high-resolution feature matrices X^s and Y^s obtained in step A for the corresponding k training low-resolution and high-resolution feature vectors, and form from these k training low-resolution and high-resolution feature vectors the j-th neighbor low-resolution feature matrix and the j-th neighbor high-resolution feature matrix respectively; from the j-th neighbor low-resolution feature matrix and the j-th to-be-reconstructed low-resolution feature vector x_j^t, calculate the j-th similarity group and the j-th scale factor group, and combine the j-th similarity group and the j-th scale factor group to calculate the j-th normalized embedding weight coefficient group {ŵ_{i''j}}_{i''=1}^k;
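As an illustration of the neighbour search at the start of step C3, the sketch below finds the k nearest columns in the reduced space and gathers the corresponding training low- and high-resolution feature vectors; the bookkeeping array `sub_index` is an assumed helper recording which training columns entered the sub-feature matrix.

```python
import numpy as np

def neighbour_matrices(X_j_m, sub_index, X_s, Y_s, k):
    """Find the k columns of the reduced matrix closest to its first column (the query),
    then gather the corresponding training LR/HR feature vectors via sub_index."""
    q = X_j_m[:, 0]                                           # reduced query vector
    dist2 = np.sum((X_j_m[:, 1:] - q[:, None]) ** 2, axis=0)  # distances to candidates
    nearest = np.argsort(dist2)[:k]                           # local indices of k nearest
    cols = np.asarray(sub_index)[nearest]                     # original training indices
    return X_s[:, cols], Y_s[:, cols]                         # neighbour LR / HR matrices
```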
C4. High-resolution image block reconstruction: according to the linear weighting of the j-th normalized embedding weight coefficient group and the j-th neighbor high-resolution feature matrix, estimate the j-th reconstructed high-resolution feature vector; then add the brightness mean of the j-th to-be-reconstructed low-resolution image block to it and convert the result into image block form, which yields the j-th reconstructed high-resolution image block;
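The linear weighting of step C4 reduces to a matrix-vector product; a short sketch, assuming the high-resolution feature vector stores the z*z block brightness values with the mean removed:

```python
import numpy as np

def reconstruct_hr_block(w_hat, Y_neigh, block_mean, z):
    """Weighted sum of the k neighbour HR feature vectors, plus the block brightness
    mean, reshaped to a z*z image block (column order, matching the vectorization)."""
    y = Y_neigh @ w_hat + block_mean     # linear weighting, then restore the mean
    return y.reshape((z, z), order='F')
```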
Repeat the above steps C1~C4 to obtain N_2 reconstructed high-resolution image blocks of size z*z in total, and obtain the preliminary super-resolution image H_0 after overlapping splicing;
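The merge rule for overlapping splicing is not spelled out in the text reproduced here; the sketch below assumes the common choice of averaging overlapping pixels.

```python
import numpy as np

def overlap_splice(blocks, positions, out_shape, z):
    """Place the reconstructed z*z blocks at their (row, col) positions and average
    the overlapping pixels to form the preliminary image H0 (averaging is assumed)."""
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    for blk, (r, c) in zip(blocks, positions):
        acc[r:r + z, c:c + z] += blk
        cnt[r:r + z, c:c + z] += 1
    return acc / np.maximum(cnt, 1)
```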
D. Post-processing:
Apply the iterative back-projection algorithm to the preliminary super-resolution image H_0 and perform Q iterations of computation, Q = 10~30, to obtain the final super-resolution image H*.
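A sketch of iterative back-projection under an assumed degradation model (block-average down-sampling by an integer factor); the claim itself only fixes the number of iterations Q.

```python
import numpy as np

def iterative_back_projection(H0, lr, scale, Q=20, step=1.0):
    """Refine H0: simulate the LR observation by block averaging (assumed model),
    back-project the residual with nearest-neighbour up-sampling, repeat Q times.
    Assumes H0.shape == (lr.shape[0] * scale, lr.shape[1] * scale), scale integer."""
    H = H0.astype(float).copy()
    for _ in range(Q):
        sim = H.reshape(lr.shape[0], scale, lr.shape[1], scale).mean(axis=(1, 3))
        H += step * np.kron(lr - sim, np.ones((scale, scale)))   # back-projected error
    return H
```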
2. The image super-resolution reconstruction method according to claim 1, characterized in that the specific procedure in step C2 of applying the first- and second-level subspace projection method to perform feature dimensionality reduction on the j-th sub-feature matrix X_j and obtain the j-th dimension-reduced feature matrix is:
First-level subspace projection:
Calculate the j-th radial basis kernel function matrix K_j = {exp(-||x_{i'j} - x_{j'j}||^2 / γ_j)}_{i',j'=1}^n corresponding to the j-th sub-feature matrix X_j, where γ_j is the parameter of the j-th radial basis kernel function; then perform centering on the j-th radial basis kernel function matrix K_j to obtain the j-th centered kernel function matrix, using G = I - (1/n)·1, where I is an identity matrix of size n*n and 1 is an n*n matrix whose values are all 1; solve the descending-order eigenvalue decomposition equation of the j-th centered kernel function matrix, i' = 1, 2, 3, ..., n, j = 1, 2, 3, ..., N_2, where λ_{i'j} is the i'-th eigenvalue after descending sorting and α_{i'j} is the eigenvector corresponding to the i'-th eigenvalue λ_{i'j}; then perform orthogonalization on the i'-th eigenvector α_{i'j} to obtain the corresponding orthogonalized eigenvector; choose the orthogonalized eigenvectors corresponding to the first r largest eigenvalues λ_{i'j}, r = 50~n, and let the r selected orthogonalized eigenvectors form the j-th first-level subspace projection matrix, where the sum of the said first r largest eigenvalues accounts for more than 99% of the sum of all the eigenvalues; the j-th first-level mapping matrix X_j^r is then obtained by applying the transposed first-level subspace projection matrix (T denotes the matrix transpose operation) to the centered kernel function matrix, and all first-level mapping vectors x_{i'j}^r (1 ≤ i' ≤ n) have dimension r;
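A compact sketch of the first-level projection as described above (radial-basis kernel, centering with G, descending eigendecomposition, 99% energy criterion). The kernel parameter choice and the final mapping formula are assumptions modelled on standard kernel PCA, since the claim's own equations for these quantities are rendered as images here.

```python
import numpy as np

def first_level_projection(X_j, energy=0.99):
    """Kernel-PCA-style first-level projection of the sub-feature matrix X_j
    (columns are the n sub-feature vectors); returns the r-dimensional mapping."""
    n = X_j.shape[1]
    d2 = np.sum((X_j[:, :, None] - X_j[:, None, :]) ** 2, axis=0)  # pairwise sq. dists
    gamma = d2.mean()                               # assumed kernel parameter choice
    K = np.exp(-d2 / gamma)                         # radial basis kernel matrix K_j
    G = np.eye(n) - np.ones((n, n)) / n
    Kc = G @ K @ G                                  # centred kernel matrix
    lam, alpha = np.linalg.eigh(Kc)                 # ascending eigenvalues
    lam, alpha = lam[::-1], alpha[:, ::-1]          # reorder to descending
    lam = np.clip(lam, 0.0, None)
    r = int(np.searchsorted(np.cumsum(lam) / lam.sum(), energy)) + 1   # 99% criterion
    A = alpha[:, :r] / np.sqrt(np.maximum(lam[:r], 1e-12))  # orthonormalised eigvecs
    return A.T @ Kc                                 # first-level mapping, shape (r, n)
```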
Second-level subspace projection:
Calculate the j-th kernel distance function matrix D_j^r = {D_{i'j'}^{jr}}_{i',j'=1}^n = {2 - 2·exp(-||x_{i'j}^r - x_{j'j}^r||^2 / γ_j)}_{i',j'=1}^n corresponding to the j-th first-level mapping matrix X_j^r, where γ_j here denotes the parameter of the j-th kernel distance function; then construct the j-th adjacency matrix S_j: when i' ∈ Λ_{j'j} or j' ∈ Λ_{i'j}, the entry S_{i'j'}^j is set to the prescribed neighborhood weight; otherwise S_{i'j'}^j = 0, where Λ_{j'j} and Λ_{i'j} denote the neighborhood label sets of the corresponding first-level mapping vectors, and the number of elements of a neighborhood label set is b = 5~10; calculate the j-th Laplacian matrix F_j = diag(S_j · 1_n) - S_j corresponding to the j-th adjacency matrix S_j, where 1_n denotes an n*1 column vector whose values are all 1 and diag(·) denotes the matrix diagonalization operation; then calculate the j-th symmetric matrix U_j = X_j^r · F_j · (X_j^r)^T and the j-th diagonal matrix V_j = X_j^r · diag(S_j · 1_n) · (X_j^r)^T respectively, the j-th symmetric matrix U_j and the j-th diagonal matrix V_j both being matrices of size r*r; jointly solve the generalized ascending-order eigenvalue decomposition equation of U_j and V_j, in which the eigenvalues are arranged in ascending order and p_{i'j} is the eigenvector corresponding to the i'-th eigenvalue; choose the eigenvectors p_{i'j} corresponding to the first m smallest eigenvalues, m = 10~r, and let the m selected eigenvectors form the j-th second-level subspace projection matrix P_j = [p_{1j}, p_{2j}, ..., p_{mj}]; the j-th dimension-reduced feature matrix X_j^m is then obtained by projecting the j-th first-level mapping matrix with the transposed second-level subspace projection matrix, i.e. X_j^m = (P_j)^T · X_j^r; all dimension-reduced feature vectors x_{i'j}^m (1 ≤ i' ≤ n) have dimension m.
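The second-level projection follows the locality-preserving pattern described above; in the sketch below, the adjacency weight and the kernel-distance parameter are assumptions, since the corresponding formulas are rendered as images here.

```python
import numpy as np
from scipy.linalg import eigh

def second_level_projection(X_j_r, b=7, m=10):
    """LPP-style second-level projection of the first-level mapping X_j_r (shape (r, n));
    returns the m-dimensional reduced feature matrix (shape (m, n))."""
    r, n = X_j_r.shape
    d2 = np.sum((X_j_r[:, :, None] - X_j_r[:, None, :]) ** 2, axis=0)
    gamma = d2.mean()                                 # assumed parameter choice
    D = 2.0 - 2.0 * np.exp(-d2 / gamma)               # kernel distance matrix D_j^r
    S = np.zeros((n, n))                              # adjacency matrix S_j
    idx = np.argsort(D, axis=1)[:, 1:b + 1]           # b nearest neighbours of each point
    for i in range(n):
        S[i, idx[i]] = np.exp(-D[i, idx[i]])          # assumed heat-kernel weight
    S = np.maximum(S, S.T)                            # "i' in N(j') or j' in N(i')" rule
    deg = np.diag(S.sum(axis=1))
    F = deg - S                                       # Laplacian F_j = diag(S*1_n) - S_j
    U = X_j_r @ F @ X_j_r.T                           # symmetric matrix U_j (r*r)
    V = X_j_r @ deg @ X_j_r.T                         # matrix V_j (r*r)
    w, P = eigh(U, V + 1e-8 * np.eye(r))              # generalized eigenproblem, ascending
    return P[:, :m].T @ X_j_r                         # keep the m smallest eigenvectors
```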
3. The image super-resolution reconstruction method according to claim 1, characterized in that the specific procedure in step C3 of combining the j-th similarity group and the j-th scale factor group to calculate the j-th normalized embedding weight coefficient group is:
Respectively calculate the similarity and the corresponding scale factor c_{i''j} between each of the k training low-resolution feature vectors (1 ≤ i'' ≤ k) in the j-th neighbor low-resolution feature matrix obtained in step C3 and the j-th to-be-reconstructed low-resolution feature vector x_j^t, in which the quantities entering the similarity and scale factor formulas denote respectively the u-th brightness value of the training low-resolution feature vector and the u-th brightness value of the j-th to-be-reconstructed low-resolution feature vector, and ε is a positive number between 1*10^-4 and 1*10^-2; the k computed similarities and scale factors c_{i''j} form the j-th similarity group and the j-th scale factor group respectively; combine the j-th similarity group and the j-th scale factor group to calculate the j-th embedding weight coefficient group {w_{i''j}}_{i''=1}^k, and perform normalization on the j-th embedding weight coefficient group {w_{i''j}}_{i''=1}^k to obtain the j-th normalized embedding weight coefficient group {ŵ_{i''j}}_{i''=1}^k = {w_{i''j} / Σ_{i''=1}^k w_{i''j}}_{i''=1}^k.
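The similarity and scale-factor formulas themselves are given by the claim's equations and are not reproduced in this text; the sketch below therefore uses stand-in definitions (inverse-distance similarity, norm-ratio scale factor) purely to show how a combined weight could be formed and normalized, which is the part the claim does spell out.

```python
import numpy as np

def embedding_weights(X_neigh, x_t, eps=1e-3):
    """Illustrative weight computation: a similarity and a scale factor per neighbour,
    combined and normalized. The similarity/scale-factor definitions below are
    stand-ins, not the patent's own formulas."""
    k = X_neigh.shape[1]
    w = np.empty(k)
    for i in range(k):
        xi = X_neigh[:, i]
        sim = 1.0 / (np.sum((xi - x_t) ** 2) + eps)                        # stand-in similarity
        scale = np.sqrt((np.dot(x_t, x_t) + eps) / (np.dot(xi, xi) + eps))  # stand-in scale
        w[i] = sim * scale                                                  # combined weight
    return w / w.sum()                                                      # normalization
```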
CN201210049804.1A 2012-02-29 2012-02-29 Image super resolution (SR) reconstruction method based on subspace projection and neighborhood embedding Expired - Fee Related CN102629374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210049804.1A CN102629374B (en) 2012-02-29 2012-02-29 Image super resolution (SR) reconstruction method based on subspace projection and neighborhood embedding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210049804.1A CN102629374B (en) 2012-02-29 2012-02-29 Image super resolution (SR) reconstruction method based on subspace projection and neighborhood embedding

Publications (2)

Publication Number Publication Date
CN102629374A true CN102629374A (en) 2012-08-08
CN102629374B CN102629374B (en) 2014-05-21

Family

ID=46587632

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210049804.1A Expired - Fee Related CN102629374B (en) 2012-02-29 2012-02-29 Image super resolution (SR) reconstruction method based on subspace projection and neighborhood embedding

Country Status (1)

Country Link
CN (1) CN102629374B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103236049A (en) * 2013-05-05 2013-08-07 西安电子科技大学 Partial K space image reconstruction method based on sequence similarity interpolation
CN103780863A (en) * 2014-01-17 2014-05-07 Tcl集团股份有限公司 High-resolution image generation method and device
CN104182931A (en) * 2013-05-21 2014-12-03 北京大学 Super resolution method and device
CN104484418A (en) * 2014-12-17 2015-04-01 中国科学技术大学 Characteristic quantification method and system based on double resolution factors
CN105046664A (en) * 2015-07-13 2015-11-11 广东工业大学 Image denoising method based on self-adaptive EPLL algorithm
CN105069825A (en) * 2015-08-14 2015-11-18 厦门大学 Image super resolution reconstruction method based on deep belief network
CN106228583A (en) * 2016-07-14 2016-12-14 北京大学 The device that a kind of abdomen images is rebuild
CN106920214A (en) * 2016-07-01 2017-07-04 北京航空航天大学 Spatial target images super resolution ratio reconstruction method
CN107066984A (en) * 2017-04-20 2017-08-18 四川大学 Algorithm for gait recognition based on subspace integrated study
CN108537264A (en) * 2018-03-30 2018-09-14 西安电子科技大学 Heterologous image matching method based on deep learning
CN108616590A (en) * 2018-04-26 2018-10-02 清华大学 The iteration accidental projection algorithm and device of 1000000000 scale networks insertion
CN109636727A (en) * 2018-12-17 2019-04-16 辽宁工程技术大学 A kind of super-resolution rebuilding image spatial resolution evaluation method
WO2021000471A1 (en) * 2019-06-29 2021-01-07 苏州浪潮智能科技有限公司 High-resolution image matching method and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070019887A1 (en) * 2004-06-30 2007-01-25 Oscar Nestares Computing a higher resolution image from multiple lower resolution images using model-base, robust bayesian estimation
CN101976435A (en) * 2010-10-07 2011-02-16 西安电子科技大学 Combination learning super-resolution method based on dual constraint

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HONG CHANG ET AL: "Super-Resolution Through Neighbor Embedding", Proceedings of the 2004 IEEE Computer Society Conference, vol. 1, 27 June 2004 (2004-06-27), pages 1-8 *
LI MIN ET AL: "Super-Resolution Reconstruction Algorithm with Non-Local Joint Sparse Approximation", Journal of Electronics & Information Technology, vol. 33, no. 6, 30 June 2011 (2011-06-30), pages 1407-1412 *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103236049A (en) * 2013-05-05 2013-08-07 西安电子科技大学 Partial K space image reconstruction method based on sequence similarity interpolation
CN103236049B (en) * 2013-05-05 2015-10-28 西安电子科技大学 Based on the partial K space image reconstruction method of sequence similarity interpolation
CN104182931A (en) * 2013-05-21 2014-12-03 北京大学 Super resolution method and device
CN104182931B (en) * 2013-05-21 2017-04-26 北京大学 Super resolution method and device
CN103780863A (en) * 2014-01-17 2014-05-07 Tcl集团股份有限公司 High-resolution image generation method and device
CN103780863B (en) * 2014-01-17 2017-08-25 Tcl集团股份有限公司 A kind of high-definition picture generation method and device
CN104484418A (en) * 2014-12-17 2015-04-01 中国科学技术大学 Characteristic quantification method and system based on double resolution factors
CN104484418B (en) * 2014-12-17 2017-10-31 中国科学技术大学 A kind of characteristic quantification method and system based on dual resolution design
CN105046664A (en) * 2015-07-13 2015-11-11 广东工业大学 Image denoising method based on self-adaptive EPLL algorithm
CN105046664B (en) * 2015-07-13 2018-05-25 广东工业大学 A kind of image de-noising method based on adaptive EPLL algorithms
CN105069825A (en) * 2015-08-14 2015-11-18 厦门大学 Image super resolution reconstruction method based on deep belief network
CN105069825B (en) * 2015-08-14 2018-06-12 厦门大学 Image super-resolution rebuilding method based on depth confidence network
CN106920214A (en) * 2016-07-01 2017-07-04 北京航空航天大学 Spatial target images super resolution ratio reconstruction method
CN106920214B (en) * 2016-07-01 2020-04-14 北京航空航天大学 Super-resolution reconstruction method for space target image
CN106228583A (en) * 2016-07-14 2016-12-14 北京大学 The device that a kind of abdomen images is rebuild
CN107066984A (en) * 2017-04-20 2017-08-18 四川大学 Algorithm for gait recognition based on subspace integrated study
CN108537264A (en) * 2018-03-30 2018-09-14 西安电子科技大学 Heterologous image matching method based on deep learning
CN108616590A (en) * 2018-04-26 2018-10-02 清华大学 The iteration accidental projection algorithm and device of 1000000000 scale networks insertion
CN108616590B (en) * 2018-04-26 2020-07-31 清华大学 Billion-scale network embedded iterative random projection algorithm and device
CN109636727A (en) * 2018-12-17 2019-04-16 辽宁工程技术大学 A kind of super-resolution rebuilding image spatial resolution evaluation method
WO2021000471A1 (en) * 2019-06-29 2021-01-07 苏州浪潮智能科技有限公司 High-resolution image matching method and system

Also Published As

Publication number Publication date
CN102629374B (en) 2014-05-21

Similar Documents

Publication Publication Date Title
CN102629374B (en) Image super resolution (SR) reconstruction method based on subspace projection and neighborhood embedding
Wu et al. Fourier-based rotation-invariant feature boosting: An efficient framework for geospatial object detection
Wu et al. ORSIm detector: A novel object detection framework in optical remote sensing imagery using spatial-frequency channel features
CN108510532B (en) Optical and SAR image registration method based on deep convolution GAN
El Amin et al. Convolutional neural network features based change detection in satellite images
CN105069811B (en) A kind of Multitemporal Remote Sensing Images change detecting method
Chen et al. Convolutional neural network based dem super resolution
He et al. Learning group-based sparse and low-rank representation for hyperspectral image classification
CN114187520B (en) Building extraction model construction and application method
CN109034213B (en) Hyperspectral image classification method and system based on correlation entropy principle
Yuan et al. ROBUST PCANet for hyperspectral image change detection
Li et al. An aerial image segmentation approach based on enhanced multi-scale convolutional neural network
CN114048810A (en) Hyperspectral image classification method based on multilevel feature extraction network
Finnveden et al. Understanding when spatial transformer networks do not support invariance, and what to do about it
CN115240072A (en) Hyperspectral multi-class change detection method based on multidirectional multi-scale spectrum-space residual convolution neural network
Khurshid et al. A residual-dyad encoder discriminator network for remote sensing image matching
Duan et al. Multi-scale convolutional neural network for SAR image semantic segmentation
Wang et al. Detecting occluded and dense trees in urban terrestrial views with a high-quality tree detection dataset
CN112381144B (en) Heterogeneous deep network method for non-European and Euclidean domain space spectrum feature learning
CN117115675A (en) Cross-time-phase light-weight spatial spectrum feature fusion hyperspectral change detection method, system, equipment and medium
Huang et al. Rotation and scale-invariant object detector for high resolution optical remote sensing images
CN113887656B (en) Hyperspectral image classification method combining deep learning and sparse representation
Yue et al. Remote-sensing image super-resolution using classifier-based generative adversarial networks
Yuan et al. Graph neural network based multi-feature fusion for building change detection
Semnani et al. House price prediction using satellite imagery

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140521

Termination date: 20190228

CF01 Termination of patent right due to non-payment of annual fee