CN103903241B - Image super-resolution method based on samples with adjacent edges - Google Patents

Publication number: CN103903241B (other version: CN103903241A)
Application number: CN201410141448.5A
Inventors: 端木春江, 王泽思
Assignee: Zhejiang Normal University (CJNU)
Original language: Chinese (zh)
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Abstract

The invention discloses a sample-based image super-resolution method. The samples in this method differ from those of traditional methods: an edge is added around the traditional low-resolution sample. During on-line processing, the method does not enlarge each image block independently, as traditional methods do, but uses an enlargement procedure in which successive enlarged regions overlap. To this end, the invention defines a new matching criterion that optimally selects, from the database, the stored high-resolution block corresponding to each region to be enlarged. Finally, the high-resolution pixel values in the overlap regions are weighted-averaged to obtain the final high-resolution image corresponding to the given low-resolution image.

Description

Image super-resolution method based on samples with adjacent edges
Technical Field
The present invention relates to super-resolution in image processing: given an image, the goal is to obtain an enlarged version of it, and the sharper the enlarged image the better. The technology can be applied, for example, to enlarging small images stored on the Internet at reduced size due to bandwidth limitations, and has wide application.
Background
Current super-resolution methods fall into two types. One type is interpolation-based; the other is sample-based (example-based). Although interpolation-based super-resolution is simple and of low complexity, the resulting enlarged image is generally blurry, with unclear edges. To overcome this disadvantage, sample-based super-resolution was proposed. Such methods consist of two steps: the first builds an image sample database, and the second performs sample-based super-resolution. The first step is an off-line training process. A batch of sharp, high-resolution images is collected, and for each high-resolution image a corresponding low-resolution image is produced by down-sampling or filtering. The low-resolution and high-resolution images are then divided into blocks; the blocks of the low-resolution images may be, for example, 4 × 4 or 5 × 5, and for each block in a low-resolution image the corresponding block can be found among the high-resolution image blocks. For example, at a magnification of 2, for each 4 × 4 block in the low-resolution image there is a corresponding (matching) 8 × 8 block in the high-resolution image. Thus, when the size of the low-resolution image block is N × N, the pixel values in the low-resolution image block form the set:
q(x, y, k) = { f_L(x, y, k, i, j) | x ≤ i ≤ x+N-1, y ≤ j ≤ y+N-1 }
where k denotes the k-th image in the training library, (x, y) is the position of the top-left corner of the image block in the low-resolution image, and f_L(x, y, k, i, j) is the value of the pixel at (i, j) of the k-th low-resolution image. Corresponding to the low-resolution image block represented by the set q(x, y, k), there is a set of pixel values of the high-resolution image block:
Q(x, y, k) = { f_H(x, y, k, i, j) | 2x ≤ i ≤ 2x+2N-1, 2y ≤ j ≤ 2y+2N-1 }
where f_H(x, y, k, i, j) is the value of the pixel at (i, j) of the k-th high-resolution image. The block represented by each low-resolution set q(x, y, k), the block represented by its matching high-resolution set Q(x, y, k), and their correspondence can then be stored in a sample database. Each stored low-resolution block is referred to as a sample. After every high-resolution image in the training library has been processed in this way, the result is a sample database describing a large set of low-resolution blocks (samples) and their corresponding high-resolution blocks.
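The off-line training step described above can be sketched as follows. This is a minimal illustration under assumed choices (2x magnification, block-averaging as the down-sampling filter); the function names `downsample` and `build_sample_database` are illustrative, not from the patent.

```python
import numpy as np

def downsample(hi: np.ndarray) -> np.ndarray:
    """Simple 2x2 block-averaging downsampler (one possible filter choice)."""
    h, w = hi.shape
    return hi[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

def build_sample_database(hi_images, N=4):
    """Return a list of (low_block, high_block) pairs: N x N samples
    paired with their co-located 2N x 2N high-resolution blocks."""
    db = []
    for k, hi in enumerate(hi_images):
        lo = downsample(hi)
        H, W = lo.shape
        for y in range(H - N + 1):
            for x in range(W - N + 1):
                q = lo[y:y+N, x:x+N]              # low-resolution sample q(x,y,k)
                Q = hi[2*y:2*y+2*N, 2*x:2*x+2*N]  # matching high-res block Q(x,y,k)
                db.append((q, Q))
    return db
```

Each database entry directly mirrors the (q, Q) correspondence of the text: an N × N sample and its 2N × 2N high-resolution counterpart.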
After the training database is built, on-line super-resolution can be performed on given low-resolution images that are not in the training database. In this step, the low-resolution image to be enlarged is first divided into blocks of the same size as the low-resolution blocks in the sample database. That is, if the samples in the database are all of size 4 × 4, the low-resolution image to be enlarged is divided into 4 × 4 blocks. Then, for each block to be enlarged, the sample closest to it is found in the database. That is, for the set of pixels in the block to be enlarged
s(x_0, y_0) = { g_L(i, j) | x_0 ≤ i ≤ x_0+N-1, y_0 ≤ j ≤ y_0+N-1 }
first, the sum of absolute differences (SAD) between the block to be enlarged and the block q(x, y, k) in the training library is calculated:
SAD(x, y, k) = Σ_{i=0}^{N-1} Σ_{j=0}^{N-1} | g_L(x_0+i, y_0+j) - f_L(x, y, k, i, j) |
Here, (x_0, y_0) is the position of the top-left corner of the block to be enlarged, and g_L(i, j) is the pixel value at (i, j) in the low-resolution image to be magnified. Then, the block that best matches the block to be enlarged is found in the training library, i.e. one computes
(x_o, y_o, k_o) = arg min_{x,y,k} SAD(x, y, k)
Thus, the block represented by the set q(x_o, y_o, k_o) in the training library is the closest match to the block to be enlarged, i.e. the one at the smallest distance from it. The correspondence stored in the training database between the block represented by the set q(x_o, y_o, k_o) and the block represented by the set Q(x_o, y_o, k_o) can then be used for super-resolution enlargement. That is, the pixel values in the high-resolution image block
S(x_0, y_0) = { g_H(i, j) | 2x_0 ≤ i ≤ 2x_0+2N-1, 2y_0 ≤ j ≤ 2y_0+2N-1 }
are replaced by the pixel values in the set Q(x_o, y_o, k_o). Thus,
g_H(2x_0+i, 2y_0+j) = f_H(x_o, y_o, k_o, i, j), where 0 ≤ i ≤ 2N-1 and 0 ≤ j ≤ 2N-1.
For example, for a 4 × 4 block to be enlarged, the high-resolution 8 × 8 block stored in the training database that best matches the corresponding sample is found, and this matching block is then placed at the corresponding position of the block to be enlarged in the high-resolution image.
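The traditional on-line step just described — per-block SAD matching followed by direct block replacement — can be sketched as below. `db` is a list of (low_block, high_block) pairs as built off-line; all function names are illustrative.

```python
import numpy as np

def best_match_sad(block: np.ndarray, db):
    """Return the high-resolution block whose low-resolution sample
    minimizes the sum of absolute differences with `block`."""
    sads = [np.abs(block - q).sum() for q, _ in db]
    return db[int(np.argmin(sads))][1]

def enlarge_traditional(lo: np.ndarray, db, N=4):
    """Enlarge `lo` by 2x: tile it with non-overlapping N x N blocks and
    paste the best-matching 2N x 2N high-resolution block for each."""
    H, W = lo.shape
    hi = np.zeros((2 * H, 2 * W), dtype=float)
    for y in range(0, H - N + 1, N):          # non-overlapping tiling
        for x in range(0, W - N + 1, N):
            Q = best_match_sad(lo[y:y+N, x:x+N], db)
            hi[2*y:2*y+2*N, 2*x:2*x+2*N] = Q  # direct block replacement
    return hi
```

Because each 2N × 2N block is pasted independently, this baseline exhibits exactly the blocking artifacts at block boundaries that the following paragraphs criticize.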
The prior art has also proposed the following super-resolution methods: for each block to be enlarged, the k samples closest to the block are found in the sample database, the k high-resolution blocks matching those samples are retrieved, and the required high-resolution block is obtained as a weighted average of the k high-resolution blocks.
After a high-resolution block has been obtained for each low-resolution block in the low-resolution image, the high-resolution blocks are pieced together to form the high-resolution image, completing the super-resolution process.
In the above process, the training database is used to obtain a high-resolution block for each low-resolution block independently. The resulting high-resolution image therefore generally shows strong blockiness: false edges or false jumps appear at or around the boundary between one image block and another. Such blocking artifacts severely degrade the visual quality of the resulting high-resolution image.
In addition, in this method the training database contains one-to-many cases in which low-resolution blocks that differ only slightly correspond to high-resolution blocks that differ greatly; that is, several very different high-resolution blocks correspond to essentially the same low-resolution block. Thus, if a low-resolution block is slightly perturbed, a very different high-resolution result is produced. This phenomenon is not overcome in this type of method.
Thus, there is a need for an image super-resolution method that effectively overcomes the above disadvantages. In the invention, a novel type of sample is established to overcome these defects and obtain better super-resolution performance.
Disclosure of Invention
(1) Creation of samples with adjacent edges
In the present invention, sample extraction differs from the conventional method. Conventionally, after the low-resolution image is divided into blocks, the block size is the sample size, and information from the pixels surrounding the block is ignored. The subject group believes that if this ignored information can be utilized in image super-resolution processing, the performance of super-resolution can be greatly improved.
Therefore, the invention provides an image super-resolution method based on samples with adjacent edges. In this method, sample extraction includes, in addition to the image information of the original block, as much information as possible from the pixels adjacent to the periphery of the block in the low-resolution image. For example, when the size of the low-resolution block is N × N, the size of the extracted sample is no longer N × N but (N + m) × (N + m), as shown in fig. 1 of the specification, where m is the number of pixels in the horizontal or vertical direction by which the extracted sample extends beyond the boundaries of the block. These pixel values can be obtained from the low-resolution image.
The size of the high-resolution block corresponding to this sample remains the same as in the original method. For example, when the size of the low-resolution block is 4 × 4, the size of the high-resolution block is still 8 × 8.
Thus, stored in a traditional trained database are: a plurality of samples of size N × N, and a high resolution matching block of size (2N) × (2N) corresponding to each sample, where N × N is the size of the block of low resolution. And stored in the training library of the proposed method are: a plurality of (N + m) × (N + m) -sized samples, and a (2N) × (2N) -sized high-resolution matching block corresponding to each sample. That is, the pixel values in the low resolution sample constitute a new set:
q′(x, y, k) = { f_L(x, y, k, i, j) | x-m ≤ i ≤ x+m+N-1, y-m ≤ j ≤ y+m+N-1 }
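A sketch of extracting a sample with adjacent edges, following the index range of the set q′ above (which spans m extra pixels on each side of the N × N block). The image is first edge-padded so that samples near the border can also be extracted, as the invention's initial processing step requires; the function name is illustrative.

```python
import numpy as np

def extract_edged_sample(lo: np.ndarray, x: int, y: int, N=4, m=2):
    """Return the N x N block at (x, y) extended by its m-pixel neighborhood."""
    padded = np.pad(lo, m, mode='edge')   # replicate nearest border pixels
    # (x, y) indexes the original image; the block's top-left corner sits at
    # (x + m, y + m) in the padded image, so taking m extra pixels on every
    # side means slicing from (x, y) over N + 2m pixels per axis.
    return padded[y:y + N + 2*m, x:x + N + 2*m]
```

The center `[m:m+N, m:m+N]` of the returned array is the original block; the surrounding ring carries the neighboring-pixel information the matching criterion exploits.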
(2) Determination of edge pixels and their weights in the matching process
Since the human eye is most sensitive to the edge information in an image, in the block-matching optimization of the super-resolution process the subject group assigns edge pixels a larger weight than ordinary pixels, so as to obtain an enlarged image with sharper edges.
Many edge-detection methods have been proposed in image processing, but the first-order gradient method is sensitive to noise. Therefore, the invention adopts the second-order Laplace operator to extract edges. After Laplace processing of a one-dimensional signal, positive and negative values with large absolute values appear on the left and right sides of an edge point, and the edge point itself is a zero-crossing value. Therefore, the pixels in the image are first convolved with the template
0   1   0
1  -4   1
0   1   0
After the convolution, points whose absolute response value is smaller than a small threshold are found. Around each such point, in a Q × Q region (Q = 5 in the present invention), a positive or negative value with absolute value larger than T (T = 200 in the present invention) is sought in the directions 0°, 45°, 90°, 135°, 180°; a value of opposite sign, also with absolute value larger than T, is then sought in the opposite direction. If both are found, the point is a zero-crossing point of the two-dimensional Laplace operation, i.e. an edge point of the image; all other points are non-edge points. Edge points and non-edge points receive different weights, with the larger weight on edge points, i.e.
p(i, j) = B if (i, j) is an edge point, and p(i, j) = A otherwise, where (i, j) is the location of a pixel in the image. (In the present invention, A = 1 and B = 1.5.)
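A simplified sketch of the edge-weight map of step (2). The patent's full zero-crossing search scans five directions in a Q × Q window; here, as a common approximation, a pixel is treated as an edge point when its 4-neighbor Laplacian response changes sign across a horizontal or vertical neighbor pair with both magnitudes above T. A, B, and T follow the text; the test below uses a smaller T suited to its tiny ramp image.

```python
import numpy as np

def laplacian(img: np.ndarray) -> np.ndarray:
    """Convolve with the template [[0,1,0],[1,-4,1],[0,1,0]] (zero border)."""
    p = np.pad(img, 1)
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * img)

def edge_weights(img: np.ndarray, A=1.0, B=1.5, T=200.0) -> np.ndarray:
    """Weight map: B on (approximate) Laplacian zero-crossings, A elsewhere."""
    L = laplacian(img)
    H, W = img.shape
    w = np.full(img.shape, A)
    Lp = np.pad(L, 1)
    # Opposite-neighbor pairs: (up, down) and (left, right).
    for d1, d2 in (((0, 1), (2, 1)), ((1, 0), (1, 2))):
        a = Lp[d1[0]:d1[0]+H, d1[1]:d1[1]+W]
        b = Lp[d2[0]:d2[0]+H, d2[1]:d2[1]+W]
        zc = (a * b < 0) & (np.abs(a) > T) & (np.abs(b) > T)
        w[zc] = B          # zero-crossing between strong opposite responses
    return w
```

The returned map plays the role of p_L or p_H in the weighted SAD terms defined later.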
(3) Creation and utilization of overlapping regions
Since the present invention creates samples with neighboring-pixel information, an overlap region arises between a sample and the enlarged region in the low-resolution image. At the same time, in the present invention, after each block is enlarged, the top-left corner of the next block to be enlarged is moved by less than the length of one block, which produces overlap regions not only in the low-resolution image but also in the high-resolution image. For example, if the first block to be enlarged has size 4 × 4 with its top-left corner at (0, 0), and the next block to be enlarged has size 4 × 4 with its top-left corner at (2, 0), then the set of overlapping positions on the low-resolution image is OL_1 = { (i, j) | 2 ≤ i ≤ 3, 0 ≤ j ≤ 3 }, and the set of overlapping positions on the high-resolution image is OH_1 = { (i, j) | 4 ≤ i ≤ 7, 0 ≤ j ≤ 7 }. As the super-resolution processing proceeds, one block to be enlarged may overlap with several already-enlarged blocks.
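The worked example above can be reproduced by a small overlap computation: two n × n blocks overlap wherever their inclusive index ranges intersect, and at 2x magnification the overlap doubles in both coordinates. Function names are illustrative.

```python
def overlap_1d(a0: int, b0: int, n: int):
    """Inclusive index range shared by [a0, a0+n-1] and [b0, b0+n-1], or None."""
    lo, hi = max(a0, b0), min(a0, b0) + n - 1
    return (lo, hi) if lo <= hi else None

def block_overlap(p1, p2, n=4, scale=2):
    """Overlap of two n x n blocks at (x, y) corners p1, p2, returned as
    ((x-range, y-range) on the low-res grid, same on the high-res grid)."""
    ox = overlap_1d(p1[0], p2[0], n)
    oy = overlap_1d(p1[1], p2[1], n)
    if ox is None or oy is None:
        return None
    low = (ox, oy)
    high = ((scale * ox[0], scale * (ox[1] + 1) - 1),
            (scale * oy[0], scale * (oy[1] + 1) - 1))
    return low, high
```

With corners (0, 0) and (2, 0) this yields exactly OL_1 (columns 2..3, rows 0..3) and OH_1 (columns 4..7, rows 0..7) from the text.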
(4) Determination of matching difference between region to be amplified and blocks in training library
Because the samples carry adjacent edge pixels, this heuristic information can be exploited when searching the training library for the best sample, while the blocking-effect and one-to-many problems of traditional algorithms are avoided, so the quality of the enlarged image is better.
Meanwhile, unlike the traditional method, the proposed method considers not only the matching difference between a low-resolution block in the training library and the current region to be enlarged, but also the matching difference between the high-resolution block corresponding to that low-resolution block and the already-enlarged regions of the enlarged image, so as to obtain a better result.
After the sample and training databases are changed as above, the definition of the matching difference between the region to be magnified and the blocks in the training database during the super-resolution image magnification will also be changed somewhat to take advantage of the proposed sample pattern in the super-resolution process.
For this reason, when selecting the best matching example corresponding to each region to be enlarged, not only the values of the pixels in the region but also the values of the pixels adjacent to the region on the low-resolution image are used. Meanwhile, it is also necessary to consider the difference between the low-resolution image and the high-resolution image of the portion of overlap between the region to be enlarged and the enlarged region.
Therefore, first, calculate
SADEO(x, y, k, L_ol) = Σ_{(i,j)∈L_ol} | f_L(x, y, k, i, j) - g_L(i, j) | · p_L(i, j)
where L_ol is the ol-th intersection between the current region to be enlarged and an already-enlarged region, f_L(x, y, k, i, j) is the pixel value at (i, j) of the (x, y, k)-th sample in the training library, g_L(i, j) is the pixel value at (i, j) on the low-resolution image, and p_L(i, j) is the edge weight at (i, j) in the low-resolution image, computed as described above. Next, the matching difference between this candidate sample and all the overlap regions on the low-resolution image is calculated, i.e.
SADLO(x, y, k) = Σ_{ol} SADEO(x, y, k, L_ol)
Thus SADLO (x, y, k) represents the sum of the absolute differences of the (x, y, k) -th sample and each overlap region.
Then, the absolute difference over the overlapping parts between the high-resolution block corresponding to the (x, y, k)-th sample in the training library and each already-enlarged region is calculated, i.e.
SADEH(x, y, k, H_oh) = Σ_{(i,j)∈H_oh} | f_H(x, y, k, i, j) - g_H(i, j) | · p_H(i, j)
Here, H_oh is the oh-th intersection, on the high-resolution image, between the current region to be enlarged and an already-enlarged region. f_H(x, y, k, i, j) is the value of the pixel at (i, j) of the high-resolution block corresponding to the (x, y, k)-th sample in the training library, and g_H(i, j) is the value of the pixel in the overlapping portion between the oh-th enlarged block and the block to be enlarged. p_H(i, j) is the edge weight at (i, j) in the high-resolution image, determined as described above. Next, the sum of absolute differences between the high-resolution block corresponding to the candidate sample and the overlapping parts of the enlarged regions in the high-resolution image is calculated, i.e.
SADHO(x, y, k) = Σ_{oh} SADEH(x, y, k, H_oh)
Thus SADHO (x, y, k) represents the sum of absolute differences of the high resolution block corresponding to the (x, y, k) -th sample and the enlarged overlapping areas.
Finally, the absolute difference between the sample and the block to be enlarged over the non-overlapping area of the low-resolution image is determined, i.e. one calculates
SADNO(x, y, k) = Σ_{(i,j)∈L_no} | f_L(x, y, k, i, j) - g_L(i, j) | · p_L(i, j)
Here, L_no denotes the part of the block to be enlarged on the low-resolution image that does not overlap with any enlarged region, and p_L(i, j) is the edge weight at (i, j) in the low-resolution image, computed as described above.
Thus, the resulting matching difference corresponding to sample (x, y, k) is
SADE(x, y, k) = α·SADLO(x, y, k) + β·SADHO(x, y, k) + γ·SADNO(x, y, k)
Here, α, β, γ are three balancing factors for balancing the effect of the various differences on the total difference. (in the present invention, for convenience, α = β = γ =1)
(5) Determination of pixel values in high resolution images
In the high-resolution image, as previously described, some pixels lie in the overlap region of several enlarged blocks. The final pixel value in the high-resolution image must therefore be determined from the values of all enlarged blocks at that position. Since the pixel values in blocks with a small average absolute difference are more reliable, such blocks are given a large weight, while blocks with a large average absolute difference are given a small weight. To this end, first calculate
sum(x, y) = Σ_{i=1}^{L_o} S(i) / SADE(x_o(i), y_o(i), k_o(i))
Here, (x, y) is the position of a pixel in the high-resolution image, L_o is the total number of overlap regions covering the pixel, i indexes the i-th overlap region on the pixel, SADE(x_o(i), y_o(i), k_o(i)) is the minimum SADE value over the i-th overlap region (SADE as defined above), and S(i) is the total number of pixels used to calculate SADE(x_o(i), y_o(i), k_o(i)). Then, the pixel value at (x, y) in the high-resolution image is:
g_H*(x, y) = (1 / sum(x, y)) · Σ_{i=1}^{L_o} [ S(i) / SADE(x_o(i), y_o(i), k_o(i)) ] · f_H(x_o(i), y_o(i), k_o(i), m, n)
where f_H(x_o(i), y_o(i), k_o(i), m, n) is the value of the i-th matching block at this pixel position.
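The per-pixel fusion of step (5) reduces to a normalized weighted average whose weights are S(i)/SADE(i), i.e. inversely proportional to the size-normalized matching difference of each overlapping block. A minimal sketch (names illustrative):

```python
def fuse_pixel(candidates):
    """candidates: list of (pixel_value, sade, n_pixels_used) tuples, one per
    enlarged block covering this pixel. Returns the weighted average with
    weights S(i)/SADE(i), normalized by their sum."""
    weights = [s / max(d, 1e-12) for _, d, s in candidates]  # S(i)/SADE(i)
    total = sum(weights)
    return sum(w * v for w, (v, _, _) in zip(weights, candidates)) / total
```

A block that matched well (small SADE) thus dominates the result, while a poorly matched block contributes little.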
(6) Initial processing of low resolution and high resolution images for edge addition
Since the samples in the invention have adjacent edges, for convenience the invention adds a border around the low-resolution image, with the same width as the adjacent edge of the sample. The pixel values at the added border points are set to the values of their nearest pixels in the image.
(7) Off-line processing of the invention
The method is divided into an off-line process that builds the training database and an on-line image super-resolution process. The off-line training database is built as follows; that is, each image in the high-resolution image database is processed by:
and step 1) carrying out down-sampling according to the high-resolution graph to obtain a low-resolution graph. k =1
Step 2) Perform the initial edge-adding process described above on the low-resolution image to add an edge to it.
Step 3) Let x_L = 0, y_L = 0.
Step 4) Extract from the low-resolution image the block starting at coordinates (x_L - m, y_L - m) and ending at (x_L + N + m - 1, y_L + N + m - 1), and store it in the database. Here N × N is the size of the block and m is the width of the adjacent edge of the sample; the values of N and m should be chosen by weighing computational complexity against super-resolution performance.
Step 5) Let x_H = 2·x_L, y_H = 2·y_L.
Step 6) Extract from the high-resolution image the block starting at coordinates (x_H, y_H) and ending at (x_H + 2N - 1, y_H + 2N - 1), and store it in the database. Thus a pair is stored: a low-resolution block (sample with adjacent edges) f_L(x_L, y_L, k) and a high-resolution block f_H(x_H, y_H, k).
Step 7) Let x_L = x_L + 1.
Step 8) If x_L ≤ W - N, jump to step 4) to extract the next pair of low-resolution and high-resolution blocks, where W is the width of the image.
Step 9) Let x_L = 0, y_L = y_L + 1.
Step 10) If y_L ≤ H - N, jump to step 4) to extract the next pair of low-resolution and high-resolution blocks, where H is the height of the image.
Step 11) Let k = k + 1 and jump to step 1) to process the next high-resolution image, until all images in the training library have been processed.
Thus, through the off-line processing method, a database composed of a plurality of samples which can be used for super-resolution processing and high-resolution blocks matched with the samples is obtained.
(8) On-line process of the invention
For a given low-resolution image to be enlarged that is not in the database, the invention proceeds as follows.
Step 1) processing the initial low-resolution image, and adding edges to the low-resolution image.
Step 2) Let x_L = 0, y_L = 0.
Step 3) At this position, calculate the matching difference SADE(x, y, k) between the (x, y, k)-th stored sample in the training library (and its corresponding high-resolution block) and the low-resolution image and the already-enlarged high-resolution image.
Step 4) Find the best-matching sample in the database and the high-resolution block corresponding to the sample, i.e. at this position
(x_o, y_o, k_o) = arg min_{(x,y,k)∈SX} SADE(x, y, k)
Wherein SX is a set formed by indexes of all samples in the training database.
Step 5) Let x_L = x_L + step_x, where step_x is the distance the next block jumps in the x-axis direction on the low-resolution image. Since step_x < N, a lateral overlap region is formed.
Step 6) Jump to step 4) to find, for the next overlapping block, the best-matching image block pair in the database (the best-matching sample and its corresponding high-resolution block), until x_L ≥ W - N. Here W is the width of the image and N is the width of one low-resolution block.
Step 7) Let y_L = y_L + step_y, where step_y is the distance the next block jumps in the y-axis direction on the low-resolution image. Since step_y < N, a longitudinal overlap region is formed.
Step 8) Jump to step 4) to find, for the next overlapping block, the best-matching image block pair in the database (the best-matching sample and its corresponding high-resolution block), until y_L ≥ H - N. Here H is the height of the image and N is the height of one low-resolution block.
Step 9) Using the found high-resolution matching blocks, determine the pixel values in the high-resolution image by the proposed method described above.
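The on-line loop of steps 2) through 9) can be sketched as a sliding window with a step smaller than the block size, so that consecutive regions overlap, followed by the inverse-SADE weighted accumulation of step (5). `find_best_sample` stands in for the SADE minimization of step 4) and is an assumed callback, not defined here; it returns the chosen high-resolution block and its S/SADE weight.

```python
import numpy as np

def enlarge_online(lo, find_best_sample, N=4, step=2, scale=2):
    """Enlarge `lo` by `scale`, sliding an N x N window with the given step
    (step < N creates overlap) and blending overlapping high-res blocks
    by their weights."""
    H, W = lo.shape
    acc = np.zeros((scale * H, scale * W))   # weighted sum of block pixels
    wsum = np.zeros_like(acc)                # sum of weights per pixel
    for y in range(0, H - N + 1, step):
        for x in range(0, W - N + 1, step):
            Q, weight = find_best_sample(lo, x, y)  # high-res block + its weight
            ys, xs = scale * y, scale * x
            acc[ys:ys + scale*N, xs:xs + scale*N] += weight * Q
            wsum[ys:ys + scale*N, xs:xs + scale*N] += weight
    return acc / np.maximum(wsum, 1e-12)     # per-pixel weighted average
```

The final division implements the normalization by sum(x, y), so pixels covered by several enlarged blocks receive their weighted average rather than a hard replacement.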
In summary, the innovations of the invention are: 1) samples with adjacent edges are used in the off-line training and on-line super-resolution processes to overcome the blocking effect and the one-to-many problem of traditional methods; 2) the overlap regions and the proposed matching criterion, used during on-line enlargement, further overcome these problems; 3) in determining the pixel values in an overlap region, the final value weights each contribution inversely proportionally to the absolute difference of its overlap region, taking the pixel value at this point of the best-matching high-resolution block of that region.
The application of the invention comprises an off-line process and an on-line process. The off-line process is completed once, yielding a large number of the proposed low-resolution blocks (samples with adjacent edges) and their corresponding high-resolution blocks, which are stored in the training database. The on-line process is applied to each image to be enlarged, using the trained database and the overlap-region idea to obtain the high-resolution image.
The conception, the specific structure and the technical effects of the present invention will be further described with reference to the accompanying drawings to fully understand the objects, the features and the effects of the present invention.
Drawings
FIG. 1 is a schematic diagram of a sample of the adjacent edges of the strip proposed by the present invention;
FIG. 2 is a flow chart of a method for establishing a sample with adjacent edges and a corresponding database offline according to the present invention;
FIG. 3 is a flow chart of the present invention for on-line image super-resolution magnification;
FIG. 4 shows the face images used for training, taken from a portion of the FERET database;
fig. 5 shows experimental results of the invention and of the prior art on face images from the FERET database that are not in the training image library. The columns from left to right are: (a) input low-resolution image; (b) result of the best existing sample-based super-resolution method; (c) result of the proposed super-resolution method based on samples with adjacent edges; (d) actual high-resolution image.
FIG. 6 shows experimental results of the invention and of the prior art on face images from the ID-card database that are not in the training image library. The columns from left to right are: (a) input low-resolution image; (b) result of the best existing sample-based super-resolution method; (c) result of the proposed super-resolution method based on samples with adjacent edges; (d) actual high-resolution image.
Detailed Description
The embodiments of the present invention will be described in detail below with reference to the accompanying drawings: the present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the protection scope of the present invention is not limited to the following embodiments.
Before performing super-resolution on an image, the invention needs to establish a training database. The library stores the proposed samples with edges and the high-resolution blocks that match them. The invention obtains this database as follows. First, a large number of high-resolution images are collected; these can be obtained from free image databases on the web, such as the FERET database used by the invention or the Yale University face database. Each collected high-resolution image is then processed as described below with reference to fig. 2:
and step 1) carrying out down-sampling on the high-resolution image according to the high-resolution image to obtain a low-resolution image. Let k =1(k =1 indicates the first image in the database and k is the index of the second image).
Step 2) Perform the initial edge-adding process on the low-resolution image. Since the proposed samples have adjacent edges, for convenience of processing an edge is added around the whole low-resolution image, so that samples with adjacent edges can also be extracted for blocks near the border. The width of the added edge in the invention is 2.
Step 3) Let x_L = 0, y_L = 0; here (x_L, y_L) are the coordinates in the low-resolution image of the top-left corner of the low-resolution sample, excluding its edge.
Step 4) Extract from the low-resolution image the block starting at coordinates (x_L - 2, y_L - 2) and ending at (x_L + 4 + 2 - 1, y_L + 4 + 2 - 1), and store it in the database. Here 4 × 4 is the block size selected in the invention, and 2 is the width of the adjacent edge of the sample.
Step 5) Let x_H = 2·x_L, y_H = 2·y_L; (x_H, y_H) are the coordinates in the high-resolution image of the top-left corner of the block that matches the sample.
Step 6) Extract from the high-resolution image the block starting at coordinates (x_H, y_H) and ending at (x_H + 2·4 - 1, y_H + 2·4 - 1), and store it in the database. Thus a pair is stored: a low-resolution block (sample with adjacent edges) f_L(x_L, y_L, k) and a high-resolution block f_H(x_H, y_H, k), where the size of the sample with adjacent edges is 6 × 6 and the size of the corresponding high-resolution matching block is 8 × 8.
Step 7) Let x_L = x_L + 1: the top-left corner of the sample moves one pixel to the right on the low-resolution image, to extract the next sample.
Step 8) If x_L ≤ W - 4, jump to step 4) to extract the next pair of low-resolution and high-resolution blocks, where W is the width of the image. Otherwise, the samples of this row have been extracted, and extraction proceeds with the next row.
Step 9) let x_L = 0, y_L = y_L + 1, i.e., the upper-left corner of the sample moves down to the beginning of the next row in the low-resolution image.
Step 10) if y_L ≤ H − 4, jump to step 4) to extract the next pair of low-resolution and high-resolution blocks, where H is the height of the image. Otherwise, the sample-extraction process for the current image is finished.
Step 11) let k = k + 1 and jump to step 1) to process the next high-resolution image, until all images in the training library have been processed.
Thus, through this off-line processing, a database is obtained consisting of many samples usable for super-resolution processing together with the high-resolution blocks matched to them.
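The off-line steps above can be sketched in Python/NumPy as follows. This is a minimal sketch under stated assumptions: the function name `build_database`, the use of nearest-neighbor down-sampling, and the flat-list storage are illustrative choices, not the patent's exact implementation.

```python
import numpy as np

def build_database(high_res_images, N=4, m=2, scale=2):
    """Extract (low-res sample with adjacent edges, high-res block) pairs.

    N: low-res block size (4 in the patent), m: adjacent-edge width (2).
    Returns parallel lists of (N+2m)x(N+2m) low-res samples and
    (scale*N)x(scale*N) high-res blocks, keyed by (x_L, y_L, k).
    """
    lo_samples, hi_blocks, keys = [], [], []
    for k, hi in enumerate(high_res_images, start=1):
        # Step 1: down-sample the high-res image (nearest-neighbor here).
        lo = np.asarray(hi, dtype=np.float64)[::scale, ::scale]
        H, W = lo.shape
        # Step 2: add a border of width m around the whole low-res image.
        lo_pad = np.pad(lo, m, mode='edge')
        # Steps 3-10: slide the sample one pixel at a time over the image.
        for yL in range(H - N + 1):
            for xL in range(W - N + 1):
                # (x_L, y_L) indexes the un-padded image; in padded
                # coordinates the sample starts at (x_L, y_L) and spans
                # (N + 2m) pixels, i.e. (x_L - m .. x_L + N + m - 1).
                lo_samples.append(lo_pad[yL:yL + N + 2*m, xL:xL + N + 2*m])
                xH, yH = scale * xL, scale * yL
                hi_blocks.append(np.asarray(hi, dtype=np.float64)
                                 [yH:yH + scale*N, xH:xH + scale*N])
                keys.append((xL, yL, k))
    return lo_samples, hi_blocks, keys
```

For a single 16 × 16 training image the sketch yields an 8 × 8 low-resolution image and (8 − 4 + 1)² = 25 sample/block pairs, each sample 6 × 6 would be the case for m = 1; with the patent's m = 2 each sample is 8 × 8 in this padded indexing and each high-resolution block is 8 × 8.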
After the database is established, the invention can perform super-resolution processing on any low-resolution image to enlarge it. The enlargement process is shown in figure 3 of the specification and is as follows:
step 1) processing the initial low-resolution image by adding a border of width 2 to it.
Step 2) let x_L = 0, y_L = 0, where (x_L, y_L) are the coordinates in the low-resolution image of the upper-left corner of the block to be enlarged.
Step 3) at this position, calculate the matching difference SADE(x, y, k) for the (x, y, k)-th sample stored in the training library, using the sample, the high-resolution block corresponding to the sample, the low-resolution image, and the enlarged high-resolution image. The matching difference includes the difference between the sample and the current low-resolution image block on the low-resolution image, including their overlapping areas, and also the difference, over the overlapping areas, between the already-enlarged blocks on the high-resolution image and the high-resolution block corresponding to the sample.
Step 4) find the best matching sample in the database and the high-resolution block corresponding to that sample, i.e., at this position compute

(x_o, y_o, k_o) = argmin_{(x, y, k) ∈ SX} SADE(x, y, k)

where SX is the set formed by the indexes of all samples in the training database.
Step 5) let x_L = x_L + step_x, where step_x is the distance the next block jumps in the x-axis direction on the low-resolution image; step_x = 2 in the present invention. Since step_x < N, a lateral overlap region is formed. The invention holds that fully utilizing the information in the overlap between the current enlargement region and the previous enlargement region, on both the low-resolution and the high-resolution image, can further reduce the blocking effect and the one-to-many problem in super-resolution processing.
Step 6) jump to step 4) and find the best matching image-block pair in the database for the next overlapped block, i.e., the best matching sample and its corresponding high-resolution block, until x_L ≥ W − N, where W is the width of the image and N = 4 is the width of one low-resolution block.
Step 7) let y_L = y_L + step_y, where step_y = 2 in the present invention and is the distance the next block jumps in the y-axis direction on the low-resolution image. Since step_y < N, a longitudinal overlap region is formed.
Step 8) jump to step 4) and find the best matching image-block pair in the database for the next overlapped block, i.e., the best matching sample and its corresponding high-resolution block, until y_L ≥ H − N, where H is the height of the image and N = 4 is the height of one low-resolution block.
Step 9) using the found high-resolution matching blocks, determine the pixel values of the high-resolution image with the method proposed above in the present invention. In the overlapped areas on the high-resolution image, the final high-resolution pixel values are obtained by a weighted-average method.
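A simplified sketch of the on-line pass above, assuming a database of (sample, high-resolution block) pairs like those produced by the off-line stage. Note this sketch matches on plain SAD over the padded low-resolution sample only; the patented SADE criterion additionally weights edge pixels and compares already-enlarged high-resolution overlaps. The function name `super_resolve` and the specific weight formula `1/(1 + mean SAD)` are illustrative assumptions.

```python
import numpy as np

def super_resolve(lo, lo_samples, hi_blocks, N=4, m=2, step=2, scale=2):
    """For each overlapping NxN low-res block, pick the database pair with
    the smallest SAD against the padded sample region, paste its high-res
    block, and weighted-average the overlapping high-res pixels."""
    lo = np.asarray(lo, dtype=np.float64)
    H, W = lo.shape
    lo_pad = np.pad(lo, m, mode='edge')
    S = N + 2 * m                                # sample size with edges
    acc = np.zeros((scale * H, scale * W))       # weighted pixel sums
    wsum = np.zeros_like(acc)                    # accumulated weights
    samples = np.stack([np.ravel(s) for s in lo_samples])
    for yL in range(0, H - N + 1, step):         # step < N -> overlaps
        for xL in range(0, W - N + 1, step):
            patch = lo_pad[yL:yL + S, xL:xL + S].ravel()
            sad = np.abs(samples - patch).sum(axis=1)
            best = int(np.argmin(sad))
            # Small mean absolute difference -> large weight.
            w = 1.0 / (1.0 + sad[best] / patch.size)
            yH, xH = scale * yL, scale * xL
            acc[yH:yH + scale*N, xH:xH + scale*N] += w * hi_blocks[best]
            wsum[yH:yH + scale*N, xH:xH + scale*N] += w
    return acc / np.maximum(wsum, 1e-12)
```

With step = 2 and N = 4, adjacent blocks overlap by half their width, so each high-resolution pixel is typically covered by several pasted blocks and the division by the accumulated weights implements the weighted average of step 9.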
The invention uses a human face image database as the experimental object and evaluates the results from two aspects, subjective and objective. Subjective evaluation refers to how the human eye perceives the details of the reconstructed image through observation; objective evaluation mainly uses the MSE, PSNR, and MSSIM (the average of SSIM) criteria. Here MSE is the mean square error, defined as:
MSE = (1 / (2W × 2H)) × Σ_{y=1..2H} Σ_{x=1..2W} (g*_H(x, y) − g_H(x, y))²

here, 2W × 2H is the size of the high-resolution image, g*_H(x, y) is the pixel value at (x, y) of the high-resolution image enlarged from the low-resolution image, and g_H(x, y) is the pixel value of the original high-resolution image at (x, y).
PSNR is defined as:
PSNR = 10 × log10(max² / MSE)
here, max denotes the maximum value of possible pixel values in the image. SSIM is defined as:
SSIM = ((2 μ_x μ_y + C1)(2 σ_xy + C2)) / ((μ_x² + μ_y² + C1)(σ_x² + σ_y² + C2))

where μ_x, μ_y are the mean values of the original high-resolution image and the reconstructed image, respectively, C1, C2 are two constants, σ_x, σ_y are the standard deviations of the original high-resolution image and the reconstructed image, and σ_xy is the covariance of the two images. MSSIM is the average of the SSIM values of all blocks in the image.
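The three criteria above can be computed as in the following sketch. The constants C1 = (0.01 × 255)² and C2 = (0.03 × 255)² follow the common SSIM convention and are an assumption here, since the patent does not fix C1 and C2.

```python
import numpy as np

def mse(ref, rec):
    # Mean square error over the 2W x 2H high-resolution image.
    ref = np.asarray(ref, dtype=np.float64)
    rec = np.asarray(rec, dtype=np.float64)
    return np.mean((ref - rec) ** 2)

def psnr(ref, rec, max_val=255.0):
    # PSNR = 10 * log10(max^2 / MSE), in dB.
    return 10.0 * np.log10(max_val ** 2 / mse(ref, rec))

def ssim_block(x, y, C1=6.5025, C2=58.5225):
    # SSIM for one block; mu = means, var = variances, cov = covariance.
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + C1) * (2*cov + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))
```

MSSIM would then be the mean of `ssim_block` over a tiling of the two images into blocks.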
In experiment one, training uses 240 face images from the FERET database, covering 40 different people with 6 pictures of different poses each; some of the face images in the training set are shown in figure 4 of the specification. The original high-resolution images are 120 × 120 and the down-sampled low-resolution images are 60 × 60, i.e., the magnification factor is 2.
In the experiment, four face images not in the off-line training database are used as input test images. These four images are also only weakly correlated with the images already in the database. The experimental results are shown in figure 5 of the specification.
As can be seen from fig. 5, the super-resolution enlargement method based on samples with adjacent-edge information according to the present invention reconstructs better than the best existing sample-based super-resolution method. The edges of the enlarged images are more prominent, the details are clearer, and the results are closer to the actual high-resolution images. In fig. 5, the first woman is clearer overall; the right cheek and eyes of the second woman are clearer and the eye contours are improved; the left eye and face of the third man are markedly sharper, with some improvement in overall clarity; and the lips of the fourth man are improved compared with the existing method. Overall, images processed with the method of the present invention surpass the best existing sample-based method in both overall sharpness and edge sharpness. For the super-resolution images reconstructed in fig. 5, the MSE and MSSIM between the original high-resolution images and the images obtained by the best existing sample-based method, and between the original high-resolution images and the images obtained by the method of the present invention, were calculated, giving tables 1 and 2. The tables show that the proposed method is closer to the original high-resolution images: the MSE is smaller, the MSSIM is larger, and the reconstruction is better than that of the existing sample-based reconstruction method. In addition, after statistically averaging the results of 100 test pictures, the MSE and MSSIM of the method remain better than those of the best existing sample-based method.
Table 1. Comparison of the MSE index on actual FERET face-library images between the best existing sample-based method and the method proposed by the present invention
Table 2. Comparison of the MSSIM index on actual FERET face-library images between the best existing sample-based method and the method proposed by the present invention
In the second experiment, 120 identity-card face photos were collected as the picture database to be trained; the original high-resolution images are 120 × 120 and the down-sampled low-resolution images are 60 × 60. Face images unrelated to the images in the database were used for testing; the test results are shown in figure 6 of the specification. For the results of fig. 6, the PSNR and SSIM values between the original images and the images obtained by the best existing sample-based method, and between the original images and the images reconstructed by the proposed method, were calculated; the results are shown in tables 3 and 4. The tables show that the images reconstructed by the proposed method have larger PSNR and higher MSSIM, so the objective indices indicate better performance. Fig. 6 also shows that the subjective effect of the proposed method is better and closer to the actual high-resolution images. The high-resolution images reconstructed by the method of the present invention are superior to those of the best current sample-based method in details such as the mouth, eyes, and nose. Overall, the method of the invention surpasses the best current sample-based method and enlarges images more clearly.
Table 3. Comparison of the PSNR index on the actual identity-card face image library between the best current sample-based method and the method proposed by the present invention
Table 4. Comparison of the MSSIM index on the actual identity-card face image library between the best current sample-based method and the method proposed by the present invention
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (3)

1. A super-resolution image enlargement method based on samples with adjacent edges, characterized by comprising an off-line process and an on-line process, wherein in the off-line process a training database required for super-resolution enlargement is established, and in the on-line process any given image is enlarged; for a position with coordinates (x_L, y_L) in the low-resolution image, the proposed sample with adjacent edges is the square low-resolution image block covering the region from (x_L − m, y_L − m) to (x_L + N + m − 1, y_L + N + m − 1), of size (N + 2m) × (N + 2m), and the corresponding high-resolution image block is the square image block in the high-resolution image from (2x_L, 2y_L) to (2x_L + 2N − 1, 2y_L + 2N − 1), of size 2N × 2N; a large number of high-resolution images available for training is prepared first; for the off-line database-building process, the method comprises:
step A1) carrying out down-sampling on the high-resolution image to obtain a low-resolution image, in preparation for extracting the samples with adjacent edges and the corresponding high-resolution image blocks; letting k = 1, where k is the index of the image in the database and k = 1 denotes the first image in the database;
step A2) carrying out initial edge-adding processing on the low-resolution image: because the sample with adjacent edges includes the adjacent edges, for convenience of processing a border is added around the whole low-resolution image, the width of the border being the same as the width of the adjacent edges in the sample;
step A3) letting x_L = 0, y_L = 0, where (x_L, y_L) are the coordinates in the low-resolution image of the upper-left corner of the low-resolution sample, excluding the border;
step A4) extracting the low-resolution image block from coordinates (x_L − m, y_L − m) to (x_L + N + m − 1, y_L + N + m − 1) in the low-resolution image and storing it in the database, where (N + 2m) × (N + 2m) is the size of the low-resolution image block and m is the width of the adjacent edges of the sample; in the invention the selected low-resolution block size is 4 × 4 and the adjacent-edge width of the sample is 2;
step A5) letting x_H = 2x_L, y_H = 2y_L, where (x_H, y_H) are the coordinates in the high-resolution image of the upper-left corner of the high-resolution image block matched with the sample with adjacent edges;
step A6) extracting the high-resolution image block from coordinates (x_H, y_H) to (x_H + 2N − 1, y_H + 2N − 1) in the high-resolution image and storing it in the database; in this way a pair is stored, consisting of a sample with adjacent edges f_L(x_L, y_L, k) and a high-resolution image block f_H(x_H, y_H, k), where the size of the sample with adjacent edges is (N + 2m) × (N + 2m) and the size of the corresponding high-resolution block is 2N × 2N;
step A7) letting x_L = x_L + 1, i.e., shifting the sample with adjacent edges right by one pixel on the low-resolution image to extract the next sample with adjacent edges;
step A8) if x_L ≤ W − N, jumping to step A4) to extract the next pair of low-resolution and high-resolution image blocks, where W is the width of the low-resolution image; otherwise, the samples with adjacent edges in this row have all been extracted, and the next row of samples with adjacent edges is extracted;
step A9) letting x_L = 0, y_L = y_L + 1, i.e., moving the upper-left corner of the sample with adjacent edges to the beginning of the next row in the low-resolution image;
step A10) if y_L ≤ H − N, jumping to step A4) to extract the next pair of low-resolution and high-resolution image blocks, where H is the height of the low-resolution image; otherwise, finishing the extraction of samples with adjacent edges from the current low-resolution image;
step A11) letting k = k + 1 and jumping to step A1) to process the next high-resolution image, until all high-resolution images in the training library have been processed;
thus, by this off-line processing, a database is obtained which consists of many samples with adjacent edges usable for super-resolution processing together with the high-resolution image blocks matched to them;
for the on-line process, the method comprises:
step B1) processing the initial low-resolution image by adding a border to it, the pixel values of the border pixels to be filled around the low-resolution image being determined by the value of the nearest pixel inside the low-resolution image;
step B2) letting x_L = 0, y_L = 0, where (x_L, y_L) are the coordinates in the low-resolution image of the upper-left corner of the low-resolution image block to be enlarged;
step B3) at this position:
using the formula SADEO(x, y, k, L_ol) = Σ_{(i,j) ∈ L_ol} p_L(i, j) · |f_L(x, y, k, i, j) − g_L(i, j)|, calculating the sum SADEO(x, y, k, L_ol) of the absolute pixel differences between the sample with adjacent edges (x, y, k) and the enlarged image block over the ol-th overlapping region in the low-resolution image, where L_ol is the ol-th intersection between the region currently to be enlarged and the already-enlarged regions, f_L(x, y, k, i, j) is the pixel value at (i, j) of the sample with adjacent edges (x, y, k) in the training library, g_L(i, j) is the pixel value at (i, j) of the low-resolution image, and p_L(i, j) is the weight determined by whether the pixel at (i, j) in the low-resolution image is an edge pixel;
using the formula SADLO(x, y, k) = Σ_{ol} SADEO(x, y, k, L_ol), calculating the sum SADLO(x, y, k) of the absolute pixel differences over all overlapping regions between the sample with adjacent edges (x, y, k) and the enlarged image blocks in the low-resolution image;
using the formula SADEH(x, y, k, H_oh) = Σ_{(i,j) ∈ H_oh} p_H(i, j) · |f_H(x, y, k, i, j) − g_H(i, j)|, calculating the sum SADEH(x, y, k, H_oh) of the absolute pixel differences between the high-resolution image block corresponding to the sample with adjacent edges (x, y, k) and the enlarged image block over the oh-th overlapping region in the high-resolution image, where H_oh is the oh-th intersection between the region currently to be enlarged and the already-enlarged regions, f_H(x, y, k, i, j) is the pixel value at (i, j) of the block corresponding to the sample with adjacent edges (x, y, k) in the training library, g_H(i, j) is the pixel value of the overlap between the oh-th enlarged block and the block to be enlarged, and p_H(i, j) is the weight determined by whether the pixel at (i, j) in the high-resolution image is an edge pixel;
using the formula SADHO(x, y, k) = Σ_{oh} SADEH(x, y, k, H_oh), calculating the sum SADHO(x, y, k) of the absolute pixel differences over all overlapping regions between the high-resolution image block corresponding to the sample with adjacent edges (x, y, k) and the enlarged image blocks in the high-resolution image;
using the formula SADNO(x, y, k) = Σ_{(i,j) ∈ L_no} p_L(i, j) · |f_L(x, y, k, i, j) − g_L(i, j)|, calculating the sum SADNO(x, y, k) of the absolute pixel differences between the sample with adjacent edges (x, y, k) and the image block to be enlarged over the non-overlapping region in the low-resolution image, where L_no denotes the part of the low-resolution image that does not overlap with the sample with adjacent edges (x, y, k), g_L(i, j) is the pixel value at (i, j) of the low-resolution image, and p_L(i, j) is the weight determined by whether the pixel at (i, j) in the low-resolution image is an edge pixel;
using the formula SADE(x, y, k) = α·SADLO(x, y, k) + β·SADHO(x, y, k) + γ·SADNO(x, y, k), obtaining the comprehensive matching difference SADE(x, y, k) of the sample with adjacent edges (x, y, k); SADE(x, y, k) is the value obtained after all matching differences, i.e., sums of absolute pixel differences, are comprehensively considered, where α, β, γ are three balance factors balancing the influence of each difference on the total difference; in the invention α = β = γ = 1;
in this way, the matching difference SADE(x, y, k) corresponding to the (x, y, k)-th sample with adjacent edges and its high-resolution image block stored in the training library is calculated from the difference between the sample with adjacent edges and the current low-resolution image block on the low-resolution image, including the overlapping areas, and from the difference, over the overlapping areas, between the already-enlarged high-resolution image and the high-resolution image block corresponding to the sample with adjacent edges, while giving different weights to edge and non-edge pixels;
the template is utilized when determining the edge pixel pointThe two-dimensional second-order laplacian of (a);
step B4) finding the best matching sample with adjacent edges in the database and the high-resolution image block corresponding to it, i.e., calculating at this position

(x_o, y_o, k_o) = argmin_{(x, y, k) ∈ SX} SADE(x, y, k)

where SX is the set formed by the indexes of all samples with adjacent edges in the training database;
step B5) letting x_L = x_L + step_x, where step_x is the distance the next block jumps in the x-axis direction on the low-resolution image; step_x = 2 in the invention, and because step_x < N, a lateral overlapping region is formed;
step B6) jumping to step B4) and finding the best matching image-block pair in the database for the next overlapping area, i.e., the best matching sample with adjacent edges and the high-resolution image block corresponding to it, until x_L ≥ W − N, where W is the width of the image and N = 4 is the width of one low-resolution image block;
step B7) letting y_L = y_L + step_y, where step_y = 2 in the invention and is the distance the next block jumps in the y-axis direction on the low-resolution image; because step_y < N, a longitudinal overlapping region is formed;
step B8) jumping to step B4) and finding the best matching image-block pair in the database for the next overlapping area, i.e., the best matching sample with adjacent edges and the high-resolution image block corresponding to it, until y_L ≥ H − N, where N = 4 is the height of one low-resolution image block and H is the height of the image;
step B9) determining the pixel values in the high-resolution image using the found high-resolution matching blocks; in the overlapping areas on the high-resolution image, the final high-resolution pixel values are obtained by a weighted-average method, following the principle that a block with a small mean absolute difference receives a large weight and a block with a large mean absolute difference receives a small weight.
2. The method of claim 1, wherein the process of determining edge pixels specifically comprises:
firstly, performing a convolution operation on the pixels in the image with the two-dimensional Laplacian template; then finding the points whose absolute values after the convolution are small; then, in a Q × Q area around each found point, searching along the 0°, 45°, 90°, 135°, and 180° directions for a positive or negative value whose absolute value is larger than T, and then searching in the opposite direction for a value of the opposite sign whose absolute value is also larger than T; in the invention Q = 5 and T = 200; if such values are found, the found point is a zero-crossing point after the two-dimensional Laplacian operation, i.e., an edge point in the image; the other points are non-edge points; the weights of edge points and non-edge points in the image are A and B respectively, i.e.

p(i, j) = A if (i, j) is an edge point, and p(i, j) = B if (i, j) is a non-edge point,

where (i, j) is the position of a pixel in the image; in the invention A = 1 and B = 1.5;
these weights are used respectively when calculating, with the formula for SADEO, the absolute differences over the low-resolution overlapping regions, when calculating, with the formula for SADNO, the absolute differences over the non-overlapping region of the low-resolution block, and when calculating, with the formula for SADEH, the absolute differences over the high-resolution overlapping regions, where p_L(i, j) and p_H(i, j) are the weights p(i, j) on the low-resolution image and the high-resolution image, respectively.
3. The method according to claim 1, wherein obtaining the final pixel values of the high-resolution image by the weighted-average method specifically comprises:
first, calculating the weight

w(i) = (S(i) / SADE(x_o(i), y_o(i), k_o(i))) / Σ_{j=1..L_o} (S(j) / SADE(x_o(j), y_o(j), k_o(j)))

Here, (x, y) is the position of a pixel in the high-resolution image, L_o is the total number of overlapping regions covering the pixel, i denotes the i-th overlapping region covering the pixel, SADE(x_o(i), y_o(i), k_o(i)) is the minimum SADE value for the i-th overlapping region, and S(i) is the total number of pixels used in calculating SADE(x_o(i), y_o(i), k_o(i)); the pixel value at (x, y) on the high-resolution image is then

g*_H(x, y) = Σ_{i=1..L_o} w(i) · f_H(x_o(i), y_o(i), k_o(i), m, n)

where f_H(x_o(i), y_o(i), k_o(i), m, n) is the value of the i-th matching block at this pixel.
CN201410141448.5A 2014-04-03 Image super-resolution method based on band adjacent side sample Active CN103903241B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410141448.5A CN103903241B (en) 2014-04-03 Image super-resolution method based on band adjacent side sample


Publications (2)

Publication Number Publication Date
CN103903241A CN103903241A (en) 2014-07-02
CN103903241B true CN103903241B (en) 2016-11-30


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Example-Based Super Resolution;William T.Freeman et al.;《IEEE Computer Graphics & Applications》;20021231;第22卷(第2期);全文 *
图像超分辨率重建处理算法研究;万雪芬 等;《激光与红外》;20111130;第41卷(第11期);全文 *
图像超分辨率重建技术综述;王春霞 等;《计算机技术与发展》;20110531;第21卷(第5期);全文 *
基于例子的图像超分辨率重建技术研究;范新胜;《中国优秀硕士学位论文全文数据库 信息科技辑》;20111215;全文 *

Similar Documents

Publication Publication Date Title
CN110992262B (en) Remote sensing image super-resolution reconstruction method based on generation countermeasure network
Yan et al. Single image superresolution based on gradient profile sharpness
Le Meur et al. Hierarchical super-resolution-based inpainting
Sun et al. Super-resolution from internet-scale scene matching
WO2021022929A1 (en) Single-frame image super-resolution reconstruction method
Wang et al. Fast image upsampling via the displacement field
CN106683048A (en) Image super-resolution method and image super-resolution equipment
CN111626927B (en) Binocular image super-resolution method, system and device adopting parallax constraint
CN109711268B (en) Face image screening method and device
Han et al. Multi-level U-net network for image super-resolution reconstruction
CN107633482A (en) A kind of super resolution ratio reconstruction method based on sequence image
CN107767357B (en) Depth image super-resolution method based on multi-direction dictionary
CN106447654B (en) Quality evaluating method is redirected based on statistics similarity and the image of two-way conspicuousness fidelity
CN104992403A (en) Hybrid operator image redirection method based on visual similarity measurement
CN108876776B (en) Classification model generation method, fundus image classification method and device
CN116664677B (en) Sight estimation method based on super-resolution reconstruction
Kim et al. Sredgenet: Edge enhanced single image super resolution using dense edge detection network and feature merge network
CN111179173B (en) Image splicing method based on discrete wavelet transform and gradient fusion algorithm
Wu et al. A new sampling algorithm for high-quality image matting
CN109741258B (en) Image super-resolution method based on reconstruction
CN110097530A (en) Based on multi-focus image fusing method super-pixel cluster and combine low-rank representation
CN108492250A (en) The method of depth image Super-resolution Reconstruction based on the guiding of high quality marginal information
CN103903241B (en) Image super-resolution method based on band adjacent side sample
CN105469369B (en) Digital picture filtering method and system based on segmentation figure
Wang et al. Remote sensing image magnification study based on the adaptive mixture diffusion model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200212

Address after: 310052 floor 2, No. 1174, Binhe Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Patentee after: Hangzhou fog Technology Co., Ltd.

Address before: 321004 Zhejiang province Jinhua City Yingbin Road No. 688, Zhejiang Normal University

Patentee before: Zhejiang Normal University

TR01 Transfer of patent right

Effective date of registration: 20201202

Address after: 314400 Chunlan West Road, Haining Nongfa District, Haining City, Jiaxing City, Zhejiang Province

Patentee after: Zhejiang Sen Bao Textile Technology Co.,Ltd.

Address before: 310052 floor 2, No. 1174, Binhe Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province

Patentee before: Hangzhou fog Technology Co.,Ltd.