CN108122262B - Sparse representation single-frame image super-resolution reconstruction algorithm based on main structure separation - Google Patents

Sparse representation single-frame image super-resolution reconstruction algorithm based on main structure separation

Info

Publication number
CN108122262B
CN108122262B (application CN201611065372.8A)
Authority
CN
China
Prior art keywords
resolution
image
main structure
dictionary
low
Legal status
Active
Application number
CN201611065372.8A
Other languages
Chinese (zh)
Other versions
CN108122262A (en)
Inventor
隋修宝
吴健
高航
陈钱
顾国华
刘源
吴少迟
吴骁斌
匡晓东
刘程威
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Application filed by Nanjing University of Science and Technology
Priority to CN201611065372.8A
Publication of CN108122262A
Application granted
Publication of CN108122262B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F 18/28: Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • G06T 3/4053: Super resolution, i.e. output image resolution higher than sensor resolution
    • G06T 11/003: Reconstruction from projections, e.g. tomography
    • G06T 2207/20081: Training; Learning
    • G06T 2211/416: Exact reconstruction
    • G06V 10/40: Extraction of image or video features
    • G06V 10/513: Sparse representations

Abstract

The invention discloses a sparse representation single-frame image super-resolution reconstruction algorithm based on main structure separation. The method introduces relative total variation into the super-resolution problem for the first time, so that the separated main structure has sharp edges and strong self-similarity; this improves the reconstruction quality while avoiding the complex computation of traditional methods and raising efficiency. Because the complexity of the texture part is reduced, various texture patterns can be reconstructed with an external dictionary, which overcomes the problem of traditional dictionary-learning super-resolution methods that a dictionary of limited size cannot cope with complex pattern variations, so the method can handle different types of images.

Description

Sparse representation single-frame image super-resolution reconstruction algorithm based on main structure separation
Technical Field
The invention relates to image super-resolution technology, and in particular to a sparse representation single-frame image super-resolution reconstruction algorithm based on main structure separation.
Background
With the development of the information age, images are acquired ever more widely. Limited by the imaging system, including the point spread function (PSF) and spectral aliasing effects, some images cannot reach the quality people expect. Compared with spending large sums on imaging equipment, enhancing image quality through software algorithms saves time and labor. Image super-resolution is one such technology, and it is widely applied in fields such as medical imaging, satellite imaging, target recognition and video surveillance.
Early super-resolution algorithms were mainly based on multi-frame images, such as optical flow, POCS, IBP and Bayesian estimation. These algorithms require estimating the motion displacement between frames and registering them at sub-pixel accuracy. In practice it is difficult to obtain a suitable low-resolution image sequence, and high-precision motion estimation and registration are also difficult, so such methods are poorly suited to practical applications. In subsequent development, single-frame image super-resolution methods gradually became mainstream; they apply prior knowledge of images to solve the many-to-one ill-posed problem. Basic algorithms estimate unknown image points with simple functions; although fast, they contribute little to recovering unknown image detail. More sophisticated reconstruction-based methods integrate various kinds of prior knowledge, such as gradients and edges, into a cost function so as to recover some image detail; however, their effect depends largely on the priors used and is therefore often unsatisfactory. The most rapidly developing approach in recent years is dictionary learning, which recovers an image by learning the mapping between high- and low-resolution image blocks and achieves excellent results, for example the neighborhood embedding (NE) method and the sparse representation method (SC). SC trains a corresponding pair of high- and low-resolution dictionaries by machine learning and reconstructs the input image with these dictionaries; because the dictionaries express image blocks accurately and quickly, SC has received wide attention. However, this kind of method depends on the size of the dictionary: if the dictionary is too large it is time-consuming, and if it is too small it cannot cope with complex patterns.
Main structure separation is mainly applied to edge extraction; total variation, weighted least squares, bilateral filtering and similar methods are commonly used, but they cannot remove image texture well. In the super-resolution field, main structure separation has mainly relied on total variation: the separated main structure part is reconstructed with total-variation reconstruction, which consumes a large amount of time because of its complex computation, the texture part is simply interpolated, and finally the two parts are added to obtain the super-resolution image. The effectiveness of this approach is very limited, and it has therefore gradually fallen out of use.
Disclosure of Invention
The invention aims to provide a sparse representation single-frame image super-resolution reconstruction algorithm based on main structure separation, which combines main structure separation with dictionary learning, reduces the dependence on dictionary size and training samples, improves the quality of the reconstructed image, and reduces the time complexity as far as possible.
The technical solution for realizing the purpose of the invention is as follows: a main structure separation-based sparse representation single-frame image super-resolution reconstruction algorithm comprises the following steps:
step 1: main structure separation of input original low resolution image by RTV, IL=SL+TLIn which ILRepresenting an input low resolution image, SLMain structural image, T, representing a low resolution imageLTexture images representing low-resolution images, wherein the images are all represented as a column vector set consisting of small image blocks;
Step 2: down-sample the original low-resolution image I_L to obtain a down-sampled low-resolution image I_LL, decompose I_LL by RTV to obtain its main structure S_LL, and calculate the adaptive dictionary size Z from the image information according to the following formula:
[adaptive dictionary size formula, rendered as an image in the original document]
where m and n are the numbers of rows and columns of I_LL, c is the self-similarity coefficient of the image blocks of I_LL, and ρ is a fixed parameter;
then perform self-driven K-SVD dictionary training on S_L and S_LL to obtain the corresponding main structure high- and low-resolution dictionaries;
and step 3: high-low resolution dictionary pair S using main structureLPerforming super-resolution reconstruction to obtain a high-resolution main structure SH
And 4, step 4: texture part T of image by using offline trained texture dictionaryLDirectly carrying out super-resolution reconstruction to obtain corresponding high-resolution texture TH
And 5: high resolution master structure SHAnd high resolution texture THOverlapping to obtain complete high-resolution image IH=SH+TH
Step 6: perform iterative back-projection on the obtained high-resolution image I_H so that it satisfies the constraint imposed by the original low-resolution image I_L, according to the following formula:
Î_H^(n+1) = Î_H^(n) + u · ( ( I_L − ( Î_H^(n) ∗ B ) ↓ ) ↑ ∗ B )
where Î_H^(n) is the high-resolution estimated image obtained after the n-th iteration, u is the gradient descent step length, B is the blur kernel of bicubic interpolation, ↓ and ↑ denote down-sampling to the low-resolution grid and up-sampling back to the high-resolution grid, and the initial image Î_H^(0) is I_H;
Step 7: after the iterations are finished, obtain the final output image I_out.
Compared with the prior art, the invention has the remarkable advantages that:
(1) By decomposing the input image into main structure and texture and then processing the two separately, the requirements on dictionary size and training samples are markedly reduced compared with performing dictionary-learning reconstruction directly on the mixed pattern, so that the quality of the reconstructed image is greatly improved while the computational complexity is reduced.
(2) The image is decomposed with relative total variation, which yields sharper main structure edges and purer texture, so that the main structure part of the final image can be reconstructed directly from its self-similarity, reducing the running time by a factor of tens.
(3) An adaptive dictionary size computation function is proposed that yields the optimal dictionary size for the characteristics of different main structure images, so that over-fitting is prevented and the running time is further reduced, making the proposed algorithm more efficient while improving the quality of the main structure image.
Drawings
FIG. 1 is a flow chart of a main structure separation-based sparse representation single-frame image super-resolution reconstruction algorithm of the present invention.
Fig. 2 shows the reconstruction results when dictionaries of different sizes are used for the main structure part in Embodiment 1 of the present invention.
Fig. 3 shows the reconstruction results when the main structure part uses the optimal dictionary size and the adaptive dictionary size in Embodiment 1 of the present invention.
Fig. 4 shows the influence of external dictionaries obtained with different training schemes on the reconstruction quality of the compared classical algorithm in Embodiment 1 of the present invention, where "1000 full" means a dictionary of size 1000 trained on the whole training image library, "500 full" means a dictionary of size 500 trained on the whole training image library, "500 half" means a dictionary of size 500 trained on half of the training image library, and the relative PSNR is the difference between the PSNR of the classical algorithm's reconstruction and the PSNR of bicubic interpolation.
Fig. 5 shows the influence of external dictionaries obtained with different training schemes on the reconstruction quality of the proposed algorithm in Embodiment 1 of the present invention, where "1000 full" means a dictionary of size 1000 trained on the whole training image library, "500 full" means a dictionary of size 500 trained on the whole training image library, "500 half" means a dictionary of size 500 trained on half of the training image library, and the relative PSNR is the difference between the PSNR of the proposed algorithm's reconstruction and the PSNR of bicubic interpolation.
Fig. 6 compares the visual reconstruction quality of the proposed algorithm and other classical algorithms on the image "foreman", where (a) is the high-resolution original, (b) the result of bicubic interpolation, (c) the result of the conventional dictionary-learning super-resolution algorithm, (d) the result of the improved dictionary-learning super-resolution algorithm, and (e) the result of the proposed method.
Fig. 7 compares the visual reconstruction quality of the proposed algorithm and other classical algorithms on the image "comic", where (a) is the high-resolution original, (b) the result of bicubic interpolation, (c) the result of the conventional dictionary-learning super-resolution algorithm, (d) the result of the improved dictionary-learning super-resolution algorithm, and (e) the result of the proposed method.
Fig. 8 compares the visual reconstruction quality of the proposed algorithm and other classical algorithms on the image "baby", where (a) is the high-resolution original, (b) the result of bicubic interpolation, (c) the result of the conventional dictionary-learning super-resolution algorithm, (d) the result of the improved dictionary-learning super-resolution algorithm, and (e) the result of the proposed method.
Detailed Description
The present invention is described in further detail below with reference to the attached drawing figures.
The invention relates to a sparse representation single-frame image super-resolution reconstruction algorithm with main structure separation. The principle is as follows: the input low-resolution image is separated by the efficient relative total variation method into a main structure component containing only sharp edges and a component containing only texture. The two components are then processed differently, so that the two different pattern types can be reconstructed in a targeted manner. For the main structure component, a self-driven dictionary learning algorithm based on self-similarity is proposed; it makes full use of the original edge information of the main structure to reconstruct the high-resolution main structure image, saving a large amount of time compared with the complex computation of traditional algorithms. The texture part is reconstructed with an external dictionary; since the texture is free of edge interference, the dependence on the dictionary is reduced, and the dictionary needs neither a large amount of complex sample training nor a large size. With this processing framework, the computation time can be greatly reduced while the quality of the reconstructed image is improved, making the algorithm efficient and easy to use.
With reference to fig. 1, a sparse representation single-frame image super-resolution reconstruction algorithm based on main structure separation includes the following steps:
step 1: principal structure separation of an input original low resolution image by correlated total variation (RTV), IL=SL+TLIn which ILRepresenting an input low resolution image, SLMain structural image, T, representing a low resolution imageLThe method comprises the following steps of representing texture images of low-resolution images, wherein the images are all represented as a column vector set formed by small image blocks, and the method specifically comprises the following steps:
1-1) Input the low-resolution image I_L, the size parameter σ and the strength parameter λ;
1-2) Calculate the weight information:
u_x = G_σ ∗ [ 1 / ( |G_σ ∗ ∂S/∂x| + ε ) ],  w_x = 1 / ( |∂S/∂x| + ε_s )
u_y = G_σ ∗ [ 1 / ( |G_σ ∗ ∂S/∂y| + ε ) ],  w_y = 1 / ( |∂S/∂y| + ε_s )
where u_x is the weight of the neighborhood gradient information in the horizontal direction, w_x the weight of the pixel gradient information in the horizontal direction, u_y the weight of the neighborhood gradient information in the vertical direction, w_y the weight of the pixel gradient information in the vertical direction, S the main structure image being estimated, G_σ a Gaussian filter, ∗ denotes convolution, ∂S/∂x and ∂S/∂y are the horizontal and vertical derivatives, and ε and ε_s are arbitrarily small values used to stabilize the numerical solution.
1-3) Solve the linear equation
( 1 + λ L^t ) · v_S^(t+1) = v_I,  where  L^t = C_x^T U_x^t W_x^t C_x + C_y^T U_y^t W_y^t C_y,
in which 1 denotes the identity matrix, C_x is the Toeplitz matrix obtained by forward differencing of the discrete gradient in the horizontal direction, C_y the Toeplitz matrix obtained by forward differencing of the discrete gradient in the vertical direction, λ the balance parameter, t the iteration number, U_x^t, U_y^t, W_x^t and W_y^t diagonal matrices whose diagonal entries are the corresponding values of u_x, u_y, w_x and w_y, v_I the column-vector form of I_L, and v_S^(t+1) the column-vector form of the main structure image after t+1 iterations.
1-4) Iterate 1-2) and 1-3) three times to obtain the separated S_L and T_L.
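To make steps 1-1) to 1-4) concrete, the following Python/NumPy sketch shows one plausible implementation of the RTV separation described above (the weights u_x, w_x, u_y, w_y, the forward-difference operators C_x, C_y, and three passes of the linear system). It follows the published relative-total-variation solver rather than any code disclosed in the patent; the function names, default parameter values and boundary handling are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.sparse import diags, eye, identity, kron
from scipy.sparse.linalg import spsolve

def _forward_diff_ops(h, w):
    """Forward-difference (Toeplitz-like) operators C_x, C_y for a column-stacked h x w image."""
    dx = diags([-np.ones(w), np.ones(w - 1)], [0, 1]).tolil(); dx[-1, :] = 0
    dy = diags([-np.ones(h), np.ones(h - 1)], [0, 1]).tolil(); dy[-1, :] = 0
    Cx = kron(dx.tocsr(), eye(h))    # horizontal difference (across columns)
    Cy = kron(eye(w), dy.tocsr())    # vertical difference (within a column)
    return Cx.tocsr(), Cy.tocsr()

def rtv_decompose(I, sigma=3.0, lam=0.01, eps=1e-3, eps_s=0.02, n_iter=3):
    """Step 1: split image I into main structure S and texture T = I - S (steps 1-1 to 1-4)."""
    h, w = I.shape
    Cx, Cy = _forward_diff_ops(h, w)
    vI = I.flatten('F')
    S = I.copy()                                           # initial structure estimate
    for _ in range(n_iter):                                # three iterations (step 1-4)
        # Step 1-2: weights computed from the current structure estimate.
        gx = np.diff(S, axis=1, append=S[:, -1:])          # horizontal derivative
        gy = np.diff(S, axis=0, append=S[-1:, :])          # vertical derivative
        ux = gaussian_filter(1.0 / (np.abs(gaussian_filter(gx, sigma)) + eps), sigma)
        uy = gaussian_filter(1.0 / (np.abs(gaussian_filter(gy, sigma)) + eps), sigma)
        wx = 1.0 / (np.abs(gx) + eps_s)
        wy = 1.0 / (np.abs(gy) + eps_s)
        # Step 1-3: solve (1 + lam * L) v_S = v_I with L = Cx' Ux Wx Cx + Cy' Uy Wy Cy.
        Ux, Wx = diags(ux.flatten('F')), diags(wx.flatten('F'))
        Uy, Wy = diags(uy.flatten('F')), diags(wy.flatten('F'))
        L = Cx.T @ (Ux @ Wx) @ Cx + Cy.T @ (Uy @ Wy) @ Cy
        S = spsolve(identity(h * w) + lam * L, vI).reshape(h, w, order='F')
    return S, I - S                                        # S_L and T_L when I is I_L
```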
Step 2: down-sample the original low-resolution image I_L to obtain a down-sampled low-resolution image I_LL, decompose I_LL by RTV to obtain its main structure S_LL, and calculate the adaptive dictionary size Z from the image information according to the following formula:
[adaptive dictionary size formula, rendered as an image in the original document]
where m and n are the numbers of rows and columns of I_LL, c is the self-similarity coefficient of the image blocks of I_LL, and ρ is a fixed parameter.
Then perform self-driven K-SVD dictionary training on S_L and S_LL to obtain the corresponding main structure high- and low-resolution dictionaries. The specific steps are as follows:
2-1) Enlarge S_LL by bicubic interpolation to obtain the low-resolution training image S_LM; at the same time input S_L and an initial low-resolution dictionary D_L; go to step 2-2);
2-2) According to the objective equation
min ‖x_i‖_0  subject to  ‖s_i − D_L x_i‖_2^2 ≤ ε′  for every column s_i of S_LM,
use the OMP algorithm to obtain the sparse coding X = {x_1, x_2, x_3, ..., x_i} of S_LM over D_L, where ε′ is an arbitrarily small value; go to step 2-3);
2-3) Fix the sparse coding X and the initial low-resolution dictionary D_L. Denote the k-th column of D_L by d_k and the k-th row of X, which is multiplied by d_k, by x_T^k, and rewrite the objective function as
‖S_LM − D_L X‖_F^2 = ‖ E_k − d_k x_T^k ‖_F^2,  with  E_k = S_LM − Σ_{j≠k} d_j x_T^j,
where E_k denotes the error remaining in the training image when the component of atom d_k is removed; go to step 2-4);
2-4) Transform E_k and x_T^k: in x_T^k only the coefficients at the non-zero positions are retained, and in E_k only the columns produced by the product of d_k with those non-zero positions are retained, thereby obtaining the transformed error E_k^R; go to step 2-5);
2-5) Perform SVD decomposition on E_k^R and update d_k; go to step 2-6);
2-6) Return to 2-3) and repeat 30 times to obtain the final low-resolution dictionary D_L, which is the main structure low-resolution dictionary;
2-7) Obtain the corresponding main structure high-resolution dictionary D_H from the following equation:
D_H = S_L X^T (X X^T)^(-1)   (6)
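The self-driven training of steps 2-1) to 2-7) can be sketched as follows, assuming the main structure images have already been cut into patch matrices whose columns correspond one-to-one (S_L as the target patches, S_LM as the bicubic-interpolated counterpart from step 2-1). The sparse-coding stage uses a fixed number of non-zero coefficients instead of the error tolerance ε′ for simplicity, scikit-learn's OMP routine is used, and the function names are illustrative; this is a rough sketch under these assumptions, not the patent's reference implementation.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd(Y, D, n_nonzero=3, n_iter=30):
    """Plain K-SVD (steps 2-2 to 2-6): learn dictionary D (d x K) for signal matrix Y (d x N)."""
    D = D / np.linalg.norm(D, axis=0, keepdims=True)         # unit-norm atoms for OMP
    for _ in range(n_iter):                                  # 30 passes (step 2-6)
        X = orthogonal_mp(D, Y, n_nonzero_coefs=n_nonzero)   # step 2-2: sparse coding of Y on D
        for k in range(D.shape[1]):                          # steps 2-3 to 2-5: update atom d_k
            omega = np.nonzero(X[k, :])[0]                   # signals that actually use d_k
            if omega.size == 0:
                continue
            # E_k restricted to the non-zero positions of the k-th row of X (step 2-4).
            E_k = Y[:, omega] - D @ X[:, omega] + np.outer(D[:, k], X[k, omega])
            U, s, Vt = np.linalg.svd(E_k, full_matrices=False)   # step 2-5: SVD of E_k^R
            D[:, k] = U[:, 0]                                # updated atom d_k
            X[k, omega] = s[0] * Vt[0, :]                    # updated non-zero part of row k
    return D, X

def train_main_structure_dictionaries(S_L, S_LM, Z, n_nonzero=3):
    """Steps 2-1) to 2-7): self-driven K-SVD training of the main-structure dictionary pair.

    S_L  -- d x N matrix of main-structure patches at the input resolution (columns).
    S_LM -- d x N matrix of the matching bicubic-interpolated low-resolution patches.
    Z    -- adaptive dictionary size computed from the image information in step 2.
    """
    rng = np.random.default_rng(0)
    D_L0 = S_LM[:, rng.choice(S_LM.shape[1], Z, replace=False)]  # initial low-res dictionary
    D_L, X = ksvd(S_LM, D_L0, n_nonzero=n_nonzero)               # main-structure low-res dictionary
    D_H = S_L @ X.T @ np.linalg.pinv(X @ X.T)                    # step 2-7: D_H = S_L X^T (X X^T)^-1
    return D_L, D_H
```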
And step 3: high-low resolution dictionary pair S using main structureLPerforming super-resolution reconstruction to obtain a high-resolution main structure SHThe method comprises the following specific steps:
3-1) Enlarge each 3×3 image block s_L in S_L by bicubic interpolation to obtain an intermediate image block s_M; go to 3-2);
3-2) Use the OMP algorithm to find, on the main structure low-resolution dictionary D_L, the sparse coding representation coefficients x that best express s_L; go to 3-3);
3-3) Multiply the sparse coding representation coefficients x by the main structure high-resolution dictionary D_H to obtain the main structure high-resolution image block s_H; go to 3-4);
3-4) Return to 3-2) and compute the corresponding high-resolution block for every image block in S_L; place the obtained high-resolution blocks at the corresponding positions of the high-resolution grid and average the overlapping areas to obtain the final main structure image S_H.
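A rough sketch of the patch-wise reconstruction of steps 3-1) to 3-4) is given below. It assumes the dictionaries from the previous step were trained on bicubic-interpolated patches, so each 3×3 block is first enlarged to the high-resolution patch size (as in step 3-1) before being sparse-coded; the stride, patch handling and function name are illustrative simplifications.

```python
import numpy as np
from scipy.ndimage import zoom
from sklearn.linear_model import orthogonal_mp

def reconstruct_main_structure(S_L, D_L, D_H, scale=2, patch=3, n_nonzero=3):
    """Steps 3-1) to 3-4): sparse-code each low-resolution main-structure patch on D_L,
    synthesize the high-resolution patch with D_H, and average the overlaps on the output grid."""
    h, w = S_L.shape
    hp = patch * scale                                          # high-resolution patch size
    S_H = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(S_H)                                 # overlap counts for averaging
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            s_l = S_L[i:i + patch, j:j + patch]                 # 3x3 low-resolution block
            s_m = zoom(s_l, scale, order=3).reshape(-1, 1)      # step 3-1: bicubic blow-up
            x = orthogonal_mp(D_L, s_m, n_nonzero_coefs=n_nonzero)  # step 3-2: code on D_L
            s_h = (D_H @ x).reshape(hp, hp)                     # step 3-3: synthesize with D_H
            I, J = i * scale, j * scale                         # step 3-4: place in the HR grid
            S_H[I:I + hp, J:J + hp] += s_h
            weight[I:I + hp, J:J + hp] += 1.0
    return S_H / np.maximum(weight, 1.0)                        # average the overlapping areas
```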
And 4, step 4: texture part T of image by using offline trained texture dictionaryLDirectly carrying out super-resolution reconstruction to obtain corresponding high-resolution texture THThe method comprises the following specific steps:
4-1) carrying out RTV decomposition on a high-resolution picture in an external image library specially used for dictionary learning to obtain a corresponding high-resolution texture image, and simultaneously carrying out down-sampling on the high-resolution picture to obtain a low-resolution image;
4-2) performing K-SVD dictionary training on the high-resolution texture image and the low-resolution image to obtain a high-resolution dictionary and a low-resolution dictionary of the texture;
4-3) reconstructing the texture image of the input low-resolution image on the texture dictionary to obtain a high-resolution texture image.
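The offline texture stage of steps 4-1) and 4-2) reuses the building blocks sketched above (rtv_decompose from step 1 and the coupled K-SVD training from step 2). The helper below is again only an illustrative sketch: the non-overlapping patch extraction, even-size cropping and parameter values are assumptions, not details taken from the patent.

```python
import numpy as np
from scipy.ndimage import zoom

def extract_patches(img, patch, step=None):
    """Cut an image into vectorized patch columns (simplified, non-overlapping by default)."""
    step = step or patch
    h, w = img.shape
    cols = [img[i:i + patch, j:j + patch].reshape(-1)
            for i in range(0, h - patch + 1, step)
            for j in range(0, w - patch + 1, step)]
    return np.stack(cols, axis=1)

def train_texture_dictionaries(hr_images, scale=2, dict_size=500, patch=3):
    """Steps 4-1) and 4-2): build matched texture patch matrices from an external
    high-resolution library and train a coupled texture dictionary pair."""
    lo, hi = [], []
    for img in hr_images:
        img = img[:img.shape[0] // scale * scale, :img.shape[1] // scale * scale]  # even crop
        _, t_h = rtv_decompose(img)                    # step 4-1: high-resolution texture layer
        _, t_l = rtv_decompose(img[::scale, ::scale])  # texture of the down-sampled image
        t_lm = zoom(t_l, scale, order=3)               # bicubic blow-up, as in step 2-1
        hi.append(extract_patches(t_h, patch * scale))
        lo.append(extract_patches(t_lm, patch * scale))
    Y_H, Y_LM = np.concatenate(hi, axis=1), np.concatenate(lo, axis=1)
    # Step 4-2: same coupled K-SVD training as used for the main structure dictionaries.
    return train_main_structure_dictionaries(Y_H, Y_LM, dict_size)
```

Under these assumptions, step 4-3) then amounts to applying the step-3 patch-wise routine to T_L with the texture dictionary pair in place of the main structure dictionaries.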
And 5: high resolution master structure SHAnd high resolution texture THSuperposing to obtain a complete high-resolution image IH=SH+TH
Step 6: perform iterative back-projection on the obtained high-resolution image I_H so that it satisfies the constraint imposed by the original low-resolution image I_L, according to the following formula:
Î_H^(n+1) = Î_H^(n) + u · ( ( I_L − ( Î_H^(n) ∗ B ) ↓ ) ↑ ∗ B )
where Î_H^(n) is the high-resolution estimated image obtained after the n-th iteration, u is the gradient descent step length, B is the blur kernel of bicubic interpolation, ↓ and ↑ denote down-sampling to the low-resolution grid and up-sampling back to the high-resolution grid, and the initial image Î_H^(0) is I_H.
Step 7: after the iterations are finished, obtain the final output image I_out.
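Steps 6 and 7 amount to a standard iterative back-projection refinement of I_H against I_L. The minimal sketch below uses scipy.ndimage.zoom as the down-/up-sampling operator and a small Gaussian kernel as a stand-in for the bicubic blur kernel B of the patent; the step length u, the number of iterations and these substitutions are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def back_project(I_H, I_L, scale=2, u=1.0, sigma_b=1.0, n_iter=20):
    """Steps 6-7: refine I_H so that blurring and down-sampling it reproduces I_L."""
    est = I_H.copy()                                     # the initial image is I_H
    for _ in range(n_iter):
        # Simulate the low-resolution observation of the current estimate (blur B, then downsample).
        simulated = zoom(gaussian_filter(est, sigma_b), 1.0 / scale, order=3)
        residual = I_L - simulated                       # low-resolution reconstruction error
        # Back-project the residual: upsample it and filter with B, scaled by the step length u.
        est = est + u * gaussian_filter(zoom(residual, scale, order=3), sigma_b)
    return est                                           # final output image I_out
```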
Example 1
With reference to fig. 6, fig. 7 and fig. 8, the three pictures "foreman", "comic" and "baby" are super-resolved with the proposed sparse representation single-frame image super-resolution reconstruction algorithm based on main structure separation at a magnification factor of 2; the peak signal-to-noise ratio of the reconstructed images with respect to the original high-resolution images and the running time are recorded and compared with other recent algorithms.
Table 1-1 Peak signal-to-noise ratio (PSNR) and running time of the pictures "foreman", "comic" and "baby" after 2× magnification
[table rendered as an image in the original document]
With reference to figs. 2-5: figs. 2 and 3 show that the proposed adaptive dictionary size reconstructs better high-resolution images than other dictionary sizes and adapts well to the characteristics of different images.
As can be seen from figs. 4 and 5, under the same dictionary training scheme the proposed algorithm outperforms the other algorithms, and when the training scheme is degraded the reconstruction quality of the proposed algorithm drops far less and its performance remains greatly superior, which confirms the assumption that the main structure separation scheme depends only weakly on the dictionary.
Combining table 1-1 with figs. 6-8, it can be clearly observed that the proposed algorithm is significantly better than the other algorithms in peak signal-to-noise ratio (PSNR) and gives better visual results, with more detail, sharper edges and fewer artifacts.
In summary, the present invention introduces relative total variation into image super-resolution reconstruction for the first time, decomposes the original input image into a main structure part containing only edges and a part containing only texture, and then processes the two parts separately. The invention further proposes a self-driven dictionary learning algorithm to reconstruct the separated main structure part, which improves image quality while reducing computational complexity. The texture image is reconstructed with an external redundant dictionary; thanks to the decomposition, the texture part is hardly disturbed by edge information and therefore contains no overly complex pattern modes, which removes the dependence on dictionary size and training samples and further improves the quality of the final reconstructed image.

Claims (5)

1. A single-frame image super-resolution reconstruction method based on main structure separation and sparse representation is characterized by comprising the following steps:
step 1: main structure separation of input original low resolution image by RTV, IL=SL+TLIn which ILRepresenting an input low resolution image, SLMain structural image, T, representing a low resolution imageLTexture images representing low-resolution images, wherein the images are all represented as a column vector set consisting of small image blocks;
performing RTV decomposition on the input original low-resolution image I_L comprises the following steps:
1-1) inputting the low-resolution image I_L, the size parameter σ and the strength parameter λ;
1-2) calculating the weight information:
u_x = G_σ ∗ [ 1 / ( |G_σ ∗ ∂S/∂x| + ε ) ],  w_x = 1 / ( |∂S/∂x| + ε_s )
u_y = G_σ ∗ [ 1 / ( |G_σ ∗ ∂S/∂y| + ε ) ],  w_y = 1 / ( |∂S/∂y| + ε_s )
wherein u_x is the weight of the neighborhood gradient information in the horizontal direction, w_x the weight of the pixel gradient information in the horizontal direction, u_y the weight of the neighborhood gradient information in the vertical direction, w_y the weight of the pixel gradient information in the vertical direction, S the finally obtained main structure image, G_σ a Gaussian filter, ∗ denotes convolution, ∂S/∂x and ∂S/∂y are the horizontal and vertical derivatives, and ε and ε_s are arbitrarily small values used to stabilize the numerical solution;
1-3) solving the linear equation
( 1 + λ L^t ) · v_S^(t+1) = v_I,  where  L^t = C_x^T U_x^t W_x^t C_x + C_y^T U_y^t W_y^t C_y,
wherein 1 denotes the identity matrix, C_x is the Toeplitz matrix obtained by forward differencing of the discrete gradient in the horizontal direction, C_y the Toeplitz matrix obtained by forward differencing of the discrete gradient in the vertical direction, λ the balance parameter, t the iteration number, U_x^t, U_y^t, W_x^t and W_y^t diagonal matrices whose diagonal entries are the corresponding values of u_x, u_y, w_x and w_y, v_I the column-vector form of I_L, and v_S^(t+1) the main structure image of the low-resolution image, in column-vector form, after t+1 iterations;
1-4) iterating 1-2) and 1-3) three times to obtain the separated S_L and T_L;
step 2: down-sampling the original low-resolution image I_L to obtain a down-sampled low-resolution image I_LL, decomposing I_LL by RTV to obtain its main structure S_LL, and calculating the adaptive dictionary size Z from the image information according to the following formula:
[adaptive dictionary size formula, rendered as an image in the original document]
wherein m and n are the numbers of rows and columns of I_LL, c is the self-similarity coefficient of the image blocks of I_LL, and ρ is a fixed parameter;
then performing self-driven K-SVD dictionary training on S_L and S_LL to obtain the corresponding main structure high- and low-resolution dictionaries;
and step 3: high-low resolution dictionary pair S using main structureLPerforming super-resolution reconstruction to obtain a high-resolution main structure SH
And 4, step 4: texture part T of image by using offline trained texture dictionaryLDirectly carrying out super-resolution reconstruction to obtain corresponding high-resolution texture TH
And 5: high resolution master structure SHAnd high resolution texture THSuperposing to obtain a complete high-resolution image IH=SH+TH
step 6: performing iterative back-projection on the obtained high-resolution image I_H so that it satisfies the constraint imposed by the original low-resolution image I_L, according to the following formula:
Î_H^(n+1) = Î_H^(n) + u · ( ( I_L − ( Î_H^(n) ∗ B ) ↓ ) ↑ ∗ B )
wherein Î_H^(n) is the high-resolution estimated image obtained after the n-th iteration, u is the gradient descent step length, B is the blur kernel of bicubic interpolation, ↓ and ↑ denote down-sampling to the low-resolution grid and up-sampling back to the high-resolution grid, and Î_H^(0) is the initial image before iteration;
step 7: after the iterations are finished, obtaining the final output image I_out.
2. The single-frame image super-resolution reconstruction method based on main structure separation and sparse representation according to claim 1, characterized in that the self-driven K-SVD dictionary training in step 2 comprises the following specific steps:
2-1) enlarging S_LL by bicubic interpolation to obtain the low-resolution training image S_LM, and at the same time inputting S_L and an initial low-resolution dictionary D_L; going to step 2-2);
2-2) according to the objective equation
min ‖x_i‖_0  subject to  ‖s_i − D_L x_i‖_2^2 ≤ ε′  for every column s_i of S_LM,
using the OMP algorithm to obtain the sparse coding X = {x_1, x_2, x_3, ..., x_n} of S_LM over D_L, wherein n represents the number of columns of the sparse coding X and ε′ is an arbitrarily small value; going to step 2-3);
2-3) fixing the sparse coding X and the initial low-resolution dictionary D_L, denoting the k-th column of D_L by d_k and the k-th row of X, which is multiplied by d_k, by x_T^k, and rewriting the objective function as
‖S_LM − D_L X‖_F^2 = ‖ E_k − d_k x_T^k ‖_F^2,  with  E_k = S_LM − Σ_{j≠k} d_j x_T^j,
wherein E_k denotes the error remaining in the training image when the component of atom d_k is removed; going to step 2-4);
2-4) transforming E_k and x_T^k: in x_T^k only the coefficients at the non-zero positions are retained, and in E_k only the columns produced by the product of d_k with those non-zero positions are retained, thereby obtaining the transformed error E_k^R; going to step 2-5);
2-5) performing SVD decomposition on E_k^R and updating d_k; going to step 2-6);
2-6) returning to 2-3) and repeating 30 times to obtain the final low-resolution dictionary D_L, which is the main structure low-resolution dictionary;
2-7) obtaining the corresponding main structure high-resolution dictionary D_H from the following equation:
D_H = S_L X^T (X X^T)^(-1)   (6).
3. The single-frame image super-resolution reconstruction method based on main structure separation and sparse representation according to claim 1, characterized in that the main structure super-resolution reconstruction in step 3 comprises the following specific steps:
3-1) enlarging each 3×3 image block s_L in S_L by bicubic interpolation to obtain an intermediate image block s_M; going to 3-2);
3-2) using the OMP algorithm to find, on the main structure low-resolution dictionary D_L, the sparse coding representation coefficients x that best express s_L; going to 3-3);
3-3) multiplying the sparse coding representation coefficients x by the main structure high-resolution dictionary D_H to obtain the main structure high-resolution image block s_H; going to 3-4);
3-4) returning to 3-2) and computing the corresponding high-resolution block for every image block in S_L, placing the obtained high-resolution blocks at the corresponding positions of the high-resolution grid, and averaging the overlapping areas to obtain the final main structure image S_H.
4. The single-frame image super-resolution reconstruction method based on main structure separation and sparse representation according to claim 1, characterized in that the texture super-resolution reconstruction in step 4 comprises the following specific steps:
4-1) carrying out RTV decomposition on the high-resolution picture in the external image library to obtain a corresponding high-resolution texture image, and simultaneously carrying out down-sampling on the high-resolution picture to obtain a low-resolution image;
4-2) performing K-SVD dictionary training on the high-resolution texture image and the low-resolution image to obtain a high-resolution dictionary and a low-resolution dictionary of the texture;
4-3) reconstructing the texture image of the input low-resolution image on the texture dictionary to obtain a high-resolution texture image.
5. The single-frame image super-resolution reconstruction method based on main structure separation and sparse representation according to claim 4, characterized in that the external image library is a set of high-resolution images used exclusively for dictionary training.
CN201611065372.8A 2016-11-28 2016-11-28 Sparse representation single-frame image super-resolution reconstruction algorithm based on main structure separation Active CN108122262B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611065372.8A CN108122262B (en) 2016-11-28 2016-11-28 Sparse representation single-frame image super-resolution reconstruction algorithm based on main structure separation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611065372.8A CN108122262B (en) 2016-11-28 2016-11-28 Sparse representation single-frame image super-resolution reconstruction algorithm based on main structure separation

Publications (2)

Publication Number Publication Date
CN108122262A CN108122262A (en) 2018-06-05
CN108122262B 2021-05-07

Family

ID=62223733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611065372.8A Active CN108122262B (en) 2016-11-28 2016-11-28 Sparse representation single-frame image super-resolution reconstruction algorithm based on main structure separation

Country Status (1)

Country Link
CN (1) CN108122262B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215761A (en) * 2019-07-12 2021-01-12 华为技术有限公司 Image processing method, device and equipment
CN110443754B (en) * 2019-08-06 2022-09-13 安徽大学 Method for improving resolution of digital image
CN110675318B (en) * 2019-09-10 2023-01-03 中国人民解放军国防科技大学 Sparse representation image super-resolution reconstruction method based on main structure separation
CN112737595B (en) * 2020-12-28 2023-10-24 南京航空航天大学 Reversible projection compressed sensing method based on FPGA


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10613208B2 (en) * 2015-05-15 2020-04-07 Texas Instruments Incorporated Low complexity super-resolution technique for object detection in frequency modulation continuous wave radar

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102142137A (en) * 2011-03-10 2011-08-03 西安电子科技大学 High-resolution dictionary based sparse representation image super-resolution reconstruction method
CN102722865A (en) * 2012-05-22 2012-10-10 北京工业大学 Super-resolution sparse representation method
CN102968766A (en) * 2012-11-23 2013-03-13 上海交通大学 Dictionary database-based adaptive image super-resolution reconstruction method
CN103093444A (en) * 2013-01-17 2013-05-08 西安电子科技大学 Image super-resolution reconstruction method based on self-similarity and structural information constraint
CN103077511A (en) * 2013-01-25 2013-05-01 西安电子科技大学 Image super-resolution reconstruction method based on dictionary learning and structure similarity
CN103077505A (en) * 2013-01-25 2013-05-01 西安电子科技大学 Image super-resolution reconstruction method based on dictionary learning and structure clustering
CN103150713A (en) * 2013-01-29 2013-06-12 南京理工大学 Image super-resolution method of utilizing image block classification sparse representation and self-adaptive aggregation
CN105844590A (en) * 2016-03-23 2016-08-10 武汉理工大学 Image super-resolution reconstruction method and system based on sparse representation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Multi-scale Dictionary for Single Image Super-resolution; Kaibing Zhang et al.; IEEE Xplore; 2012-07-26; pp. 1114-1121 *
Research on image denoising and super-resolution reconstruction based on sparse representation (基于稀疏表示的图像去噪和超分辨率重建研究); 李珅; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2015-04-15; vol. 2015, no. 04; pp. I138-61 *

Also Published As

Publication number Publication date
CN108122262A (en) 2018-06-05

Similar Documents

Publication Publication Date Title
CN110969577B (en) Video super-resolution reconstruction method based on deep double attention network
Jiji et al. Single‐frame image super‐resolution using learned wavelet coefficients
CN108122262B (en) Sparse representation single-frame image super-resolution reconstruction algorithm based on main structure separation
CN110443768B (en) Single-frame image super-resolution reconstruction method based on multiple consistency constraints
CN105631807B (en) The single-frame image super-resolution reconstruction method chosen based on sparse domain
CN110675347B (en) Image blind restoration method based on group sparse representation
CN108830791B (en) Image super-resolution method based on self sample and sparse representation
CN108764368B (en) Image super-resolution reconstruction method based on matrix mapping
CN113808032A (en) Multi-stage progressive image denoising algorithm
CN108765330A (en) Image de-noising method and device based on the joint constraint of global and local priori
CN114723630A (en) Image deblurring method and system based on cavity double-residual multi-scale depth network
CN108460723B (en) Bilateral total variation image super-resolution reconstruction method based on neighborhood similarity
CN110675318B (en) Sparse representation image super-resolution reconstruction method based on main structure separation
CN116563100A (en) Blind super-resolution reconstruction method based on kernel guided network
CN107146202B (en) Image blind deblurring method based on L0 regularization and fuzzy kernel post-processing
CN107481189B (en) Super-resolution image reconstruction method based on learning sparse representation
CN114022809A (en) Video motion amplification method based on improved self-coding network
CN113096032A (en) Non-uniform blur removing method based on image area division
CN113240581A (en) Real world image super-resolution method for unknown fuzzy kernel
CN110211037B (en) Image super-resolution method based on multi-stage sparse dictionary learning
Laghrib et al. An improved PDE-constrained optimization fluid registration for image multi-frame super resolution
CN116797456A (en) Image super-resolution reconstruction method, system, device and storage medium
CN112348745B (en) Video super-resolution reconstruction method based on residual convolutional network
CN114638761A (en) Hyperspectral image panchromatic sharpening method, device and medium
Esmaeilzehi et al. UPDCNN: A new scheme for image upsampling and deblurring using a deep convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant