CN113436078A - Self-adaptive image super-resolution reconstruction method and device - Google Patents


Info

Publication number
CN113436078A
CN113436078A (application CN202110912643.3A; granted publication CN113436078B)
Authority
CN
China
Prior art keywords
image
images
super
low
frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110912643.3A
Other languages
Chinese (zh)
Other versions
CN113436078B (en)
Inventor
安丽军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Novartis Film Technology Jiangsu Co Ltd
Original Assignee
Novartis Film Technology Jiangsu Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Novartis Film Technology Jiangsu Co Ltd
Priority to CN202110912643.3A
Publication of CN113436078A
Application granted
Publication of CN113436078B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G06T5/00 Image enhancement or restoration
    • G06T5/70
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20064 Wavelet transform [DWT]

Abstract

The invention relates to the technical field of image reconstruction and discloses a self-adaptive image super-resolution reconstruction method comprising the following steps: normalizing the input image and sampling the normalized image twice to obtain two sampled images; denoising the sampled images with a dual-tree wavelet algorithm to obtain a denoised input image; extracting the gray levels of the denoised image to obtain the gray matrix of the image, and applying singular value decomposition to obtain fractional-order representations of the image; extracting features from the fractional-order images of different orders with a multi-scale channel network to obtain detail feature images of the image at each order; and superimposing the detail feature images of the different orders onto the input image to obtain the super-resolution image. The invention also provides a self-adaptive image super-resolution reconstruction device. The invention thus realizes self-adaptive super-resolution reconstruction of images.

Description

Self-adaptive image super-resolution reconstruction method and device
Technical Field
The invention relates to the technical field of image reconstruction, in particular to a self-adaptive image super-resolution reconstruction method and device.
Background
During image capture and transmission, many subjective and objective factors (for example, outdated acquisition equipment or a poor shooting environment) introduce noise into the image, contaminating its detail and edge information. How to denoise the acquired image has therefore become a hot topic of current research.
Traditional image denoising algorithms, such as DWT-based algorithms, achieve good overall denoising performance, but they lack translation invariance and direction selectivity, and they also weaken the edge information of the image.
In view of this, how to achieve more efficient image super-resolution reconstruction has become an urgent problem for those skilled in the art.
Disclosure of Invention
The invention provides a self-adaptive image super-resolution reconstruction method in which the input image is denoised with a dual-tree wavelet algorithm, the gray levels of the denoised image are extracted to obtain the gray matrix of the image, and singular value decomposition is applied to obtain fractional-order representations of the image; features are then extracted from the fractional-order images of different orders with a multi-scale channel network to obtain detail feature images of the image at each order, and the detail feature images of the different orders are superimposed onto the input image to obtain the super-resolution image.
In order to achieve the above object, the present invention provides a self-adaptive image super-resolution reconstruction method, comprising:
normalizing the input image, and sampling the normalized image twice to obtain two sampled images;
denoising the sampled images with a dual-tree wavelet algorithm to obtain the denoised input image;
extracting the gray levels of the denoised image to obtain the gray matrix of the image, and applying singular value decomposition to obtain fractional-order representations of the image;
extracting features from the fractional-order images of different orders with a multi-scale channel network to obtain detail feature images of the image at each order;
and superimposing the detail feature images of the different orders onto the input image to obtain the super-resolution image.
Optionally, sampling the normalized image twice includes:
normalizing the input images so that all input images have a uniform image size of M×N pixels; in one embodiment of the invention, the image normalization step includes operations such as expansion and rotation, with M = 512 and N = 256;
sampling the normalized image twice to obtain two sampled images, wherein the formula of the image sampling is as follows:
[The four published formula images are not legible in the source; per the description below, they express each sampled image as the inverse Fourier transform of the image spectrum multiplied by the high-pass or low-pass filter of scale j, i.e. formulas of the form IDFT(filter × DFT(image)).]
wherein:
f0 is the normalized image, f1 the first sampled image, and f2 the second sampled image;
DFT(·) denotes the Fourier transform of the image and IDFT(·) the inverse Fourier transform;
the high-pass filter has scale j (its published formula, a function of D, is likewise not legible);
D denotes the distance between the currently filtered frequency sample and the center of the frequency rectangle of the spectrum being filtered;
The normalized image is Fourier transformed, its spectrum is multiplied by the high-pass and low-pass filters respectively, and the inverse Fourier transform of the filtered result gives the first sampled image; repeating these steps on the first sampled image gives the second sampled image.
In a specific embodiment of the invention, different scales are set so that spectral samples whose frequency and amplitude lie above the scale pass through the high-pass filter and samples below the scale pass through the low-pass filter; the multi-scale filter bank thus samples the original image into images of the same size but different scales, where the first sampled image f1 is the low-frequency sampled image corresponding to the smaller scale and the second sampled image f2 is the high-frequency sampled image corresponding to the larger scale.
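The two-pass frequency-domain sampling described above can be sketched in a few lines of numpy. The published filter formulas are not legible in the source, so a Gaussian transfer function is assumed here; the filter shape, the scale values, and the function names are illustrative, not the patent's.

```python
import numpy as np

def gaussian_filters(shape, j):
    """Low-pass L_j and high-pass H_j = 1 - L_j at scale j (assumed
    Gaussian shape).  d2 is the squared distance of each frequency
    sample from the centre of the shifted spectrum, i.e. the quantity
    D in the description above."""
    m, n = shape
    u = np.arange(m) - m / 2.0
    v = np.arange(n) - n / 2.0
    d2 = u[:, None] ** 2 + v[None, :] ** 2
    low = np.exp(-d2 / (2.0 * j ** 2))
    return low, 1.0 - low

def sample_once(img, j):
    """IDFT(filter * DFT(img)): filter the spectrum, transform back."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    low, high = gaussian_filters(img.shape, j)
    f_low = np.real(np.fft.ifft2(np.fft.ifftshift(spec * low)))
    f_high = np.real(np.fft.ifft2(np.fft.ifftshift(spec * high)))
    return f_low, f_high

f0 = np.random.default_rng(0).random((64, 64))   # stand-in normalized image
f1, _ = sample_once(f0, j=8)    # first sampling: low-frequency image
f2, _ = sample_once(f1, j=16)   # repeat on f1: second sampled image
```

Both sampled images keep the original 64×64 size, as the text requires; only the retained frequency band differs between the scales.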
Optionally, denoising the sampled images with the dual-tree wavelet algorithm includes:
1) For the first sampled image f1, pixel values are extracted row by row along the horizontal direction of the image and expressed as a function of position along the x-axis to obtain the image signal F1(t) of f1; for the second sampled image f2, the same is done to obtain the image signal F2(t) of f2; here t is the x-axis position of the pixel whose value is extracted row-wise, so each row yields a signal whose amplitude is the pixel value;
2) The image signal F1(t) undergoes the tree-A wavelet transform of the dual-tree wavelet algorithm: the low-pass filter h0(t) decomposes F1(t) and outputs a low-frequency subband and a high-frequency subband, while the high-pass filters h1(t) and h2(t) decompose it and output two high-frequency subbands; the low-frequency subband is input again to the low-pass filter h0(t), which outputs a low-low subband and a low-high subband, and the high-frequency subbands are input to the high-pass filters h1(t) and h2(t), which output 4 high-high subbands. The low-frequency subbands form the main part of the image, and the high-frequency subbands carry its detail information. The filters are computed as follows:
[The two published filter-formula images are not legible; they define the filters in terms of the quantities below.]
wherein:
j denotes the scale of the filter;
G(t) denotes the distance from the position t of the image signal to the center of the frequency rectangle of the signal;
the formula of the image signal decomposition is as follows:
[The published decomposition-formula image is not legible; it expands F1(t) over the scaling and wavelet functions using the coefficients below.]
wherein:
the scaling coefficients in the filter of scale j represent the decomposed low-frequency subband, i = 0;
the wavelet coefficients in the filter of scale j represent the decomposed high-frequency subbands, i = 1, 2;
the scaling functions and the wavelet functions are indexed by i = 0, 1, 2;
[the published images giving the relationship between the filters and the wavelet functions, and between the wavelet functions and the scaling functions, are likewise not legible].
3) The image signal F2(t) undergoes the tree-B wavelet transform of the dual-tree wavelet algorithm: the low-pass filter g0(t) decomposes F2(t) and outputs a low-frequency subband and a high-frequency subband, while the high-pass filters g1(t) and g2(t) decompose it and output two high-frequency subbands; the low-frequency subband is input again to the low-pass filter g0(t), which outputs a low-low subband and a low-high subband, and the high-frequency subbands are input to the high-pass filters g1(t) and g2(t), which output 4 high-high subbands. The low-frequency subbands form the main part of the image, and the high-frequency subbands carry its detail information;
the formula of the image signal decomposition is as follows:
[The published decomposition-formula image is not legible; it expands F2(t) over the scaling and wavelet functions using the coefficients below.]
wherein:
the scaling coefficients in the filter of scale j represent the decomposed low-frequency subband, i = 0;
the wavelet coefficients in the filter of scale j represent the decomposed high-frequency subbands, i = 1, 2;
the scaling functions and the wavelet functions are indexed by i = 0, 1, 2;
[the published images giving the relationship between the filters and the wavelet functions, and between the wavelet functions and the scaling functions, are likewise not legible].
In one embodiment of the invention, the tree-B wavelet functions are offset from the tree-A wavelet functions by half a sampling period, so that the two sets of wavelet functions form an approximate Hilbert transform pair [the published formula images are not legible].
4) The image subbands obtained by tree A are taken as the real part of the low-frequency information of the image wavelet transform; that is, the low-low subband obtained by tree A is convolved with the low-high subband, and the convolution result is taken as the real part of the low-frequency information of the denoised image;
5) The image subbands obtained by tree B are taken as the imaginary part of the high-frequency information of the image wavelet transform; that is, the high-high subbands obtained by tree B are convolved, and the convolution result is taken as the imaginary part of the high-frequency information of the denoised image;
6) The real low-frequency part and the imaginary high-frequency part obtained by convolution are added to form a reconstructed signal sequence; the length of this sequence is compared with that of the image signal F1(t) and any extension part of the sequence is removed, and the resulting signal sequence is passed through the inverse of the operations of step 1) to form the denoised input image.
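The subband cascade of steps 1) to 6) can be illustrated with a single tree and the simplest orthonormal (Haar) filter pair. This is an assumption for illustration only: the actual dual-tree algorithm runs two such trees whose filters are offset by half a sample, and the patent's filter coefficients are not given.

```python
import numpy as np

def analysis(x):
    """One analysis level: split a 1-D image signal into a low-frequency
    subband (scaling coefficients) and a high-frequency subband (wavelet
    coefficients) by filtering with h0/h1 and downsampling by 2."""
    h0 = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low-pass filter (Haar, assumed)
    h1 = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high-pass filter (Haar, assumed)
    return np.convolve(x, h0)[1::2], np.convolve(x, h1)[1::2]

def tree_decompose(signal):
    """Two-level cascade as in step 2): the low subband is split again
    into low-low / low-high, and the high subband into further bands."""
    lo, hi = analysis(signal)
    lo_lo, lo_hi = analysis(lo)
    hi_lo, hi_hi = analysis(hi)
    return {"LL": lo_lo, "LH": lo_hi, "HL": hi_lo, "HH": hi_hi}

F1 = np.arange(16, dtype=float)   # toy row signal F1(t)
bands = tree_decompose(F1)
```

Because the Haar pair is orthonormal, the four subbands together conserve the energy of the input signal, which makes the split easy to sanity-check.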
Optionally, performing gray-level extraction on the denoised image to obtain the gray matrix of the image includes:
acquiring the RGB color pixel value of each pixel in the denoised image;
converting the RGB color pixel value of each pixel point into a gray value:
Gray(i,j)=0.299×R(i,j)+0.587×G(i,j)+0.114×B(i,j)
wherein:
Gray(i, j) is the gray value of pixel (i, j);
R(i, j) is the red component value of pixel (i, j), G(i, j) the green component, and B(i, j) the blue component;
and constructing an M×N gray matrix Q and filling the gray value of each pixel into the matrix according to the pixel's position.
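Assuming the input is an M×N×3 RGB array, the gray-matrix construction above is a single weighted sum over the color channels (the weights are the coefficients quoted in the conversion formula); the function name is illustrative.

```python
import numpy as np

def gray_matrix(rgb):
    """Build the M x N gray matrix Q from an M x N x 3 RGB image:
    Gray = 0.299*R + 0.587*G + 0.114*B at every pixel position."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

rgb = np.zeros((2, 2, 3))
rgb[0, 0] = [255.0, 0.0, 0.0]   # pure red pixel
rgb[0, 1] = [0.0, 255.0, 0.0]   # pure green pixel
Q = gray_matrix(rgb)            # Q has shape (2, 2)
```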
Optionally, the obtaining a fractional order representation of the image by using singular value decomposition includes:
the gray matrix Q has the following singular value decomposition results:
Q = PΛR^T
wherein:
P is an M×M orthogonal matrix whose columns are the left singular vectors of Q;
Λ is a diagonal matrix, Λ = diag(δ1, δ2, …, δk, 0, …, 0), whose diagonal elements δi are the singular values of Q, i ∈ (0, k], where k is the rank of the matrix Q;
R is an N×N orthogonal matrix whose columns are the right singular vectors of Q;
according to the singular value decomposition result of the gray matrix Q, obtaining fractional order representation of an image:
Q^γ = PΛ^γ R^T
wherein:
γ is the fractional-order parameter, γ ∈ [0, 1].
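The fractional-order representation is the SVD of the gray matrix with the singular values raised to the power γ. A minimal numpy sketch (function name illustrative):

```python
import numpy as np

def fractional_order(Q, gamma):
    """Q^gamma = P * diag(delta_i ** gamma) * R^T, computed from the
    singular value decomposition of the gray matrix Q."""
    P, delta, RT = np.linalg.svd(Q, full_matrices=False)
    return P @ np.diag(delta ** gamma) @ RT

Q = np.array([[4.0, 0.0],
              [0.0, 9.0]])
Q_half = fractional_order(Q, 0.5)   # singular values 4, 9 become 2, 3
Q_full = fractional_order(Q, 1.0)   # gamma = 1 recovers Q itself
```

Setting γ = 1 recovers the original matrix, while intermediate values of γ compress the spread of the singular values, which is what lets the method weaken or strengthen sensitive components of the image.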
Optionally, performing feature extraction on the fractional-order images of different orders with the multi-scale channel network includes:
1) the gray matrix Q is taken as the input of the multi-scale channel network, which performs singular value decomposition on Q and, by adjusting the fractional-order parameter γ, obtains n groups of different fractional-order representations x1, x2, …, xn of the image, where the representation xn corresponds to the fractional-order parameter γn and is computed as:
xn = PΛ^γn R^T
wherein:
P is an M×M orthogonal matrix whose columns are the left singular vectors of Q;
Λ is a diagonal matrix, Λ = diag(δ1, δ2, …, δk, 0, …, 0), whose diagonal elements δi are the singular values of Q, i ∈ (0, k], where k is the rank of the matrix Q;
R is an N×N orthogonal matrix whose columns are the right singular vectors of Q;
2) feature extraction is performed on the fractional-order images of different orders with a feature extraction network to obtain the detail feature images of the image at each order, where the feature extraction formula is:
gn = f(xn)
wherein:
gn denotes the extracted detail feature image of order n; in one embodiment of the invention, the feature extraction network in the multi-scale channel network is a ResNet-101 network model.
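The patent names a ResNet-101 model as the feature-extraction network f(·). As a lightweight stand-in, the sketch below uses a fixed Laplacian high-pass kernel, which likewise maps an image to a detail response; the kernel and function name are assumptions for illustration, not the patent's network.

```python
import numpy as np

def extract_details(x):
    """Stand-in for g_n = f(x_n): a 3x3 Laplacian convolution (valid
    mode) that responds only to local detail, not to flat regions."""
    k = np.array([[0.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 0.0]])
    m, n = x.shape
    out = np.empty((m - 2, n - 2))
    for i in range(m - 2):
        for j in range(n - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
    return out

flat = np.ones((5, 5))                        # a constant image has no detail
g_flat = extract_details(flat)
edge = np.zeros((5, 5)); edge[:, 2:] = 1.0    # vertical step edge
g_edge = extract_details(edge)                # nonzero response at the edge
```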
Optionally, superimposing the detail feature images of the different orders onto the input image includes:
superimposing the detail feature images at each order onto the original input image to obtain the super-resolution image, wherein the superposition formula of the image is:
[The published superposition-formula image is not legible; it combines the detail feature images of all orders with the original input image to yield f′.]
wherein:
{g1, g2, …, gn} denotes the n groups of detail feature images of different orders;
f0 denotes the original input image;
f′ denotes the super-resolution image.
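Since the published superposition formula survives only as an image, a simple additive combination is assumed here: the super-resolution output is the input image plus the sum of the detail feature images. The plain unweighted sum is an assumption.

```python
import numpy as np

def superimpose(f0, details):
    """f' = f0 + sum_n g_n: add the n detail feature images of the
    different fractional orders back onto the original input image
    (unweighted sum assumed)."""
    return f0 + np.sum(details, axis=0)

f0 = np.ones((4, 4))                  # stand-in input image
g = [0.5 * np.ones((4, 4)),           # detail feature image, order 1
     0.25 * np.ones((4, 4))]          # detail feature image, order 2
f_prime = superimpose(f0, np.array(g))
```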
Further, to achieve the above object, the present invention provides a self-adaptive image super-resolution reconstruction device, comprising:
an image acquisition device for acquiring the image to be reconstructed at super-resolution;
an image processor for normalizing the input image, sampling the normalized image twice to obtain two sampled images, denoising the sampled images with the dual-tree wavelet algorithm to obtain the denoised input image, extracting the gray levels of the denoised image to obtain the gray matrix of the image, and applying singular value decomposition to obtain the fractional-order representations of the image;
an image super-resolution reconstruction unit for extracting features from the fractional-order images of different orders with the multi-scale channel network to obtain the detail feature images at each order, and superimposing the detail feature images of the different orders onto the input image to obtain the super-resolution image.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium storing program instructions for image super-resolution reconstruction, executable by one or more processors to implement the steps of the self-adaptive image super-resolution reconstruction method described above.
Compared with the prior art, the present invention provides a self-adaptive image super-resolution reconstruction method with the following advantages:
First, the scheme samples the normalized image twice, where the image sampling formula is as follows:
[The published formula images are not legible; as in the description above, each sampled image is obtained as the inverse Fourier transform of the filtered spectrum, IDFT(filter × DFT(image)), where f0 is the normalized image, f1 the first sampled image, f2 the second sampled image, DFT(·) and IDFT(·) denote the Fourier transform and its inverse, the high-pass filter has scale j, and D is the distance between the currently filtered frequency sample and the center of the frequency rectangle of the spectrum being filtered.]
The normalized image is Fourier transformed, its spectrum is multiplied by the high-pass and low-pass filters respectively, and the inverse Fourier transform of the filtered result gives the first sampled image; repeating these steps on the first sampled image gives the second sampled image. In a specific embodiment of the invention, different scales are set so that spectral samples above the scale pass through the high-pass filter and samples below the scale pass through the low-pass filter, and the multi-scale filter bank samples the original image into images of the same size but different scales: the first sampled image f1 is the low-frequency sampled image corresponding to the smaller scale and contains the main information of the image, while the second sampled image f2 is the high-frequency sampled image corresponding to the larger scale and contains the detail information of the image. Pixel values are extracted row by row along the horizontal direction of f1 and f2 and expressed as functions of position along the x-axis to obtain the image signals F1(t) and F2(t). F1(t) undergoes the tree-A wavelet transform of the dual-tree wavelet algorithm: the low-pass filter h0(t) decomposes F1(t) and outputs a low-frequency subband and a high-frequency subband, the high-pass filters h1(t) and h2(t) output two high-frequency subbands, the low-frequency subband is input again to h0(t), which outputs a low-low subband and a low-high subband, and the high-frequency subbands are input to h1(t) and h2(t), which output 4 high-high subbands; the low-frequency subbands form the main part of the image and the high-frequency subbands its detail information. [The published decomposition-formula images are not legible; the scaling coefficients in the filter of scale j represent the decomposed low-frequency subbands, and the wavelet coefficients represent the decomposed high-frequency subbands.] F2(t) undergoes the tree-B wavelet transform in the same manner with the low-pass filter g0(t) and the high-pass filters g1(t) and g2(t). The subbands obtained by tree A are taken as the real part of the low-frequency information of the wavelet transform (the low-low subband convolved with the low-high subband), the subbands obtained by tree B as the imaginary part of the high-frequency information (the convolved high-high subbands), and image reconstruction from the convolved low-frequency and high-frequency information yields the denoised input image.
Compared with the traditional scheme, this scheme divides the image into a low-frequency sampled image and a high-frequency sampled image and builds trees A and B to filter and denoise the low-frequency and high-frequency information respectively, thereby achieving double denoising of the low-frequency and high-frequency information; the denoising results are fused to form the reconstructed, denoised image.
Meanwhile, the scheme performs feature extraction on fractional-order images of different orders with a multi-scale channel network to obtain the detail feature images of the image at each order. The gray matrix Q is taken as the input of the multi-scale channel network, which performs singular value decomposition on Q and, by adjusting the fractional-order parameter γ, obtains n groups of different fractional-order representations x1, x2, …, xn of the image, the representation xn corresponding to the fractional-order parameter γn and computed as:
xn = PΛ^γn R^T
wherein: P is an M×M orthogonal matrix whose columns are the left singular vectors of Q; Λ = diag(δ1, δ2, …, δk, 0, …, 0) is a diagonal matrix whose diagonal elements δi are the singular values of Q, i ∈ (0, k], with k the rank of the matrix Q; R is an N×N orthogonal matrix whose columns are the right singular vectors of Q. Feature extraction is then performed on the fractional-order images of different orders with a feature extraction network, gn = f(xn), where gn denotes the extracted detail feature image of order n. Compared with the traditional scheme, this scheme introduces the concept of fractional order into image reconstruction: by changing the value of the fractional-order parameter, sensitive components of the image are weakened or strengthened, the robustness of the detail features to environmental information such as illumination is strengthened, and more accurate image detail features are extracted; adjusting the fractional-order parameter yields several groups of different fractional-order representations of the image, so that the detail feature images at the different orders can be superimposed onto the input image to obtain the super-resolution image.
Drawings
Fig. 1 is a schematic flowchart of a self-adaptive image super-resolution reconstruction method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a self-adaptive image super-resolution reconstruction apparatus according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The method denoises the input image with a dual-tree wavelet algorithm, extracts the gray levels of the denoised image to obtain the gray matrix of the image, applies singular value decomposition to obtain fractional-order representations of the image, extracts features from the fractional-order images of different orders with a multi-scale channel network to obtain detail feature images at each order, and superimposes the detail feature images of the different orders onto the input image to obtain the super-resolution image. Fig. 1 is a schematic diagram illustrating the self-adaptive image super-resolution reconstruction method according to an embodiment of the present invention.
In this embodiment, the self-adaptive image super-resolution reconstruction method includes:
and S1, normalizing the input image, and sampling the normalized image twice to obtain two sampled images.
Firstly, acquiring an input image to be subjected to image super-resolution reconstruction, and normalizing the input image to make all the input images have a uniform image size, wherein the normalized image size is M pixels by N pixels, in a specific embodiment of the invention, the image normalization step comprises the steps of performing expansion, rotation and other processing on the image, wherein the value of M is 512, and the value of N is 256;
sampling the normalized image twice to obtain two sampled images, wherein the formula of the image sampling is as follows:
[The four published formula images are not legible in the source; per the description below, they express each sampled image as the inverse Fourier transform of the image spectrum multiplied by the high-pass or low-pass filter of scale j, i.e. formulas of the form IDFT(filter × DFT(image)).]
wherein:
f0 is the normalized image, f1 the first sampled image, and f2 the second sampled image;
DFT(·) denotes the Fourier transform of the image and IDFT(·) the inverse Fourier transform;
the high-pass filter has scale j (its published formula, a function of D, is likewise not legible);
D denotes the distance between the currently filtered frequency sample and the center of the frequency rectangle of the spectrum being filtered;
The normalized image is Fourier transformed, its spectrum is multiplied by the high-pass and low-pass filters respectively, and the inverse Fourier transform of the filtered result gives the first sampled image; repeating these steps on the first sampled image gives the second sampled image.
In a specific embodiment of the invention, different scales are set so that spectral samples whose frequency and amplitude lie above the scale pass through the high-pass filter and samples below the scale pass through the low-pass filter; the multi-scale filter bank thus samples the original image into images of the same size but different scales, where the first sampled image f1 is the low-frequency sampled image corresponding to the smaller scale and the second sampled image f2 is the high-frequency sampled image corresponding to the larger scale.
And S2, denoising the sampling image by using a dual-tree wavelet algorithm to obtain a denoised input image.
Furthermore, the invention utilizes a dual-tree wavelet algorithm to perform denoising processing on the sampling image, and the flow of the dual-tree wavelet algorithm is as follows:
1) pixel values of the first sampled image f1 are extracted row by row along the horizontal direction of the image and expressed as their variation along the x-axis, yielding the image signal F1(t) of f1; pixel values of the second sampled image f2 are extracted row by row along the horizontal direction of the image and expressed as their variation along the x-axis, yielding the image signal F2(t) of f2; here t is the x-axis position of the pixel whose value is extracted by row, i.e. it indexes a sample of the image signal, and the signal amplitude is the pixel value;
2) the image signal F1(t) undergoes the Tree-A wavelet transform of the dual-tree wavelet algorithm: the low-pass filter h0(t) decomposes F1(t) and outputs a low-frequency sub-band and a high-frequency sub-band, while the high-pass filters h1(t) and h2(t) decompose it and output two high-frequency sub-bands; the low-frequency sub-band is then input to the low-pass filter h0(t), which outputs a low-frequency-low-frequency sub-band and a low-frequency-high-frequency sub-band, and the high-frequency sub-bands are input to the high-pass filters h1(t) and h2(t), which output 4 high-frequency-high-frequency sub-bands; the low-frequency sub-band of the image is the main part of the image, and the high-frequency sub-band is the detail-information part of the image; the calculation formula of the filters is as follows:
[Filter formulas shown as images in the original.]
wherein:
j represents the scale of the filter;
g(t) represents the distance from position t of the image signal to the center of the frequency rectangle in the image signal;
the formula of the image signal decomposition is as follows:
[Image-signal decomposition formula shown as an image in the original.]
wherein:
the scaling coefficients in the filter of scale j correspond to the decomposed low-frequency sub-band, i = 0;
the wavelet coefficients in the filter of scale j correspond to the decomposed high-frequency sub-bands, i = 1, 2;
the scaling functions and the wavelet functions are indexed by i = 0, 1, 2; the relationship between the filters and the wavelet functions, and between the wavelet functions and the scale functions, is given by formulas shown as images in the original.
3) the image signal F2(t) undergoes the Tree-B wavelet transform of the dual-tree wavelet algorithm: the low-pass filter g0(t) decomposes F2(t) and outputs a low-frequency sub-band and a high-frequency sub-band, while the high-pass filters g1(t) and g2(t) decompose it and output two high-frequency sub-bands; the low-frequency sub-band is then input to the low-pass filter g0(t), which outputs a low-frequency-low-frequency sub-band and a low-frequency-high-frequency sub-band, and the high-frequency sub-bands are input to the high-pass filters g1(t) and g2(t), which output 4 high-frequency-high-frequency sub-bands; the low-frequency sub-band of the image is the main part of the image, and the high-frequency sub-band is the detail-information part of the image;
the formula of the image signal decomposition is as follows:
[Image-signal decomposition formula shown as an image in the original.]
wherein:
the scaling coefficients in the filter of scale j correspond to the decomposed low-frequency sub-band, i = 0;
the wavelet coefficients in the filter of scale j correspond to the decomposed high-frequency sub-bands, i = 1, 2;
the scaling functions and the wavelet functions are indexed by i = 0, 1, 2; the relationship between the filters and the wavelet functions, and between the wavelet functions and the scale functions, is given by formulas shown as images in the original.
In one embodiment of the invention, the Tree-B wavelet functions are offset from the Tree-A wavelet functions by half a sample (an offset of 1/2), so that the two sets of wavelet functions form an approximate Hilbert transform pair; the corresponding formulas are shown as images in the original.
4) taking the image sub-band obtained by the tree A as the low-frequency information real part of the image wavelet transform, namely, performing convolution on the low-frequency-low-frequency sub-band obtained by the tree A and the low-frequency-high-frequency sub-band, and taking the convolution result as the low-frequency information real part of the noise-reduced image;
5) taking the image sub-band obtained by the tree B as the high-frequency information imaginary part of the image wavelet transform, namely, performing convolution on the high-frequency-high-frequency sub-band obtained by the tree B, and taking the convolution result as the high-frequency information imaginary part of the noise-reduced image;
6) the low-frequency information real part and the high-frequency information imaginary part obtained by convolution are added to form a reconstructed signal sequence; the length of this sequence is compared with that of the image signal F1(t), the extension part of the sequence is removed, and the resulting signal sequence is inverse-transformed according to process 1) to form the noise-reduced input image.
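The denoising stage can be illustrated with a deliberately simplified stand-in. The dual-tree transform proper requires the Hilbert-pair filter banks described above, which the original gives only as images; the sketch below instead uses a single-level Haar decomposition with soft thresholding of the detail sub-band, showing the decompose/threshold/reconstruct pattern without claiming to reproduce the patented algorithm.

```python
import numpy as np

def haar_denoise_rows(img, thresh):
    """Single-level Haar wavelet denoising along image rows.

    A simplified stand-in for the patent's dual-tree wavelet step: each
    row is split into a low-pass (average) and high-pass (difference)
    sub-band, the detail coefficients are soft-thresholded, and the row
    is reconstructed. Requires an even number of columns.
    """
    x = img[:, ::2]
    y = img[:, 1::2]
    low = (x + y) / np.sqrt(2.0)                    # approximation sub-band
    high = (x - y) / np.sqrt(2.0)                   # detail sub-band
    high = np.sign(high) * np.maximum(np.abs(high) - thresh, 0.0)  # soft threshold
    out = np.empty_like(img)
    out[:, ::2] = (low + high) / np.sqrt(2.0)       # inverse Haar transform
    out[:, 1::2] = (low - high) / np.sqrt(2.0)
    return out

noisy = np.random.rand(4, 8)
clean = haar_denoise_rows(noisy, thresh=0.05)
ident = haar_denoise_rows(noisy, thresh=0.0)        # thresh 0 gives perfect reconstruction
```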
And S3, carrying out gray level extraction on the image subjected to noise reduction to obtain a gray level matrix of the image, and obtaining fractional order representation of the image by utilizing singular value decomposition.
Further, the invention performs gray level extraction on the noise-reduced image, and the gray level extraction process comprises:
acquiring the RGB color pixel value of each pixel point in the noise-reduced image;
converting the RGB color pixel value of each pixel point into a gray value:
Gray(i,j)=0.299×R(i,j)+0.587×G(i,j)+0.114×B(i,j)
wherein:
Gray(i, j) is the gray value of the pixel point (i, j);
R(i, j) is the red component value of the pixel point (i, j), G(i, j) is the green component of the pixel point (i, j), and B(i, j) is the blue component of the pixel point (i, j);
constructing an M multiplied by N gray matrix Q, and filling the gray value of each pixel point into the gray matrix according to the position of the pixel point; the gray matrix Q has the following singular value decomposition results:
Q = PΛR^T
wherein:
P is an M×M orthogonal matrix whose columns are the left singular vectors of Q;
Λ is a diagonal matrix, Λ = diag(δ1, δ2, …, δk, 0, …, 0), whose diagonal elements δi are the singular values of Q, i ∈ (0, k], where k is the rank of the matrix Q;
R is an N×N orthogonal matrix whose columns are the right singular vectors of Q;
according to the singular value decomposition result of the gray matrix Q, obtaining fractional order representation of an image:
Q^γ = PΛ^γ R^T
wherein:
γ is the fractional-order parameter, γ ∈ [0, 1].
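The grayscale conversion and the fractional-order SVD representation of step S3 are concrete enough to sketch directly; only the image data below is a placeholder.

```python
import numpy as np

def fractional_representation(gray, gamma):
    """Raise the singular values of the gray matrix to a fractional power.

    Implements Q^gamma = P * Lambda^gamma * R^T from the description;
    gamma in [0, 1] controls how strongly the dominant structure of the
    image is emphasized relative to its fine detail.
    """
    p, s, rt = np.linalg.svd(gray, full_matrices=False)
    return p @ np.diag(s ** gamma) @ rt

rgb = np.random.rand(8, 8, 3)                        # placeholder RGB image
# Gray(i, j) = 0.299 R(i, j) + 0.587 G(i, j) + 0.114 B(i, j)
gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
q_full = fractional_representation(gray, 1.0)        # gamma = 1 reproduces Q
q_half = fractional_representation(gray, 0.5)
```

With γ = 1 the decomposition reconstructs Q exactly, which is a convenient sanity check on the implementation.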
And S4, performing feature extraction on the fractional order images of different orders by using a multi-scale channel network to obtain detailed feature images of the images under different orders.
Further, the invention utilizes a multi-scale channel network to perform feature extraction on the fractional order images with different orders, and the process of performing feature extraction on the fractional order images with different orders by utilizing the multi-scale channel network comprises the following steps:
1) taking the gray matrix Q as the input of the multi-scale channel network, the multi-scale channel network performs singular value decomposition on Q and, by adjusting the fractional order parameter γ, obtains n different fractional order representations x1, x2, …, xn of the image, where xn corresponds to the fractional order parameter γn; the fractional order representation is computed as:
xn = PΛ^(γn) R^T
wherein:
P is an M×M orthogonal matrix whose columns are the left singular vectors of Q;
Λ is a diagonal matrix, Λ = diag(δ1, δ2, …, δk, 0, …, 0), whose diagonal elements δi are the singular values of Q, i ∈ (0, k], where k is the rank of the matrix Q;
R is an N×N orthogonal matrix whose columns are the right singular vectors of Q;
2) carrying out feature extraction on fractional order images of different orders by using a feature extraction network to obtain detail feature images of the images under different orders, wherein the feature extraction formula is as follows:
gn = f(xn)
wherein:
gn represents the extracted detail feature image of order n; in one embodiment of the invention, the feature extraction network in the multi-scale channel network is a ResNet-101 network model.
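A runnable sketch of the multi-order feature extraction: the fractional-order images xn are built exactly as in step 1), but since the ResNet-101 extractor is not reproduced here, a simple Laplacian high-pass stands in for the feature network f(·).

```python
import numpy as np

def laplacian_features(x):
    """Stand-in feature extractor (the patent uses a ResNet-101 model).

    A 4-neighbour Laplacian that responds to fine detail, used here only
    to make the multi-order pipeline runnable end to end.
    """
    g = np.zeros_like(x)
    g[1:-1, 1:-1] = (x[:-2, 1:-1] + x[2:, 1:-1] + x[1:-1, :-2]
                     + x[1:-1, 2:] - 4.0 * x[1:-1, 1:-1])
    return g

def multi_order_features(gray, gammas):
    """Build fractional-order images xn = P Λ^(γn) R^T and extract gn = f(xn)."""
    p, s, rt = np.linalg.svd(gray, full_matrices=False)
    feats = []
    for gamma in gammas:
        x_n = p @ np.diag(s ** gamma) @ rt
        feats.append(laplacian_features(x_n))
    return feats

gray = np.random.rand(16, 16)                        # placeholder gray matrix
features = multi_order_features(gray, gammas=[0.4, 0.7, 1.0])
```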
And S5, overlapping the detail characteristic images of different orders to the input image to obtain a super-resolution image.
Further, the invention superimposes detail feature images under different orders into an original input image to obtain a super-resolution image, wherein the superimposing formula of the image is as follows:
[Superposition formula shown as an image in the original.]
wherein:
{g1, g2, …, gn} represents the n groups of detail feature images of different orders;
f0 represents the original input image;
f′ represents the super-resolution image.
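The final superposition step, under the assumption (the formula itself is shown only as an image in the original) that the detail feature images are combined additively with the input:

```python
import numpy as np

def superimpose(f0, detail_images):
    """Superimpose detail feature images of different orders onto the input.

    A plain additive combination f' = f0 + sum(gn) is assumed here, since
    the patent's superposition formula is given only as an image.
    """
    out = f0.astype(float).copy()
    for g in detail_images:
        out += g
    return out

f0 = np.random.rand(8, 8)                            # placeholder input image
details = [np.full((8, 8), 0.1), np.full((8, 8), 0.2)]
f_prime = superimpose(f0, details)
```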
The following describes embodiments of the present invention through an algorithmic experiment and tests of the proposed processing method. The hardware test environment of the algorithm is an Intel(R) Core(TM) i7-6700K CPU, and the software is Matlab 2018b; the comparison methods are a wavelet-transform-based adaptive image super-resolution reconstruction method and an RNN-based adaptive image super-resolution reconstruction method.
In the algorithmic experiments described in the present invention, the data set consists of 10 GB of low-resolution images. In the experiment, the low-resolution images are input into the algorithm models, and the effectiveness of the adaptive image super-resolution reconstruction is used as the evaluation index of algorithm feasibility: the higher the reconstruction effectiveness, the more effective and feasible the algorithm.
According to the experimental results, the reconstruction effectiveness of the wavelet-transform-based adaptive image super-resolution method is 77.62, that of the LSTM-based adaptive image super-resolution method is 84.12, and that of the method of the invention is 89.26; compared with the contrast algorithms, the adaptive image super-resolution reconstruction method provided by the invention achieves more effective image super-resolution reconstruction.
The invention also provides a self-adaptive image super-resolution reconstruction device. Fig. 2 is a schematic diagram illustrating an internal structure of a self-adaptive image super-resolution reconstruction apparatus according to an embodiment of the present invention.
In the present embodiment, the self-adaptive image super-resolution reconstruction apparatus 1 includes at least an image acquisition apparatus 11, an image processor 12, an image super-resolution reconstruction apparatus 13, a communication bus 14, and a network interface 15.
The image capturing device 11 may be a PC (Personal Computer), a terminal device such as a smart phone, a tablet Computer, a portable Computer, or a camera, or may be a server.
Image processor 12 includes at least one type of readable storage medium including flash memory, a hard disk, a multi-media card, a card-type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like. The image processor 12 may in some embodiments be an internal storage unit of the self-adaptive image super-resolution reconstruction apparatus 1, for example a hard disk of the self-adaptive image super-resolution reconstruction apparatus 1. The image processor 12 may also be an external storage device of the self-adaptive image super-resolution reconstruction apparatus 1 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card), and the like equipped on the self-adaptive image super-resolution reconstruction apparatus 1. Further, the image processor 12 may also include both an internal storage unit and an external storage device of the self-adaptive image super-resolution reconstruction apparatus 1. The image processor 12 may be used not only to store application software installed in the self-adaptive image super-resolution reconstruction apparatus 1 and various types of data, but also to temporarily store data that has been output or is to be output.
The image super-resolution reconstruction device 13 may, in some embodiments, be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip, including a monitoring unit, for running program code stored in the image processor 12 or processing data, such as the image super-resolution reconstruction program instructions 16.
The communication bus 14 is used to enable connection communication between these components.
The network interface 15 may optionally include a standard wired interface, a wireless interface (such as a WI-FI interface), and is generally used to establish a communication connection between the self-adaptive image super-resolution reconstruction apparatus 1 and other electronic devices.
Optionally, the self-adaptive image super-resolution reconstruction apparatus 1 may further include a user interface, the user interface may include a Display (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface may further include a standard wired interface and a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. Here, the display, which may also be referred to as a display screen or a display unit as appropriate, is used for displaying information processed in the self-adaptive image super-resolution reconstruction apparatus 1 and for displaying a user interface for visualization.
Fig. 2 only shows the self-adaptive image super-resolution reconstruction apparatus 1 with the components 11-15; it will be understood by those skilled in the art that the structure shown in Fig. 2 does not constitute a limitation of the self-adaptive image super-resolution reconstruction apparatus 1, which may comprise fewer or more components than those shown, combine some components, or arrange the components differently.
In the self-adaptive image super-resolution reconstruction apparatus 1 embodiment shown in fig. 2, image processor 12 has stored therein image super-resolution reconstruction program instructions 16; the steps of the image super-resolution reconstruction apparatus 13 executing the image super-resolution reconstruction program instructions 16 stored in the image processor 12 are the same as the implementation method of the self-adaptive image super-resolution reconstruction method, and are not described here.
Furthermore, an embodiment of the present invention also provides a computer-readable storage medium having stored thereon image super-resolution reconstruction program instructions executable by one or more processors to implement the following operations:
normalizing the input image, and sampling the normalized image twice to obtain two sampled images;
denoising the sampled image by using a dual-tree wavelet algorithm to obtain a denoised input image;
carrying out gray level extraction on the denoised image to obtain a gray level matrix of the image, and decomposing by utilizing a singular value to obtain fractional order representation of the image;
carrying out feature extraction on fractional order images of different orders by using a multi-scale channel network to obtain detail feature images of the images under different orders;
and overlapping the detail characteristic images under different orders to the input image to obtain a super-resolution image.
It should be noted that the above-mentioned numbers of the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments. And the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, apparatus, article, or method that includes the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (9)

1. An adaptive image super-resolution reconstruction method is characterized by comprising the following steps:
normalizing the input image, and sampling the normalized image twice to obtain two sampled images;
denoising the sampled image by using a dual-tree wavelet algorithm to obtain a denoised input image;
carrying out gray level extraction on the denoised image to obtain a gray level matrix of the image, and decomposing by utilizing a singular value to obtain fractional order representation of the image;
carrying out feature extraction on fractional order images of different orders by using a multi-scale channel network to obtain detail feature images of the images under different orders;
and overlapping the detail characteristic images under different orders to the input image to obtain a super-resolution image.
2. The method for reconstructing the super-resolution image of the self-adaptive image as claimed in claim 1, wherein the sampling twice of the normalized image comprises:
normalizing the input images to make all the input images have a uniform image size, wherein the normalized image size is M pixels by N pixels;
sampling the normalized image twice to obtain two sampled images, wherein the formula of the image sampling is as follows:
[Image sampling formulas shown as images in the original.]
wherein:
f0 is the normalized image, f1 is the image after the first sampling, and f2 is the image after the second sampling;
DFT(·) denotes the Fourier transform of an image, and IDFT(·) denotes the inverse Fourier transform;
Hj denotes the high-pass filter at scale j (the filter formula is given as an image in the original);
D denotes the distance between the current sampled pixel being filtered and the center of the frequency rectangle of the Fourier spectrum being filtered;
The normalized image is Fourier-transformed and multiplied by the high-pass filter and the low-pass filter respectively; the filtered result is inverse-Fourier-transformed to obtain the first sampled image, and repeating these steps on the first sampled image yields the second sampled image.
3. The method for reconstructing the super-resolution of the self-adaptive image as claimed in claim 2, wherein the denoising the sampled image by using the dual-tree wavelet algorithm comprises:
1) pixel values of the first sampled image f1 are extracted row by row along the horizontal direction of the image and expressed as their variation along the x-axis, yielding the image signal F1(t) of f1; pixel values of the second sampled image f2 are extracted row by row along the horizontal direction of the image and expressed as their variation along the x-axis, yielding the image signal F2(t) of f2; here t is the x-axis position of the pixel whose value is extracted by row, i.e. it indexes a sample of the image signal, and the signal amplitude is the pixel value;
2) the image signal F1(t) undergoes the Tree-A wavelet transform of the dual-tree wavelet algorithm: the low-pass filter h0(t) decomposes F1(t) and outputs a low-frequency sub-band and a high-frequency sub-band, while the high-pass filters h1(t) and h2(t) decompose it and output two high-frequency sub-bands; the low-frequency sub-band is then input to the low-pass filter h0(t), which outputs a low-frequency-low-frequency sub-band and a low-frequency-high-frequency sub-band, and the high-frequency sub-bands are input to the high-pass filters h1(t) and h2(t), which output 4 high-frequency-high-frequency sub-bands; the low-frequency sub-band of the image is the main part of the image, and the high-frequency sub-band is the detail-information part of the image; the calculation formula of the filters is as follows:
[Filter formulas shown as images in the original.]
wherein:
j represents the scale of the filter;
g(t) represents the distance from position t of the image signal to the center of the frequency rectangle in the image signal;
the formula of the image signal decomposition is as follows:
[Image-signal decomposition formula shown as an image in the original.]
wherein:
the scaling coefficients in the filter of scale j correspond to the decomposed low-frequency sub-band, i = 0;
the wavelet coefficients in the filter of scale j correspond to the decomposed high-frequency sub-bands, i = 1, 2;
the scaling functions and the wavelet functions are indexed by i = 0, 1, 2; the relationship between the filters and the wavelet functions, and between the wavelet functions and the scale functions, is given by formulas shown as images in the original.
3) the image signal F2(t) undergoes the Tree-B wavelet transform of the dual-tree wavelet algorithm: the low-pass filter g0(t) decomposes F2(t) and outputs a low-frequency sub-band and a high-frequency sub-band, while the high-pass filters g1(t) and g2(t) decompose it and output two high-frequency sub-bands; the low-frequency sub-band is then input to the low-pass filter g0(t), which outputs a low-frequency-low-frequency sub-band and a low-frequency-high-frequency sub-band, and the high-frequency sub-bands are input to the high-pass filters g1(t) and g2(t), which output 4 high-frequency-high-frequency sub-bands; the low-frequency sub-band of the image is the main part of the image, and the high-frequency sub-band is the detail-information part of the image;
the formula of the image signal decomposition is as follows:
[Image-signal decomposition formula shown as an image in the original.]
wherein:
the scaling coefficients in the filter of scale j correspond to the decomposed low-frequency sub-band, i = 0;
the wavelet coefficients in the filter of scale j correspond to the decomposed high-frequency sub-bands, i = 1, 2;
the scaling functions and the wavelet functions are indexed by i = 0, 1, 2; the relationship between the filters and the wavelet functions, and between the wavelet functions and the scale functions, is given by formulas shown as images in the original.
The Tree-B wavelet functions are offset from the Tree-A wavelet functions by half a sample (an offset of 1/2), so that the two sets of wavelet functions form an approximate Hilbert transform pair; the corresponding formulas are shown as images in the original.
4) taking the image sub-band obtained by the tree A as the low-frequency information real part of the image wavelet transform, namely, performing convolution on the low-frequency-low-frequency sub-band obtained by the tree A and the low-frequency-high-frequency sub-band, and taking the convolution result as the low-frequency information real part of the noise-reduced image;
5) taking the image sub-band obtained by the tree B as the high-frequency information imaginary part of the image wavelet transform, namely, performing convolution on the high-frequency-high-frequency sub-band obtained by the tree B, and taking the convolution result as the high-frequency information imaginary part of the noise-reduced image;
6) the low-frequency information real part and the high-frequency information imaginary part obtained by convolution are added to form a reconstructed signal sequence; the length of this sequence is compared with that of the image signal F1(t), the extension part of the sequence is removed, and the resulting signal sequence is inverse-transformed according to process 1) to form the noise-reduced input image.
4. The method for reconstructing the super-resolution image of an adaptive image as claimed in claim 3, wherein the performing gray scale extraction on the noise-reduced image to obtain a gray scale matrix of the image comprises:
acquiring the RGB color pixel value of each pixel point in the noise-reduced image;
converting the RGB color pixel value of each pixel point into a gray value:
Gray(i,j)=0.299×R(i,j)+0.587×G(i,j)+0.114×B(i,j)
wherein:
Gray(i, j) is the gray value of the pixel point (i, j);
R(i, j) is the red component value of the pixel point (i, j), G(i, j) is the green component of the pixel point (i, j), and B(i, j) is the blue component of the pixel point (i, j);
and constructing an M multiplied by N gray matrix Q, and filling the gray value of each pixel point into the gray matrix according to the position of the pixel point.
5. The method for reconstructing the super-resolution image of the self-adaptive image as claimed in claim 4, wherein the obtaining the fractional order representation of the image by using the singular value decomposition comprises:
the gray matrix Q has the following singular value decomposition results:
Q = PΛR^T
wherein:
P is an M×M orthogonal matrix whose columns are the left singular vectors of Q;
Λ is a diagonal matrix, Λ = diag(δ1, δ2, …, δk, 0, …, 0), whose diagonal elements δi are the singular values of Q, i ∈ (0, k], where k is the rank of the matrix Q;
R is an N×N orthogonal matrix whose columns are the right singular vectors of Q;
according to the singular value decomposition result of the gray matrix Q, obtaining fractional order representation of an image:
Q^γ = PΛ^γ R^T
wherein:
γ is the fractional-order parameter, γ ∈ [0, 1].
6. The method for reconstructing the super-resolution of the self-adaptive image as claimed in claim 5, wherein the extracting the features of the fractional order images of different orders by using the multi-scale channel network comprises:
1) taking the gray matrix Q as the input of the multi-scale channel network, the multi-scale channel network performs singular value decomposition on Q and, by adjusting the fractional order parameter γ, obtains n different fractional order representations x1, x2, …, xn of the image, where xn corresponds to the fractional order parameter γn; the fractional order representation is computed as:
xn = PΛ^(γn) R^T
wherein:
P is an M×M orthogonal matrix whose columns are the left singular vectors of Q;
Λ is a diagonal matrix, Λ = diag(δ1, δ2, …, δk, 0, …, 0), whose diagonal elements δi are the singular values of Q, i ∈ (0, k], where k is the rank of the matrix Q;
R is an N×N orthogonal matrix whose columns are the right singular vectors of Q;
2) carrying out feature extraction on fractional order images of different orders by using a feature extraction network to obtain detail feature images of the images under different orders, wherein the feature extraction formula is as follows:
gn = f(xn)
wherein:
gn represents the extracted detail feature image of order n.
7. The method for reconstructing the super resolution of the self-adaptive image as claimed in claim 6, wherein the step of superimposing the detail feature images at different orders on the input image comprises:
overlapping detail characteristic images under different orders to an original input image to obtain a super-resolution image, wherein the overlapping formula of the image is as follows:
[Superposition formula shown as an image in the original.]
wherein:
{g1, g2, …, gn} represents the n groups of detail feature images of different orders;
f0 represents the original input image;
f′ represents the super-resolution image.
8. A self-adaptive image super-resolution reconstruction apparatus, comprising:
an image acquisition device, configured to acquire an image to undergo image super-resolution reconstruction;
an image processor, configured to normalize the input image, sample the normalized image twice to obtain two sampled images, denoise the sampled images with a dual-tree wavelet algorithm to obtain a denoised input image, perform gray-level extraction on the denoised image to obtain the gray matrix of the image, and obtain the fractional-order representation of the image by singular value decomposition;
an image super-resolution reconstruction device, configured to perform feature extraction on the fractional-order images of different orders with the multi-scale channel network to obtain detail feature images of the image at different orders, and to superimpose the detail feature images at different orders on the input image to obtain the super-resolution image.
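The image-processor stage of claim 8 can be sketched as a small preprocessing function. The name `preprocess` and the luminance weights are assumptions; the dual-tree complex wavelet denoising step (for which the third-party `dtcwt` package exists) is deliberately stubbed out rather than invented:

```python
import numpy as np

def preprocess(img_rgb):
    """Sketch of claim 8's image processor: normalize to [0, 1] and
    extract the gray matrix Q. Dual-tree wavelet denoising is omitted."""
    img = img_rgb.astype(float)
    # normalization to [0, 1]
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    # gray-level extraction via standard luminance weights (assumed)
    gray = img @ np.array([0.299, 0.587, 0.114])
    # dual-tree complex wavelet denoising would be applied here
    return gray
```

The returned gray matrix Q is what the multi-scale channel network of claim 6 decomposes by SVD.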
9. A computer-readable storage medium having stored thereon image super-resolution reconstruction program instructions executable by one or more processors to implement the steps of the self-adaptive image super-resolution reconstruction method described above.
CN202110912643.3A 2021-08-10 2021-08-10 Self-adaptive image super-resolution reconstruction method and device Active CN113436078B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110912643.3A CN113436078B (en) 2021-08-10 2021-08-10 Self-adaptive image super-resolution reconstruction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110912643.3A CN113436078B (en) 2021-08-10 2021-08-10 Self-adaptive image super-resolution reconstruction method and device

Publications (2)

Publication Number Publication Date
CN113436078A true CN113436078A (en) 2021-09-24
CN113436078B CN113436078B (en) 2022-03-15

Family

ID=77763088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110912643.3A Active CN113436078B (en) 2021-08-10 2021-08-10 Self-adaptive image super-resolution reconstruction method and device

Country Status (1)

Country Link
CN (1) CN113436078B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114528871A (en) * 2022-01-15 2022-05-24 中国电子科技集团公司第二十研究所 Noise reduction method using fractional wavelet decomposition and reconstruction technology
CN115409715A (en) * 2022-11-01 2022-11-29 北京科技大学 Hodges-Lehmann-based fire-fighting dangerous goods image super-sorting method and device
CN114528871B (en) * 2022-01-15 2024-04-19 中国电子科技集团公司第二十研究所 Noise reduction method using fractional wavelet decomposition and reconstruction technology

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050259889A1 (en) * 2004-05-21 2005-11-24 Ferrari Ricardo J De-noising digital radiological images
US20090097721A1 (en) * 2004-12-17 2009-04-16 Cambridge University Technical Services Limited Method of identifying features within a dataset
US20130114906A1 (en) * 2011-10-04 2013-05-09 Imagination Technologies Limited Detecting image impairments in an interpolated image
EP3028240A1 (en) * 2013-07-31 2016-06-08 Digitalglobe, Inc. Automatic generation of built-up layers from high resolution satellite image data
CN107609530A (en) * 2017-09-25 2018-01-19 常州工学院 One kind is with brill orientation electromagnetic resistivity imaging features extracting method
CN107896301A (en) * 2017-12-15 2018-04-10 诺华视创电影科技(江苏)股份有限公司 Panoramic scanning device and panoramic scanning method for movie and television play
CN108038438A (en) * 2017-12-06 2018-05-15 广东世纪晟科技有限公司 A kind of multi-source facial image union feature extracting method based on singular value decomposition
CN109447903A (en) * 2018-10-17 2019-03-08 江苏商贸职业学院 A kind of method for building up of half reference type super-resolution reconstruction image quality evaluation model
CN109829884A (en) * 2018-12-21 2019-05-31 广东电网有限责任公司 A kind of Infrared Image Features vector extracting method based on singular value decomposition
CN110717858A (en) * 2019-10-09 2020-01-21 济源职业技术学院 Image preprocessing method and device under low-illuminance environment
CN111275655A (en) * 2020-01-20 2020-06-12 上海理工大学 Multi-focus multi-source image fusion method
CN111798396A (en) * 2020-07-01 2020-10-20 中通服咨询设计研究院有限公司 Multifunctional image processing method based on wavelet transformation
CN112215787A (en) * 2020-04-30 2021-01-12 温州大学智能锁具研究院 Infrared and visible light image fusion method based on significance analysis and adaptive filter
CN113160047A (en) * 2020-11-23 2021-07-23 南京邮电大学 Single image super-resolution method based on multi-scale channel attention mechanism


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JUNAID AHMED et al.: "Coupled dictionary learning in wavelet domain for Single-Image Super-Resolution", Signal, Image and Video Processing *
PANG JIYONG: "Research on image fusion and super-resolution reconstruction algorithms in the NSDTCT domain", China Master's Theses Full-text Database (Information Science and Technology) *


Also Published As

Publication number Publication date
CN113436078B (en) 2022-03-15

Similar Documents

Publication Publication Date Title
Liu et al. Multi-level wavelet convolutional neural networks
Yang et al. BM3D-Net: A convolutional neural network for transform-domain collaborative filtering
Mahmood et al. A robust technique for copy-move forgery detection and localization in digital images via stationary wavelet and discrete cosine transform
Brifman et al. Turning a denoiser into a super-resolver using plug and play priors
Bnou et al. A wavelet denoising approach based on unsupervised learning model
CN110276726B (en) Image deblurring method based on multichannel network prior information guidance
Salmon et al. From patches to pixels in non-local methods: Weighted-average reprojection
Lyu et al. A nonsubsampled countourlet transform based CNN for real image denoising
Briand et al. The Heeger & Bergen pyramid based texture synthesis algorithm
Kuo et al. Improved visual information fidelity based on sensitivity characteristics of digital images
Witwit et al. Satellite image resolution enhancement using discrete wavelet transform and new edge-directed interpolation
CN113436078B (en) Self-adaptive image super-resolution reconstruction method and device
Witwit et al. Global motion based video super-resolution reconstruction using discrete wavelet transform
Feng et al. Single‐image super‐resolution with total generalised variation and Shearlet regularisations
CN111968073B (en) No-reference image quality evaluation method based on texture information statistics
CN112396564A (en) Product packaging quality detection method and system based on deep learning
CN110428402B (en) Image tampering identification method and device, computer equipment and storage medium
Wang et al. Image enhancement
CN111192204A (en) Image enhancement method, system and computer readable storage medium
Khan et al. Hybrid sharpening transformation approach for multifocus image fusion using medical and nonmedical images
Prager et al. Image enhancement and filtering using wavelets
US8041124B2 (en) Identifying computer graphics from digital photographs
Türkan et al. Self-content super-resolution for ultra-hd up-sampling
CN113674144A (en) Image processing method, terminal equipment and readable storage medium
CN113902618B (en) Image super-resolution algorithm based on multi-modal spatial filtering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant