CN106327428B - Image super-resolution method and system based on transfer learning - Google Patents

Image super-resolution method and system based on transfer learning

Info

Publication number
CN106327428B
Authority
CN
China
Prior art keywords
resolution
low
image
resolution image
neighborhood
Prior art date
Legal status
Expired - Fee Related
Application number
CN201610791756.1A
Other languages
Chinese (zh)
Other versions
CN106327428A (en)
Inventor
苏美
钟圣华
江健民
Current Assignee
Shenzhen University
Original Assignee
Shenzhen University
Priority date
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN201610791756.1A priority Critical patent/CN106327428B/en
Publication of CN106327428A publication Critical patent/CN106327428A/en
Application granted granted Critical
Publication of CN106327428B publication Critical patent/CN106327428B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution

Abstract

The invention is applicable to the technical field of computers, and provides an image super-resolution method and system based on transfer learning. The method comprises the following steps: obtaining a low-resolution image by down-sampling a high-resolution image, the high-resolution image comprising an original image and a migration image; extracting feature pairs of the high-resolution image and the low-resolution image from the high-resolution image and the low-resolution image; for the feature of each low-resolution image, calculating a low-resolution neighborhood and a corresponding high-resolution neighborhood to obtain a projection matrix; and, during reconstruction, forming a reconstructed high-resolution image from the features of the low-resolution image and the projection matrix. Adding migration images makes the training database richer and provides favorable conditions for subsequent image reconstruction.

Description

Image super-resolution method and system based on transfer learning
Technical Field
The invention belongs to the technical field of computers, and particularly relates to an image super-resolution method and system based on transfer learning.
Background
In the field of image applications, it is often desirable to obtain high-resolution (HR) images, in which the pixels are dense and provide more detail; this is essential in many practical applications. Image super-resolution (SR), the process of generating a high-resolution (HR) image from an input low-resolution (LR) image, is widely used in many fields, including computer vision, video surveillance and remote sensing, and breaks through the limitations of imaging equipment and environment, producing high-resolution images that cannot be acquired by a conventional digital camera. For these reasons, many image super-resolution methods have been developed over the past several decades and have achieved significant results.
The existing learning-based image super-resolution methods have some defects: the training database used is too small compared with those of other image processing tasks, and such a small training database cannot provide sufficient information, especially for test images of a particular type.
Disclosure of Invention
The invention aims to provide an image super-resolution method and system based on transfer learning, so as to solve the problem in the prior art that a small training database degrades the reconstruction effect.
In one aspect, the invention provides an image super-resolution method based on transfer learning, which comprises the following steps:
Obtaining a low-resolution image by down-sampling a high-resolution image, the high-resolution image comprising: an original image and a migration image;
Extracting feature pairs of the high-resolution image and the low-resolution image according to the high-resolution image and the low-resolution image;
For the characteristics of each low-resolution image, calculating a low-resolution neighborhood and a corresponding high-resolution neighborhood to obtain a projection matrix;
And forming a reconstructed high-resolution image according to the characteristics of the low-resolution image and the projection matrix during reconstruction.
In another aspect, the present invention provides a system for super-resolution of images based on transfer learning, the system comprising:
A low resolution image obtaining unit configured to obtain a low resolution image by down-sampling a high resolution image, the high resolution image including: an original image and a migration image;
A feature pair extraction unit configured to extract a feature pair of a high resolution image and a low resolution image from the high resolution image and the low resolution image;
The projection matrix obtaining unit is used for obtaining a projection matrix by calculating a low-resolution neighborhood and a corresponding high-resolution neighborhood for the characteristics of each low-resolution image; and
And the high-resolution image reconstruction unit is used for forming a reconstructed high-resolution image according to the characteristics of the low-resolution image and the projection matrix during reconstruction.
The embodiment of the invention is based on transfer learning: on the basis of the original images, migration images are randomly selected from other fields and, together with them, used as high-resolution images; low-resolution images are obtained through down-sampling; the feature pairs of the high-resolution images and the low-resolution images are extracted; and projection matrices are obtained from the low-resolution neighborhoods and the corresponding high-resolution neighborhoods, so that a reconstructed high-resolution image is formed during reconstruction. Adding the migration images makes the training database richer and provides favorable conditions for subsequent image reconstruction.
Drawings
Fig. 1 is a flowchart of an implementation of a method for super-resolution of an image based on transfer learning according to an embodiment of the present invention;
Fig. 2 is a flowchart illustrating an implementation of acquiring a low-resolution image in the image super-resolution method based on transfer learning according to an embodiment of the present invention;
Fig. 3 is a flowchart illustrating an implementation of feature pair acquisition in the image super-resolution method based on transfer learning according to an embodiment of the present invention;
Fig. 4 is a flowchart illustrating an implementation of obtaining a projection matrix in the image super-resolution method based on transfer learning according to an embodiment of the present invention;
Fig. 5 is a flowchart of an implementation of reconstructing an image in the image super-resolution method based on transfer learning according to an embodiment of the present invention; and
Fig. 6 is a block diagram of a super-resolution image system based on transfer learning according to a second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The following detailed description of specific implementations of the present invention is provided in conjunction with specific embodiments:
The first embodiment is as follows:
Fig. 1 shows a flowchart for implementing a method for super-resolution of images based on transfer learning according to an embodiment of the present invention, and for convenience of description, only the parts related to the embodiment of the present invention are shown, which is detailed as follows:
In step S101, a low-resolution image is obtained by down-sampling a high-resolution image, the high-resolution image including: an original image and a migration image.
In the embodiment of the invention, image super-resolution requires the establishment of a training database, which acquires migration images from other fields on the basis of the original images T_0. The transfer-learning-based method uses knowledge learned in one environment to help the learning task in a new environment, and can migrate information from different fields into a specific field. Therefore, the invention randomly selects images T_t from fields different from the field to which the original images belong as the migration images, so that the training database consists of the original images T_0 and the migration images T_t. The images composed of the original images T_0 and the migration images T_t are used as high-resolution images, and the low-resolution images are obtained by down-sampling the high-resolution images.
Further, Fig. 2 shows a flowchart of an implementation of acquiring a low-resolution image in the image super-resolution method based on transfer learning according to an embodiment of the present invention, which is detailed as follows:
In step S201, an original image and a migration image, which is an image selected at random from a field different from the field to which the original image belongs, are acquired as high-resolution images.
In the embodiment of the invention, the transfer-learning-based method enriches the training database by taking images randomly selected from fields different from the field to which the original images belong as migration images, and uses the images consisting of the original images T_0 and the migration images T_t as the high-resolution images.
In step S202, the high-resolution image is converted into a YCbCr space image.
In the embodiment of the present invention, the YCbCr space image is one of color space images, and Y in the YCbCr space image refers to a luminance component, Cb refers to a blue chrominance component, and Cr refers to a red chrominance component. Since the human visual system is sensitive to intensity variations of high frequencies rather than color variations of high frequencies, the color channel variations are negligible for super-resolution, and only the luminance component may be calculated when converting to a YCbCr spatial image in order to reduce the amount of computation.
In step S203, the high-resolution image is sequentially down-sampled and interpolated according to a preset amplification factor to obtain a low-resolution image.
In the embodiment of the invention, a corresponding low-resolution image is obtained from the high-resolution image by down-sampling with a preset amplification factor, and further, the low-resolution image is subjected to bicubic interpolation with the same preset amplification factor to obtain an interpolated high-resolution image. These interpolated high resolution images are also referred to as low resolution images because the high frequency information is lost.
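For illustration only, the following Python sketch shows how a training pair could be generated along the lines of steps S201–S203: the image is converted to YCbCr, only the luminance channel is kept, and it is down-sampled and then bicubic-interpolated back to the original size by a preset magnification factor. The use of Pillow, the factor of 3 and the function name make_lr_pair are assumptions of this sketch, not requirements of the patent.

```python
# Illustrative sketch of steps S201-S203 (assumed library: Pillow; assumed
# magnification factor: 3; neither is fixed by the patent).
import numpy as np
from PIL import Image

def make_lr_pair(hr_path, scale=3):
    """Return (hr_y, lr_y): the luminance of a high-resolution image and its
    'low-resolution' counterpart, bicubic-interpolated back to the same size."""
    ycbcr = Image.open(hr_path).convert("YCbCr")
    y, cb, cr = ycbcr.split()                   # only the Y (luminance) channel is used
    w, h = y.size
    w, h = w - w % scale, h - h % scale         # crop so the size divides evenly
    y = y.crop((0, 0, w, h))
    small = y.resize((w // scale, h // scale), Image.BICUBIC)   # down-sample
    lr_y = small.resize((w, h), Image.BICUBIC)                  # interpolate back
    return np.asarray(y, dtype=np.float64), np.asarray(lr_y, dtype=np.float64)
```

In the full pipeline, such pairs would be generated for both the original images T_0 and the migration images T_t to build the training database.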
In step S102, feature pairs of the high-resolution image and the low-resolution image are extracted from the high-resolution image and the low-resolution image.
In the embodiment of the invention, after the low-resolution image is obtained, the high-resolution image and the low-resolution image are respectively subjected to gridding decomposition, and the feature pairs of the high-resolution image and the low-resolution image are extracted.
Further, Fig. 3 shows a flowchart of implementing the feature pair acquisition in the image super-resolution method based on transfer learning according to the first embodiment of the present invention, which is detailed as follows:
In step S301, a preset number of high-pass filters are used to filter the low-resolution image, so as to obtain a filtered low-resolution image.
In the embodiment of the present invention, a preset number of filtered low-resolution images are formed by applying a preset number of high-pass filters to all low-resolution images, in order to extract local features and the corresponding high-frequency content.
In step S302, the filtered low-resolution image and the filtered high-resolution image are respectively subjected to gridding, and feature pairs of the high-resolution image and the low-resolution image are extracted.
In the embodiment of the invention, small fragments of the low-resolution images are extracted from the filtered low-resolution images by a gridding method, and the small fragments of the low-resolution images are combined into the features of the low-resolution images; similarly, small patches of the high-resolution image are extracted at the same position by the gridding method for the high-resolution image, and thus, feature pairs of the high-resolution image and the low-resolution image are extracted.
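The patent does not specify which high-pass filters are used; first- and second-order gradient filters, as in the cited Zeyde et al. work, are a common choice in this literature. The sketch below assumes those filters, a 9×9 patch with a step of 3, and raw high-resolution patches as the high-resolution features; all of these are illustrative assumptions rather than requirements of the method.

```python
# Illustrative sketch of steps S301-S302; the four gradient filters, the
# 9x9 patch size and the step of 3 are assumptions, not patent requirements.
import numpy as np
from scipy.signal import convolve2d

FILTERS = [np.array([[-1, 0, 1]]),             # horizontal 1st-order gradient
           np.array([[-1, 0, 1]]).T,           # vertical   1st-order gradient
           np.array([[1, 0, -2, 0, 1]]),       # horizontal 2nd-order gradient
           np.array([[1, 0, -2, 0, 1]]).T]     # vertical   2nd-order gradient

def extract_feature_pairs(hr_y, lr_y, patch=9, step=3):
    """Grid both images and return (lr_feats, hr_feats) with samples as columns."""
    maps = [convolve2d(lr_y, f, mode="same") for f in FILTERS]   # filtered LR images
    lr_feats, hr_feats = [], []
    for r in range(0, hr_y.shape[0] - patch + 1, step):
        for c in range(0, hr_y.shape[1] - patch + 1, step):
            # low-resolution feature: the filter responses over the patch, stacked
            lr_feats.append(np.concatenate(
                [m[r:r + patch, c:c + patch].ravel() for m in maps]))
            # high-resolution feature: the HR patch at the same position
            hr_feats.append(hr_y[r:r + patch, c:c + patch].ravel())
    return np.array(lr_feats).T, np.array(hr_feats).T
```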
In step S103, for the feature of each low-resolution image, a projection matrix is obtained by calculating a low-resolution neighborhood and a corresponding high-resolution neighborhood.
In the embodiment of the invention, according to the feature pairs of the high-resolution image and the low-resolution image, a low-resolution neighborhood and a corresponding high-resolution neighborhood are obtained, and a projection matrix is calculated.
Further, Fig. 4 shows a flowchart of an implementation of acquiring a projection matrix in the image super-resolution method based on transfer learning according to an embodiment of the present invention, which is detailed as follows:
In step S401, a low-resolution sparse dictionary and a corresponding high-resolution sparse dictionary are constructed.
In the embodiment of the invention, a dictionary training method is used to obtain a low-resolution sparse dictionary D_L, and a sparse representation vector δ is obtained through the low-resolution sparse dictionary D_L. The obtaining formula is as follows:
δ = arg min_(δ) ||x − D_L δ||_2^2 + λ||δ||_1
where δ is the sparse representation vector, x is the feature of the low-resolution image, and λ is a weighting factor.
Assuming that the coefficients of the gridding decomposition of the small patches of the high-resolution image and the low-resolution image are the same, a high-resolution sparse dictionary D_H corresponding to the low-resolution sparse dictionary D_L is constructed. The goal is to recover the feature y of the high-resolution image, whose approximate value is y ≈ D_H δ. The acquisition formula of the high-resolution sparse dictionary D_H is:
D_H = arg min_(D_H) ||y − D_H δ||^2
where δ is the sparse representation vector, y is the feature of the high-resolution image, and the solution of the above formula is:
D_H = y δ^T (δ δ^T)^(-1)
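As an illustrative sketch of step S401: the patent does not name the dictionary training method, so scikit-learn's MiniBatchDictionaryLearning is used below purely as a stand-in to obtain D_L and the sparse codes δ, after which D_H is computed with the closed form D_H = y δ^T (δ δ^T)^(-1) given above (a pseudo-inverse is used for numerical safety). The number of atoms and the weighting factor are assumed values.

```python
# Illustrative sketch of step S401; MiniBatchDictionaryLearning stands in for
# the unspecified dictionary training method, and n_atoms / lam are assumed.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def train_dictionaries(lr_feats, hr_feats, n_atoms=1024, lam=0.1):
    """lr_feats: (dim_l, n), hr_feats: (dim_h, n), columns are paired samples.
    Returns D_L (dim_l, K), D_H (dim_h, K) and the sparse codes delta (K, n)."""
    learner = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=lam,
                                          transform_algorithm="lasso_lars",
                                          transform_alpha=lam)
    delta = learner.fit_transform(lr_feats.T).T          # sparse representation vectors
    D_L = learner.components_.T                          # low-resolution dictionary
    # Closed form from above: D_H = y * delta^T * (delta * delta^T)^(-1);
    # a pseudo-inverse is used in case delta*delta^T is rank deficient.
    D_H = hr_feats @ delta.T @ np.linalg.pinv(delta @ delta.T)
    return D_L, D_H, delta
```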
After constructing the low-resolution sparse dictionary D_L and the corresponding high-resolution sparse dictionary D_H, a domain for regression needs to be generated by using the samples in the training database.
In step S402, for the feature of each low-resolution image, its dictionary atom in the low-resolution sparse dictionary is obtained so as to derive a low-resolution neighborhood and a corresponding high-resolution neighborhood, where the dictionary atom is the atom in the low-resolution sparse dictionary closest to the feature of the low-resolution image.
For each low-resolution image feature x_i, its nearest neighbor in the low-resolution sparse dictionary, namely dictionary atom d_k, is calculated. The low-resolution neighborhood N_L,k contains the m training samples closest to the dictionary atom corresponding to feature x_i, and the high-resolution neighborhood corresponding to N_L,k is N_H,k.
In step S403, a projection matrix corresponding to the dictionary atom is obtained through the low-resolution neighborhood and the corresponding high-resolution neighborhood.
In the embodiment of the present invention, a coefficient vector γ is set, and its calculation formula is as follows:
γ = (N_L,k^T N_L,k + λI)^(-1) N_L,k^T x_i
where γ is the coefficient vector, N_L,k is the low-resolution neighborhood, and x_i is the feature of the low-resolution image.
From this, a small patch y_H,i of the high-resolution image can be reconstructed, which is calculated as:
y_H,i = N_H,k (N_L,k^T N_L,k + λI)^(-1) N_L,k^T x_i
where k = 1, 2, …, K, K being the number of dictionary atoms.
Further, the projection matrix P_k is obtained as P_k = N_H,k (N_L,k^T N_L,k + λI)^(-1) N_L,k^T.
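A minimal sketch of steps S402–S403 follows, assuming Euclidean distance for the nearest-sample search and illustrative values for the neighborhood size m and the weighting factor λ (the patent leaves both open): for every dictionary atom d_k, the m closest training samples form N_L,k and N_H,k, and P_k is precomputed with the formula above.

```python
# Illustrative sketch of steps S402-S403; the neighborhood size m, the
# weighting factor lam and the Euclidean nearest-sample search are assumptions.
import numpy as np

def build_projection_matrices(D_L, lr_feats, hr_feats, m=2048, lam=0.1):
    """Return one projection matrix P_k per dictionary atom d_k.
    D_L: (dim_l, K); lr_feats: (dim_l, n); hr_feats: (dim_h, n)."""
    m = min(m, lr_feats.shape[1])
    projections = []
    for k in range(D_L.shape[1]):
        # the m training samples closest to atom d_k form the neighborhoods
        dist = np.linalg.norm(lr_feats - D_L[:, [k]], axis=0)
        idx = np.argsort(dist)[:m]
        N_L, N_H = lr_feats[:, idx], hr_feats[:, idx]
        # P_k = N_H,k (N_L,k^T N_L,k + lam*I)^(-1) N_L,k^T
        A = N_L.T @ N_L + lam * np.eye(m)
        projections.append(N_H @ np.linalg.solve(A, N_L.T))
    return projections
```

Precomputing one P_k per atom moves the heavy matrix inversions offline, so reconstruction only needs a nearest-atom lookup and one matrix–vector product per patch.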
In step S104, a reconstructed high-resolution image is formed from the features of the low-resolution image and the projection matrix at the time of reconstruction.
In the embodiment of the invention, in the reconstruction stage, for a given test image, the features x_T,j of the low-resolution image are first extracted from the given test image; the extraction process is the same as that used in training. For each feature x_T,j, its nearest dictionary atom d_k in the low-resolution sparse dictionary D_L is then found, and the corresponding projection matrix P_k is used to reconstruct the high-resolution image fragment.
Further, Fig. 5 shows a flowchart of implementing image reconstruction in the image super-resolution method based on transfer learning according to an embodiment of the present invention, which is detailed as follows:
In step S501, during reconstruction, features of the low-resolution image are extracted, and dictionary atoms corresponding to the low-resolution sparse dictionary are acquired.
In step S502, a projection matrix corresponding to a dictionary atom is acquired from the dictionary atom.
In step S503, a high resolution image fragment is obtained according to the feature of the low resolution image and the projection matrix.
In the embodiment of the present invention, the calculation formula of the high-resolution image fragment y_T,j is y_T,j = P_k x_T,j.
In step S504, the high-resolution image patches are combined to form a reconstructed high-resolution image.
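A minimal sketch of steps S501–S504 is given below, assuming that overlapping fragments are combined by averaging (the patent only says the fragments are combined) and that a helper extract_lr_feature reproduces the training-time feature extraction; both the helper name and the averaging are assumptions of this sketch.

```python
# Illustrative sketch of steps S501-S504; averaging the overlapping fragments
# and the helper extract_lr_feature are assumptions of this sketch.
import numpy as np

def reconstruct(lr_y, D_L, projections, extract_lr_feature, patch=9, step=3):
    """lr_y: interpolated low-resolution luminance of the test image.
    extract_lr_feature(lr_y, r, c) must return the same kind of feature vector
    used during training for the patch whose top-left corner is (r, c)."""
    out = np.zeros_like(lr_y, dtype=np.float64)
    weight = np.zeros_like(lr_y, dtype=np.float64)
    for r in range(0, lr_y.shape[0] - patch + 1, step):
        for c in range(0, lr_y.shape[1] - patch + 1, step):
            x = extract_lr_feature(lr_y, r, c)                            # x_T,j
            k = int(np.argmin(np.linalg.norm(D_L - x[:, None], axis=0)))  # nearest atom d_k
            y = projections[k] @ x                                        # y_T,j = P_k x_T,j
            out[r:r + patch, c:c + patch] += y.reshape(patch, patch)
            weight[r:r + patch, c:c + patch] += 1.0
    return out / np.maximum(weight, 1.0)                                  # average overlaps
```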
In the embodiment of the invention, based on transfer learning, migration images are randomly selected from other fields and, together with the original images, used as high-resolution images; low-resolution images are obtained through down-sampling; the feature pairs of the high-resolution images and the low-resolution images are extracted; and projection matrices are obtained from the low-resolution neighborhoods and the corresponding high-resolution neighborhoods, so that a reconstructed high-resolution image is formed during reconstruction. Adding the migration images makes the training database richer and provides favorable conditions for subsequent image reconstruction.
The second embodiment is as follows:
Fig. 6 shows a structure diagram of a super-resolution image system based on transfer learning according to a second embodiment of the present invention, and for convenience of description, only the parts related to the second embodiment of the present invention are shown. The image super-resolution system based on the transfer learning comprises: a low resolution image acquisition unit 61, a feature pair extraction unit 62, a projection matrix acquisition unit 63, and a high resolution image reconstruction unit 64, wherein:
A low resolution image obtaining unit 61, configured to obtain a low resolution image by down-sampling the high resolution image, where the high resolution image includes: an original image and a migration image.
In the embodiment of the invention, image super-resolution requires the establishment of a training database, which acquires migration images from other fields on the basis of the original images T_0. The transfer-learning-based method uses knowledge learned in one environment to help the learning task in a new environment, and can migrate information from different fields into a specific field. Therefore, the invention randomly selects images T_t from fields different from the field to which the original images belong as the migration images, so that the training database consists of the original images T_0 and the migration images T_t. The images composed of the original images T_0 and the migration images T_t are used as high-resolution images, and the low-resolution images are obtained by down-sampling the high-resolution images.
Further, the low resolution image acquiring unit 61 includes: an image migration unit 611, an image conversion unit 612, and a low resolution image acquisition sub-unit 613, in which:
The image migration unit 611 is configured to acquire an original image and a migration image as the high-resolution image, the migration image being an image selected at random from a field different from the field to which the original image belongs.
In the embodiment of the invention, the transfer-learning-based method enriches the training database by taking images randomly selected from fields different from the field to which the original images belong as migration images, and uses the images consisting of the original images T_0 and the migration images T_t as the high-resolution images.
An image conversion unit 612, for converting the high resolution image into a YCbCr space image.
In the embodiment of the present invention, the YCbCr space image is one of color space images, and Y in the YCbCr space image refers to a luminance component, Cb refers to a blue chrominance component, and Cr refers to a red chrominance component. Since the human visual system is sensitive to intensity variations of high frequencies rather than color variations of high frequencies, the color channel variations are negligible for super-resolution, and only the luminance component may be calculated when converting to a YCbCr spatial image in order to reduce the amount of computation.
A low-resolution image obtaining subunit 613, configured to perform downsampling and interpolation operation on the high-resolution image in sequence according to a preset amplification factor, to obtain a low-resolution image.
In the embodiment of the invention, a corresponding low-resolution image is obtained from the high-resolution image by down-sampling with a preset amplification factor, and further, the low-resolution image is subjected to bicubic interpolation with the same preset amplification factor to obtain an interpolated high-resolution image. These interpolated high resolution images are also referred to as low resolution images because the high frequency information is lost.
A feature pair extraction unit 62, for extracting a feature pair of the high resolution image and the low resolution image from the high resolution image and the low resolution image.
In the embodiment of the invention, after the low-resolution image is obtained, the high-resolution image and the low-resolution image are respectively subjected to gridding decomposition, and the feature pairs of the high-resolution image and the low-resolution image are extracted.
Further, the feature pair extraction unit 62 includes: a filtering unit 621 and a feature pair extraction subunit 622, where:
The filtering unit 621 is configured to filter the low-resolution image by using a preset number of high-pass filters, so as to obtain a filtered low-resolution image.
In an embodiment of the present invention, a predetermined number of filtered low resolution images are formed using a predetermined number of high pass filters for all low resolution images in order to extract local features and corresponding high frequency content.
And a feature pair extraction subunit 622, configured to perform meshing on the filtered low-resolution image and the filtered high-resolution image, respectively, and extract feature pairs of the high-resolution image and the low-resolution image.
In the embodiment of the invention, small fragments of the low-resolution image are extracted from the filtered low-resolution image by a gridding method, and these small fragments are the features of the low-resolution image; similarly, small patches of the high-resolution image are extracted at the same positions by the gridding method, and thus the feature pairs of the high-resolution image and the low-resolution image are extracted.
And a projection matrix obtaining unit 63, configured to obtain a projection matrix by calculating a low resolution neighborhood and a corresponding high resolution neighborhood for the feature of each low resolution image.
In the embodiment of the invention, according to the feature pair of the high-resolution image and the low-resolution image, a low-resolution neighborhood and a corresponding high-resolution neighborhood are obtained, and a projection matrix is calculated.
Further, the projection matrix obtaining unit 63 includes: sparse dictionary constructing unit 631, neighborhood acquiring unit 632, and projection matrix acquiring subunit 633, where:
A sparse dictionary constructing unit 631, for constructing a low resolution sparse dictionary and a corresponding high resolution sparse dictionary.
In the embodiment of the invention, a dictionary training method is used to obtain a low-resolution sparse dictionary D_L, and a sparse representation vector δ is obtained through the low-resolution sparse dictionary D_L. The obtaining formula is as follows:
δ = arg min_(δ) ||x − D_L δ||_2^2 + λ||δ||_1
where δ is the sparse representation vector, x is the feature of the low-resolution image, and λ is a weighting factor.
Assuming that the coefficients of the gridding decomposition of the small patches of the high-resolution image and the low-resolution image are the same, a high-resolution sparse dictionary D_H corresponding to the low-resolution sparse dictionary D_L is constructed. The goal is to recover the feature y of the high-resolution image, whose approximate value is y ≈ D_H δ. The acquisition formula of the high-resolution sparse dictionary D_H is:
D_H = arg min_(D_H) ||y − D_H δ||^2
where δ is the sparse representation vector, y is the feature of the high-resolution image, and the solution of the above formula is:
D_H = y δ^T (δ δ^T)^(-1)
After constructing the low-resolution sparse dictionary D_L and the corresponding high-resolution sparse dictionary D_H, a domain for regression needs to be generated by using the samples in the training database.
The neighborhood obtaining unit 632 is configured to obtain the dictionary atom of the feature of each low-resolution image in the low-resolution sparse dictionary, so as to derive a low-resolution neighborhood and a corresponding high-resolution neighborhood, where the dictionary atom is the atom in the low-resolution sparse dictionary closest to the feature of the low-resolution image.
For each low-resolution image feature x_i, its nearest neighbor in the low-resolution sparse dictionary, namely dictionary atom d_k, is calculated. The low-resolution neighborhood N_L,k contains the m training samples closest to the dictionary atom corresponding to feature x_i, and the high-resolution neighborhood corresponding to N_L,k is N_H,k.
The projection matrix obtaining subunit 633 is configured to obtain the projection matrix corresponding to the dictionary atom through the low-resolution neighborhood and the corresponding high-resolution neighborhood.
In the embodiment of the present invention, a coefficient vector γ is set, and its calculation formula is as follows:
γ = (N_L,k^T N_L,k + λI)^(-1) N_L,k^T x_i
where γ is the coefficient vector, N_L,k is the low-resolution neighborhood, and x_i is the feature of the low-resolution image.
From this, a small patch y_H,i of the high-resolution image can be reconstructed, which is calculated as:
y_H,i = N_H,k (N_L,k^T N_L,k + λI)^(-1) N_L,k^T x_i
where k = 1, 2, …, K, K being the number of dictionary atoms.
Further, the projection matrix P_k is obtained as P_k = N_H,k (N_L,k^T N_L,k + λI)^(-1) N_L,k^T.
And a high-resolution image reconstruction unit 64, configured to form a reconstructed high-resolution image based on the features of the low-resolution image and the projection matrix during reconstruction.
In the embodiment of the invention, in the reconstruction stage, for a given test image, the features x_T,j of the low-resolution image are first extracted from the given test image; the extraction process is the same as that used in training. For each feature x_T,j, its nearest dictionary atom d_k in the low-resolution sparse dictionary D_L is then found, and the corresponding projection matrix P_k is used to reconstruct the high-resolution image fragment.
Further, the high resolution image reconstructing unit 64 includes: a first reconstruction unit 641, a second reconstruction unit 642, a third reconstruction unit 643 and a fourth reconstruction unit 644, wherein:
The first reconstruction unit 641 is configured to, during reconstruction, extract the features of the low-resolution image and acquire the corresponding dictionary atoms in the low-resolution sparse dictionary;
The second reconstruction unit 642 is configured to acquire, according to the dictionary atom, the projection matrix corresponding to the dictionary atom;
A third reconstructing unit 643, configured to obtain a high-resolution image fragment according to the feature of the low-resolution image and the projection matrix; and
And a fourth reconstructing unit 644, configured to combine the high resolution image fragments to form a reconstructed high resolution image.
The embodiment of the invention is based on transfer learning: on the basis of the original images, migration images are randomly selected from other fields and, together with them, used as high-resolution images; low-resolution images are obtained through down-sampling; the feature pairs of the high-resolution images and the low-resolution images are extracted; and projection matrices are obtained from the low-resolution neighborhoods and the corresponding high-resolution neighborhoods, so that a reconstructed high-resolution image is formed during reconstruction. Adding the migration images makes the training database richer and provides favorable conditions for subsequent image reconstruction.
In the embodiment of the present invention, each unit may be implemented by corresponding hardware or software; each unit may be an independent software or hardware unit, or may be integrated into one software or hardware unit, which is not limited herein.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (8)

1. An image super-resolution method based on transfer learning is characterized by comprising the following steps:
Obtaining a low-resolution image by down-sampling a high-resolution image, the high-resolution image comprising: an original image and a migration image;
Extracting feature pairs of the high-resolution image and the low-resolution image according to the high-resolution image and the low-resolution image;
For the characteristics of each low-resolution image, calculating a low-resolution neighborhood and a corresponding high-resolution neighborhood to obtain a projection matrix;
Forming a reconstructed high-resolution image according to the characteristics of the low-resolution image and the projection matrix during reconstruction,
wherein the step of obtaining a projection matrix by calculating a low-resolution neighborhood and a corresponding high-resolution neighborhood for the characteristics of each low-resolution image includes:
Constructing a low-resolution sparse dictionary and a corresponding high-resolution sparse dictionary;
Obtaining dictionary atoms of the features of each low-resolution image in the low-resolution sparse dictionary to obtain a low-resolution neighborhood and a corresponding high-resolution neighborhood, wherein the dictionary atoms are atoms closest to the features of the low-resolution image in the low-resolution sparse dictionary;
And obtaining a projection matrix corresponding to the dictionary atoms through the low-resolution neighborhood and the corresponding high-resolution neighborhood.
2. The method of claim 1, wherein the step of obtaining the low resolution image by down-sampling the high resolution image comprises:
Acquiring an original image and a migration image as the high-resolution image, wherein the migration image is an image randomly selected from a field different from the field to which the original image belongs;
Converting the high-resolution image into a YCbCr space image;
And sequentially carrying out down-sampling and interpolation operation on the high-resolution image according to a preset amplification factor to obtain the low-resolution image.
3. The method of claim 1, wherein said step of extracting feature pairs of a high resolution image and a low resolution image from said high resolution image and said low resolution image comprises:
Filtering the low-resolution images by using a preset number of high-pass filters to obtain filtered low-resolution images;
And respectively gridding the filtered low-resolution image and the filtered high-resolution image, and extracting the feature pairs of the high-resolution image and the low-resolution image.
4. The method of claim 1, wherein the step of forming a reconstructed high resolution image based on the features of the low resolution image and the projection matrix when reconstructing comprises:
During reconstruction, extracting the characteristics of a low-resolution image, and acquiring corresponding dictionary atoms in the low-resolution sparse dictionary;
Acquiring a projection matrix corresponding to the dictionary atom according to the dictionary atom;
Obtaining high-resolution image fragments according to the characteristics of the low-resolution image and the projection matrix;
And combining the high-resolution image fragments to form a reconstructed high-resolution image.
5. An image super-resolution system based on transfer learning, characterized in that the system comprises:
A low resolution image obtaining unit configured to obtain a low resolution image by down-sampling a high resolution image, the high resolution image including: an original image and a migration image;
A feature pair extraction unit configured to extract a feature pair of a high resolution image and a low resolution image from the high resolution image and the low resolution image;
The projection matrix obtaining unit is used for obtaining a projection matrix by calculating a low-resolution neighborhood and a corresponding high-resolution neighborhood for the characteristics of each low-resolution image; and
A high resolution image reconstruction unit for forming a reconstructed high resolution image based on the characteristics of the low resolution image and the projection matrix at the time of reconstruction,
wherein the projection matrix obtaining unit includes:
The sparse dictionary constructing unit is used for constructing a low-resolution sparse dictionary and a corresponding high-resolution sparse dictionary;
The neighborhood acquiring unit is used for acquiring dictionary atoms of the features of each low-resolution image in the low-resolution sparse dictionary to obtain a low-resolution neighborhood and a corresponding high-resolution neighborhood, wherein the dictionary atoms are atoms which are closest to the features of the low-resolution images in the low-resolution sparse dictionary; and
And the projection matrix obtaining subunit is used for obtaining the projection matrix corresponding to the dictionary atoms through the low-resolution neighborhood and the corresponding high-resolution neighborhood.
6. The system of claim 5, wherein the low resolution image acquisition unit comprises:
An image migration unit configured to acquire an original image and a migration image as the high-resolution image, the migration image being an image selected at random from a field different from the field to which the original image belongs;
An image conversion unit for converting the high resolution image into a YCbCr space image; and
And the low-resolution image acquisition subunit is used for sequentially performing down-sampling and interpolation operation on the high-resolution image according to a preset amplification factor to obtain the low-resolution image.
7. The system of claim 5, wherein the feature pair extraction unit comprises:
The filtering unit is used for filtering the low-resolution images by using a preset number of high-pass filters to obtain filtered low-resolution images; and
And the feature pair extraction subunit is used for respectively meshing the filtered low-resolution image and the filtered high-resolution image and extracting feature pairs of the high-resolution image and the low-resolution image.
8. The system of claim 5, wherein the high resolution image reconstruction unit comprises:
The first reconstruction unit is used for extracting the characteristics of the low-resolution image during reconstruction and acquiring corresponding dictionary atoms in the low-resolution sparse dictionary;
The second reconstruction unit is used for acquiring a projection matrix corresponding to the dictionary atom according to the dictionary atom;
The third reconstruction unit is used for obtaining high-resolution image fragments according to the characteristics of the low-resolution image and the projection matrix; and
And the fourth reconstruction unit is used for combining the high-resolution image fragments to form a reconstructed high-resolution image.
CN201610791756.1A 2016-08-31 2016-08-31 image super-resolution method and system based on transfer learning Expired - Fee Related CN106327428B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610791756.1A CN106327428B (en) 2016-08-31 2016-08-31 image super-resolution method and system based on transfer learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610791756.1A CN106327428B (en) 2016-08-31 2016-08-31 image super-resolution method and system based on transfer learning

Publications (2)

Publication Number Publication Date
CN106327428A CN106327428A (en) 2017-01-11
CN106327428B true CN106327428B (en) 2019-12-10

Family

ID=57789742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610791756.1A Expired - Fee Related CN106327428B (en) 2016-08-31 2016-08-31 image super-resolution method and system based on transfer learning

Country Status (1)

Country Link
CN (1) CN106327428B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194879A (en) * 2017-06-26 2017-09-22 司马大大(北京)智能系统有限公司 Super resolution ratio reconstruction method, device and electronic equipment
CN110996171B (en) * 2019-12-12 2021-11-26 北京金山云网络技术有限公司 Training data generation method and device for video tasks and server
US11157763B2 (en) 2020-02-07 2021-10-26 Wipro Limited System and method for identifying target sections within images

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104778671A (en) * 2015-04-21 2015-07-15 重庆大学 Image super-resolution method based on SAE and sparse representation
CN104899830A (en) * 2015-05-29 2015-09-09 清华大学深圳研究生院 Image super-resolution method
CN104992407A (en) * 2015-06-17 2015-10-21 清华大学深圳研究生院 Image super-resolution method
CN105335929A (en) * 2015-09-15 2016-02-17 清华大学深圳研究生院 Depth map super-resolution method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
On Single Image Scale-Up Using Sparse-Representations; Roman Zeyde et al.; International Conference on Curves and Surfaces; 2012-12-31; Section 3.2 *
Super-resolution reconstruction of SAR targets based on transfer learning (基于迁移学习的SAR目标超分辨重建); Xu Zhou et al.; Acta Aeronautica et Astronautica Sinica (航空学报); 2015-06-25; Vol. 36, No. 6; paragraph 1, Sections 2.1-2.2 *

Also Published As

Publication number Publication date
CN106327428A (en) 2017-01-11

Similar Documents

Publication Publication Date Title
Zhang et al. Color demosaicking by local directional interpolation and nonlocal adaptive thresholding
CN107067380B (en) High-resolution image reconstruction method based on low-rank tensor and hierarchical dictionary learning
WO2016045242A1 (en) Image magnification method, image magnification apparatus and display device
CN112419151B (en) Image degradation processing method and device, storage medium and electronic equipment
CN108921786A (en) Image super-resolution reconstructing method based on residual error convolutional neural networks
JP6324155B2 (en) Image processing apparatus, image processing method, and program
CN108109109B (en) Super-resolution image reconstruction method, device, medium and computing equipment
CN106327428B (en) image super-resolution method and system based on transfer learning
Wei et al. Improving resolution of medical images with deep dense convolutional neural network
CN105550989A (en) Image super-resolution method based on nonlocal Gaussian process regression
Tian et al. Anchored neighborhood regression based single image super-resolution from self-examples
CN107220934B (en) Image reconstruction method and device
CN114494022B (en) Model training method, super-resolution reconstruction method, device, equipment and medium
CN108122218B (en) Image fusion method and device based on color space
JP2021043874A (en) Image processing apparatus, image processing method, and program
Zamani et al. Multiple-frames super-resolution for closed circuit television forensics
WO2022061879A1 (en) Image processing method, apparatus and system, and computer-readable storage medium
CN110689486A (en) Image processing method, device, equipment and computer storage medium
Rafinazari et al. Demosaicking algorithm for the Kodak-RGBW color filter array
CN107767342B (en) Wavelet transform super-resolution image reconstruction method based on integral adjustment model
Suzuki et al. New learning-based super resolution utilizing total variation regularization method
CN108492264B (en) Single-frame image fast super-resolution method based on sigmoid transformation
Mokari et al. An adaptive single image method for super resolution
Shi et al. Region-adaptive demosaicking with weighted values of multidirectional information
Lin et al. Fast deconvolution-based image super-resolution using gradient prior

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20191210

Termination date: 20210831