CN111080527A - Image super-resolution method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111080527A
CN111080527A (application CN201911329473.5A)
Authority
CN
China
Prior art keywords
image
resolution
network
super
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911329473.5A
Other languages
Chinese (zh)
Other versions
CN111080527B (en)
Inventor
鲁方波
汪贤
樊鸿飞
蔡媛
Current Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Original Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kingsoft Cloud Network Technology Co Ltd filed Critical Beijing Kingsoft Cloud Network Technology Co Ltd
Priority to CN201911329473.5A priority Critical patent/CN111080527B/en
Publication of CN111080527A publication Critical patent/CN111080527A/en
Priority to PCT/CN2020/135037 priority patent/WO2021121108A1/en
Priority to US17/772,306 priority patent/US20220383452A1/en
Application granted granted Critical
Publication of CN111080527B publication Critical patent/CN111080527B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 — Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/20081 — Training; Learning
    • G06T2207/20084 — Artificial neural networks [ANN]
    • G06T2207/20221 — Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the invention provides an image super-resolution method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: inputting an image to be processed into a first super-resolution network model and a second super-resolution network model which are trained in advance, wherein the first super-resolution network model is a trained convolutional neural network and the second super-resolution network model is the generation network contained in a trained generative adversarial network; acquiring a first image output by the first super-resolution network model and a second image output by the second super-resolution network model; and fusing the first image and the second image to obtain a target image, wherein the resolution of the target image is greater than that of the image to be processed. By applying the embodiment of the invention, the target image combines the advantages of the first image output by the first super-resolution network model and the second image output by the second super-resolution network model, so the obtained target image has higher definition.

Description

Image super-resolution method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for super-resolution of an image, an electronic device, and a storage medium.
Background
At present, owing to environmental influences, cost control and the like, image acquisition equipment often captures low-resolution images whose definition is not high, resulting in a poor visual experience for users.
To improve image definition, an image super-resolution method is used to process a to-be-processed image with a lower resolution, so as to obtain a target image whose resolution is greater than that of the to-be-processed image.
In the related art, image super-resolution methods mainly perform interpolation on the image to be processed to obtain a target image with a higher resolution, for example by nearest-neighbor interpolation, linear interpolation, or cubic spline interpolation.
However, with the above image super-resolution methods, the definition of the obtained target image still leaves room for improvement.
Disclosure of Invention
The invention aims to provide an image super-resolution method and apparatus, an electronic device, and a storage medium, so as to obtain a target image with higher definition. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for super-resolution of an image, where the method includes:
acquiring an image to be processed;
respectively inputting the image to be processed into a first super-resolution network model and a second super-resolution network model which are trained in advance; the first super-resolution network model is a convolutional neural network trained with a plurality of original sample images and corresponding target sample images; the second super-resolution network model is the generation network contained in a generative adversarial network trained with a plurality of original sample images and corresponding target sample images; the network structures of the first super-resolution network model and the second super-resolution network model are the same; and the resolution of the target sample image is greater than the resolution of the original sample image;
acquiring a first image output by the first super-resolution network model and a second image output by the second super-resolution network model; the resolution of the first image and the resolution of the second image are both greater than the resolution of the image to be processed;
and fusing the first image and the second image to obtain a target image, wherein the resolution of the target image is greater than that of the image to be processed.
Optionally, the training process of the first super-resolution network model includes:
acquiring a training sample set; the training sample set comprises a plurality of training samples; wherein each training sample comprises: an original sample image and a corresponding target sample image; the resolution of the target sample image is greater than the resolution of the original sample image;
inputting a first preset number of first original sample images in the training sample set into a current convolutional neural network, and acquiring each first reconstruction target image corresponding to each first original sample image;
calculating a loss value based on each first reconstructed target image, each first target sample image corresponding to each first original sample image and a preset first loss function;
judging whether the current convolutional neural network is converged or not according to a loss value of a preset first loss function; if so, taking the current convolutional neural network as a trained first super-resolution network model; and if not, adjusting the network parameters of the current convolutional neural network, and returning to the step of inputting the first original sample images of the first preset number in the training sample set into the current convolutional neural network to obtain each first reconstruction target image corresponding to each first original sample image.
Optionally, the training process of the second super-resolution network model includes:
taking the network parameters of the first super-resolution network model as the initial parameters of the generation network in a generative adversarial network to obtain a current generation network; and setting the initial parameters of the discrimination network in the generative adversarial network to obtain a current discrimination network;
inputting a second preset number of second original sample images in the training sample set into a current generation network, and acquiring each second reconstruction target image corresponding to each second original sample image;
inputting each second reconstruction target image into the current discrimination network to obtain, for each second reconstruction target image, a first current prediction probability value that it is a second target sample image; and inputting each second target sample image corresponding to each second original sample image into the current discrimination network to obtain, for each second target sample image, a second current prediction probability value that it is a second target sample image;
calculating a loss value according to each first current prediction probability value, each second current prediction probability value, the real result of whether each image is a second target sample image, and a preset second loss function;
adjusting the network parameters of the current discrimination network according to the loss value of a preset second loss function to obtain a current intermediate discrimination network;
inputting a third preset number of third original sample images in the training sample set into a current generation network, and acquiring each third reconstruction target image corresponding to each third original sample image;
inputting each third reconstruction target image into the current intermediate discrimination network to obtain, for each third reconstruction target image, a third current prediction probability value that it is a third target sample image;
calculating a loss value according to each third current prediction probability value, the real result of whether each image is a third target sample image, each third target sample image corresponding to each third original sample image, each third reconstructed target image, and a preset third loss function;
and adjusting the network parameters of the current generation network according to the loss value of the third loss function, incrementing the iteration count by 1, and returning to the step of inputting a second preset number of second original sample images in the training sample set into the current generation network and acquiring each second reconstruction target image corresponding to each second original sample image, until the preset number of iterations is reached; the trained current generation network is then taken as the second super-resolution network model.
Optionally, the step of obtaining the target image after fusing the first image and the second image includes:
fusing the pixel value of the first image and the pixel value of the second image according to the weight to obtain a target image; the weight is preset or determined based on the resolution of the first image and the resolution of the second image.
Optionally, the step of fusing the pixel value of the first image and the pixel value of the second image according to the weight to obtain the target image includes:
and fusing the pixel value of the first image and the pixel value of the second image according to the weight according to the following formula to obtain a fused image as a target image:
img3=alpha1*img1+(1-alpha1)*img2
wherein alpha1 is the weight of each pixel value corresponding to each pixel point of the first image, img1 is each pixel value corresponding to each pixel point of the first image, img2 is each pixel value corresponding to each pixel point of the second image, and img3 is each pixel value corresponding to each pixel point of the target image; the value of alpha1 lies in the range [0, 1].
In a second aspect, an embodiment of the present invention provides an apparatus for super-resolution of an image, the apparatus including:
a to-be-processed image acquisition unit, configured to acquire an image to be processed;
an input unit, configured to respectively input the image to be processed into a first super-resolution network model and a second super-resolution network model which are trained in advance; the first super-resolution network model is a convolutional neural network trained with a plurality of original sample images and corresponding target sample images; the second super-resolution network model is the generation network contained in a generative adversarial network trained with a plurality of original sample images and corresponding target sample images; the network structures of the first super-resolution network model and the second super-resolution network model are the same; and the resolution of the target sample image is greater than the resolution of the original sample image;
an acquisition unit, configured to acquire a first image output by the first super-resolution network model and a second image output by the second super-resolution network model; the resolution of the first image and the resolution of the second image are both greater than the resolution of the image to be processed;
and the target image obtaining unit is used for obtaining a target image after the first image and the second image are fused, wherein the resolution of the target image is greater than that of the image to be processed.
Optionally, the apparatus further comprises: a first super-resolution network model training unit;
the first super-resolution network model training unit is specifically configured to:
acquiring a training sample set; the training sample set comprises a plurality of training samples; wherein each training sample comprises: an original sample image and a corresponding target sample image; the resolution of the target sample image is greater than the resolution of the original sample image;
inputting a first preset number of first original sample images in the training sample set into a current convolutional neural network, and acquiring each first reconstruction target image corresponding to each first original sample image;
calculating a loss value based on each first reconstructed target image, each first target sample image corresponding to each first original sample image and a preset first loss function;
judging whether the current convolutional neural network is converged or not according to a loss value of a preset first loss function; if so, taking the current convolutional neural network as a trained first super-resolution network model; and if not, adjusting the network parameters of the current convolutional neural network, and returning to the step of inputting the first original sample images of the first preset number in the training sample set into the current convolutional neural network to obtain each first reconstruction target image corresponding to each first original sample image.
Optionally, the apparatus further comprises: a second super-resolution network model training unit;
the second super-resolution network model training unit is specifically configured to:
taking the network parameters of the first super-resolution network model as the initial parameters of the generation network in a generative adversarial network to obtain a current generation network; and setting the initial parameters of the discrimination network in the generative adversarial network to obtain a current discrimination network;
inputting a second preset number of second original sample images in the training sample set into a current generation network, and acquiring each second reconstruction target image corresponding to each second original sample image;
inputting each second reconstruction target image into the current discrimination network to obtain, for each second reconstruction target image, a first current prediction probability value that it is a second target sample image; and inputting each second target sample image corresponding to each second original sample image into the current discrimination network to obtain, for each second target sample image, a second current prediction probability value that it is a second target sample image;
calculating a loss value according to each first current prediction probability value, each second current prediction probability value, the real result of whether each image is a second target sample image, and a preset second loss function;
adjusting the network parameters of the current discrimination network according to the loss value of a preset second loss function to obtain a current intermediate discrimination network;
inputting a third preset number of third original sample images in the training sample set into a current generation network, and acquiring each third reconstruction target image corresponding to each third original sample image;
inputting each third reconstruction target image into the current intermediate discrimination network to obtain, for each third reconstruction target image, a third current prediction probability value that it is a third target sample image;
calculating a loss value according to each third current prediction probability value, the real result of whether each image is a third target sample image, each third target sample image corresponding to each third original sample image, each third reconstructed target image, and a preset third loss function;
and adjusting the network parameters of the current generation network according to the loss value of the third loss function, incrementing the iteration count by 1, and returning to the step of inputting a second preset number of second original sample images in the training sample set into the current generation network and acquiring each second reconstruction target image corresponding to each second original sample image, until the preset number of iterations is reached; the trained current generation network is then taken as the second super-resolution network model.
Optionally, the target image obtaining unit is specifically configured to: fusing the pixel value of the first image and the pixel value of the second image according to the weight to obtain a target image; the weight is preset or determined based on the resolution of the first image and the resolution of the second image.
Optionally, the target image obtaining unit is specifically configured to:
and fusing the pixel value of the first image and the pixel value of the second image according to the weight according to the following formula to obtain a fused image as a target image:
img3=alpha1*img1+(1-alpha1)*img2
wherein alpha1 is the weight of each pixel value corresponding to each pixel point of the first image, img1 is each pixel value corresponding to each pixel point of the first image, img2 is each pixel value corresponding to each pixel point of the second image, and img3 is each pixel value corresponding to each pixel point of the target image; the value of alpha1 lies in the range [0, 1].
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
and a processor, configured to implement the steps of any one of the above image super-resolution methods when executing the program stored in the memory.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium in which a computer program is stored, and the computer program, when executed by a processor, performs the steps of any one of the image super-resolution methods described above.
In a fifth aspect, the present invention further provides a computer program product containing instructions, which when run on a computer, causes the computer to execute any of the image super-resolution methods described above.
The image super-resolution method and apparatus, electronic device, and storage medium provided by the embodiments of the invention can acquire an image to be processed; respectively input the image to be processed into a first super-resolution network model and a second super-resolution network model which are trained in advance, wherein the first super-resolution network model is a convolutional neural network trained with a plurality of original sample images and corresponding target sample images, the second super-resolution network model is the generation network contained in a generative adversarial network trained with a plurality of original sample images and corresponding target sample images, the network structures of the two models are the same, and the resolution of the target sample image is greater than that of the original sample image; acquire a first image output by the first super-resolution network model and a second image output by the second super-resolution network model, the resolutions of both being greater than that of the image to be processed; and fuse the first image and the second image to obtain a target image whose resolution is greater than that of the image to be processed.
By applying the embodiment of the invention, a target image can be obtained by fusing the first image output by the first super-resolution network model and the second image output by the second super-resolution network model; the target image combines the advantages of both images, so the obtained target image has higher definition.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for super-resolution of an image according to an embodiment of the present invention;
fig. 2 is a flowchart of a training method of a first super-resolution reconstruction model according to an embodiment of the present invention;
fig. 3 is a flowchart of a training method of a second super-resolution reconstruction model according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an apparatus for super-resolution of images according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The image super-resolution method provided by the embodiment of the invention can be applied to any electronic device that needs to process a low-resolution image to obtain a target image with a higher resolution, such as a computer or a mobile terminal, which is not limited herein.
Referring to fig. 1, for the method for image super-resolution provided by the embodiment of the present invention, as shown in fig. 1, a specific processing flow of the method may include:
step S101, acquiring an image to be processed.
Step S102, respectively inputting the image to be processed into a first super-resolution network model and a second super-resolution network model which are trained in advance; the first super-resolution network model is a convolutional neural network trained with a plurality of original sample images and corresponding target sample images; the second super-resolution network model is the generation network contained in a generative adversarial network trained with a plurality of original sample images and corresponding target sample images; the network structures of the two models are the same; and the resolution of the target sample image is greater than the resolution of the original sample image.
Step S103, acquiring a first image output by the first super-resolution network model and a second image output by the second super-resolution network model; the resolution of the first image and the resolution of the second image are both greater than the resolution of the image to be processed.
And step S104, fusing the first image and the second image to obtain a target image, wherein the resolution of the target image is greater than that of the image to be processed.
By applying the embodiment of the invention, a target image can be obtained by fusing the first image output by the first super-resolution network model and the second image output by the second super-resolution network model; the target image combines the advantages of both images, so the obtained target image has higher definition.
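Steps S101 to S104 can be sketched end to end as follows. This is a minimal illustration only: the two stand-in "models" (a nearest-neighbour 2x upscaler named `upscale2x`) and the fusion weight are placeholders for the trained networks described in the patent, not an implementation of them.

```python
import numpy as np

def super_resolve(img, model_cnn, model_gan, alpha=0.5):
    """Steps S101-S104 in one call: run both branches on the same input, then fuse."""
    first = model_cnn(img)    # first image (CNN branch)
    second = model_gan(img)   # second image (GAN-generator branch)
    assert first.shape == second.shape
    return alpha * first + (1.0 - alpha) * second

# Stand-in "model": nearest-neighbour 2x upscaling in place of a real network.
def upscale2x(img):
    return np.kron(img, np.ones((2, 2)))

lr = np.random.rand(64, 64)                                   # image to be processed
target = super_resolve(lr, upscale2x, upscale2x, alpha=0.7)   # fused target image
```

Both branch outputs have the same shape (the two models share a network structure), so the fusion is a simple element-wise weighted sum.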
The step S104 may be implemented by: fusing the pixel value of the first image and the pixel value of the second image according to the weight to obtain a target image; the weight is preset or determined based on the resolution of the first image and the resolution of the second image.
That is, at least the following two embodiments are included.
In the first embodiment, the pixel value of the first image and the pixel value of the second image may be weighted and fused according to a preset weight to obtain a target image;
the method specifically comprises the following steps: and fusing the pixel value of the first image and the pixel value of the second image according to the weight according to the following formula to obtain a fused image as a target image:
img3=alpha1*img1+(1-alpha1)*img2
wherein alpha1 is the weight of each pixel value corresponding to each pixel point of the first image, img1 is each pixel value corresponding to each pixel point of the first image, img2 is each pixel value corresponding to each pixel point of the second image, and img3 is each pixel value corresponding to each pixel point of the target image; the value of alpha1 lies in the range [0, 1].
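The fusion formula img3 = alpha1*img1 + (1-alpha1)*img2 can be written directly as an element-wise blend; a minimal sketch (function name and example pixel values are illustrative):

```python
import numpy as np

def fuse(img1, img2, alpha1=0.5):
    """img3 = alpha1*img1 + (1 - alpha1)*img2, applied element-wise per pixel."""
    if not 0.0 <= alpha1 <= 1.0:
        raise ValueError("alpha1 must lie in [0, 1]")
    return alpha1 * img1 + (1.0 - alpha1) * img2

img1 = np.full((2, 2), 100.0)          # pixel values of the first image
img2 = np.full((2, 2), 200.0)          # pixel values of the second image
img3 = fuse(img1, img2, alpha1=0.25)   # 0.25*100 + 0.75*200 = 175 everywhere
```

With alpha1 = 1 the target image equals the first image; with alpha1 = 0 it equals the second.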
In the second embodiment, a weight may be determined based on the resolution of the first image and the resolution of the second image, and the pixel value of the first image and the pixel value of the second image may be fused according to the weight to obtain the target image.
Specifically, a larger weight may be assigned to the image with the higher resolution.
For example, a target difference between the resolution of the first image and the resolution of the second image may be calculated, and the weights may be adjusted dynamically according to the target difference and a preset rule, on the principle that the image with the higher resolution receives the larger weight.
In a more specific example: when the target difference is greater than a first preset threshold, a first weight is used for the pixel values of the first image and a second weight for the pixel values of the second image during fusion; when the target difference is not greater than the first preset threshold, a third weight is used for the pixel values of the first image and a fourth weight for the pixel values of the second image.
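The threshold rule above can be sketched as a small selector. The concrete weight pairs and the threshold value are hypothetical — the patent does not fix any numbers:

```python
def choose_weights(res1, res2, first_threshold,
                   pair_above=(0.7, 0.3), pair_below=(0.5, 0.5)):
    """Return (weight for first image, weight for second image).

    Illustrative rule following the text: compute the target difference of the
    two resolutions; above the first preset threshold use one weight pair,
    otherwise use the other. The numeric values are placeholders only.
    """
    target_difference = abs(res1 - res2)
    return pair_above if target_difference > first_threshold else pair_below

# Large resolution gap -> the (hypothetical) first/second weight pair is chosen.
w1, w2 = choose_weights(1920 * 1080, 1280 * 720, first_threshold=500_000)
```

The selected pair is then plugged into the fusion formula as alpha1 and (1 - alpha1).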
In practice, for the training process of the first super-resolution reconstruction model in the above embodiment, reference may be made to fig. 2; for the training process of the second super-resolution reconstruction model, reference may be made to fig. 3.
Referring to fig. 2, a flowchart of a training method for a first super-resolution reconstruction model according to an embodiment of the present invention is shown in fig. 2, and a specific processing flow of the method may include:
step S201, acquiring a training sample set; the training sample set comprises a plurality of training samples; wherein each training sample comprises: an original sample image and a corresponding target sample image; the resolution of the target sample image is greater than the resolution of the original sample image.
That is, the original sample image is a low-resolution sample image, and the target sample image is a high-resolution sample image.
In practice, the original sample image may be obtained by down-sampling the target sample image, and the target sample image together with the original sample image may then be used as a training sample. Alternatively, the original sample image and the corresponding target sample image may be obtained by photographing the same object from the same position with a low-definition camera and a high-definition camera, respectively, which is not specifically limited herein.
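The down-sampling route for building a training pair can be sketched as follows; the stride-based down-sampling is one simple choice among many (the patent does not prescribe a method), and the function name is illustrative:

```python
import numpy as np

def make_training_pair(target_sample, scale=2):
    """Synthesize the original (low-resolution) sample by down-sampling the target."""
    original_sample = target_sample[::scale, ::scale]  # simple stride-based down-sampling
    return original_sample, target_sample

hr = np.random.rand(128, 128)                 # target sample image (high resolution)
lr, hr_kept = make_training_pair(hr, scale=4) # original sample image plus its target
```

Each (original, target) pair then forms one training sample of the training sample set.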
Step S202, inputting a first preset number of first original sample images in the training sample set into a current convolutional neural network, and obtaining each first reconstructed target image corresponding to each first original sample image.
In this step, the first original sample image may be referred to as a first low-resolution sample image. The resolution of the obtained first reconstructed target image is greater than the resolution of the first original sample image. Therefore, the first reconstructed target image may be referred to as a first reconstructed high resolution image.
In an implementation manner, a first preset number of first original sample images in the training sample set are input into the current Convolutional Neural Network (CNN), so as to obtain the first reconstructed target images. The first preset number may be 8, 16, 32, etc., and is not specifically limited herein.
Step S203, calculating a loss value based on each first reconstructed target image, each first target sample image corresponding to each first original sample image, and a preset first loss function.
In this step, the first target sample image may also be referred to as a first high resolution sample image.
In practice, the first loss function may be specifically:
L1 = (1/(h1w1c1)) ∑k∑i∑j | I1HR′(k, i, j) − I1HR(k, i, j) |;
wherein L1 is the loss value of the first loss function, and the sums run over k = 1, …, c1, i = 1, …, h1 and j = 1, …, w1;
I1HR′(k, i, j) is the pixel value of the pixel point with row number i and column number j in the kth channel of the first reconstructed target image I1HR′ (i.e., the first reconstructed high-resolution image). For example, suppose the first reconstructed high-resolution image I1HR′ is expressed in the RGB color space model with a pixel size of 128 × 128: it then has 3 channels (k takes the value 1 for the first channel) and comprises 128 rows and 128 columns, and the pixel value of the pixel point in the first row and first column of the first channel is expressed as I1HR′(1, 1, 1);
I1HR(k, i, j) is the pixel value of the pixel point with row number i and column number j in the kth channel of the first target sample image I1HR (i.e., the first high-resolution sample image);
h1, w1 and c1 are respectively the height, width and number of channels of the first reconstructed high-resolution image; h1w1c1 is the product of the height, width and number of channels of the first reconstructed high-resolution image.
In other embodiments, other loss functions may also be used, for example, the mean square error loss function in the related art, or the like. The specific formula of the first loss function is not limited herein.
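The first loss function above (a mean absolute error over all channels, rows and columns) can be sketched as:

```python
import numpy as np

def first_loss(reconstructed, target):
    """L1 = (1/(h1*w1*c1)) * sum over k, i, j of
    |I1HR'(k, i, j) - I1HR(k, i, j)|, i.e. the mean absolute
    pixel difference between the reconstructed and target images."""
    reconstructed = np.asarray(reconstructed, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    return np.sum(np.abs(reconstructed - target)) / reconstructed.size
```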
And step S204, judging whether the current convolutional neural network is converged or not according to a loss value of a preset first loss function.
If the result of the judgment is negative, that is, the current convolutional neural network does not converge, executing step S205; if the result of the determination is yes, that is, the current convolutional neural network converges, step S206 is performed.
Step S205, adjusting the network parameters of the current convolutional neural network. The process returns to step S202.
And step S206, taking the current convolutional neural network as a trained first super-resolution network model.
The convolutional neural network is trained to obtain a first super-resolution network model, a first image output by the first super-resolution network model is stable, and artifacts generally do not occur.
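The iterative procedure of steps S201 to S206 can be sketched as the following skeleton; the network, loss function and parameter-update routine are caller-supplied stand-ins, and the loss-change convergence test is only one possible choice, not fixed by the embodiment:

```python
import numpy as np

def train_first_model(network, samples, loss_fn, update_fn,
                      batch_size=16, tol=1e-4, max_steps=1000):
    """Skeleton of steps S201-S206: run a batch through the current
    network, compute the first loss, test convergence, and either stop
    or adjust parameters and repeat. `network` maps an original sample
    image to a reconstructed target image; `update_fn` returns an
    adjusted network given the current loss."""
    prev_loss = None
    for _ in range(max_steps):
        batch = samples[:batch_size]                 # step S202: a preset number of samples
        preds = [network(x) for x, _ in batch]
        loss = float(np.mean([loss_fn(p, y) for p, (_, y) in zip(preds, batch)]))  # S203
        if prev_loss is not None and abs(prev_loss - loss) < tol:  # S204: converged?
            return network                           # S206: use current network as the model
        network = update_fn(network, loss)           # S205: adjust network parameters
        prev_loss = loss
    return network
```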
Further, after obtaining the trained first super-resolution network model, a Generative Adversarial Network (GAN) may be trained, and the generation network in the trained generative adversarial network may be used as the second super-resolution network model. Specifically, the training process of the second super-resolution network model can be seen in fig. 3.
As shown in fig. 3, a flowchart of a training method for a second super-resolution network model according to an embodiment of the present invention may include:
Step S301, taking the network parameters of the first super-resolution network model as the initial parameters of the generation network in a generative adversarial network to obtain a current generation network; and setting initial parameters of the discrimination network in the generative adversarial network to obtain the current discrimination network.
In practice, the discrimination network in the Generative Adversarial Network (GAN) may be a convolutional neural network or another network; the network structure of the discrimination network is not specifically limited here. The network structures of the preset convolutional neural network, the generation network and the discrimination network are not specifically limited either, and can be set according to actual needs.
Step S302, inputting a second preset number of second original sample images in the training sample set into the current generation network, and obtaining each second reconstruction target image corresponding to each second original sample image.
In this step, the second original sample image may be referred to as a second low resolution sample image. The resolution of the second reconstruction target image is greater than the resolution of the second original sample image, and thus, the second reconstruction target image may be referred to as a second reconstruction high-resolution image.
The second preset number may be 8, 16, 32, etc., and is not limited herein.
Step S303, inputting each second reconstruction target image into the current discrimination network to obtain each first current prediction probability value that each second reconstruction target image is a second target sample image; and inputting each second target sample image corresponding to each second original sample image into the current discrimination network to obtain each second current prediction probability value that each second target sample image is a second target sample image.
In this step, the second target sample image may be referred to as a second high resolution sample image.
Step S304, calculating a loss value according to each first current prediction probability value, each second current prediction probability value, the real result of whether each image is a second target sample image, and the preset second loss function.
The preset second loss function may specifically be:
Dloss = ∑[log D(I2HR)] + ∑[1 − log D(G(I2LR))];
wherein D is the discrimination network;
Dloss is the loss value of the discrimination network, i.e., the loss value of the second loss function;
I2HR is the second target sample image, i.e., the second high-resolution sample image;
D(I2HR) is the second current prediction probability value obtained after the second high-resolution sample image is input into the current discrimination network;
I2LR is the second original sample image, i.e., the second low-resolution sample image;
G(I2LR) is the second reconstructed high-resolution image obtained after the second low-resolution sample image is input into the current generation network;
D(G(I2LR)) is the first current prediction probability value obtained after the second reconstructed high-resolution image is input into the current discrimination network.
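A minimal sketch of the second loss function, computed exactly as the formula above is written (the probability arrays stand in for the discrimination network's outputs on real and reconstructed images):

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Dloss = sum[log D(I2HR)] + sum[1 - log D(G(I2LR))], following the
    formula as stated; d_real holds the second current prediction
    probability values for real high-resolution samples, d_fake the first
    current prediction probability values for reconstructed images."""
    d_real = np.asarray(d_real, dtype=np.float64)
    d_fake = np.asarray(d_fake, dtype=np.float64)
    return np.sum(np.log(d_real)) + np.sum(1.0 - np.log(d_fake))
```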
Step S305, adjusting the network parameters of the current discrimination network according to the loss value of the preset second loss function, and obtaining the current intermediate discrimination network.
Step S306, inputting a third preset number of third original sample images in the training sample set into the current generation network, and obtaining each third reconstruction target image corresponding to each third original sample image.
In this step, the third original sample image may be referred to as a third low-resolution sample image. The resolution of the third reconstruction target image is greater than that of the third original sample image, and therefore, the third reconstruction target image may be referred to as a third reconstruction high-resolution image.
The third preset number may be 8, 16, 32, etc., and is not limited in detail. The first preset number, the second preset number and the third preset number may be the same or different, and are not limited herein.
Step S307, inputting each third reconstruction target image into the current intermediate discrimination network, and obtaining each third current prediction probability value that each third reconstruction target image is a third target sample image.
In this step, the third target sample image may be referred to as a third high-resolution sample image.
Step S308, calculating a loss value according to each third current prediction probability value, the real result of whether each third reconstruction target image is a third target sample image, each third target sample image corresponding to each third original sample image, each third reconstructed target image, and the preset third loss function.
The preset third loss function may specifically be:
Loss = α·L1′ + β·Lvgg + γ·Ladv;
wherein L1′, Lvgg and Ladv are the three loss terms of the third loss function, and α, β and γ are respectively the weight coefficients of L1′, Lvgg and Ladv;
L1′ = (1/(h2w2c2)) ∑k∑i∑j | I3HR′(k, i, j) − I3HR(k, i, j) |;
wherein L1′ is the loss value of the L1′ loss function in the third loss function, and the sums run over k = 1, …, c2, i = 1, …, h2 and j = 1, …, w2;
I3HR′(k, i, j) is the pixel value of the pixel point with row number i and column number j in the kth channel of the third reconstructed target image I3HR′ (i.e., the third reconstructed high-resolution image);
I3HR(k, i, j) is the pixel value of the pixel point with row number i and column number j in the kth channel of the third target sample image I3HR (i.e., the third high-resolution sample image);
h2, w2 and c2 are respectively the height, width and number of channels of the third reconstructed high-resolution image; h2w2c2 is the product of the height, width and number of channels of the third reconstructed high-resolution image.
Lvgg = (1/(Wi,jHi,j)) ∑x∑y (φi,j(I3HR)(x, y) − φi,j(G(I3LR))(x, y))²;
wherein Lvgg is the loss value of the VGG feature loss function in the third loss function, and the sums run over x = 1, …, Wi,j and y = 1, …, Hi,j;
i is the number of the layer, in a VGG network model trained in advance in the related art, at which the filter concerned is located; j indicates that the filter is the jth filter of that layer in the VGG network model;
Wi,j is the width of the jth filter of the ith layer in the VGG network model; Hi,j is the height of the jth filter of the ith layer in the VGG network model;
φi,j(I3HR)(x, y) is the feature value, at the position with row number x and column number y, output by the jth filter of the ith layer of the pre-trained VGG network model for the third high-resolution sample image I3HR;
φi,j(G(I3LR))(x, y) is the feature value, at the position with row number x and column number y, output by the jth filter of the ith layer of the pre-trained VGG network model for the third reconstructed high-resolution image G(I3LR); I3LR is the third original sample image, i.e., the third low-resolution sample image.
Ladv = −∑ log D(G(I3LR));
wherein Ladv is the loss value of the adversarial loss function in the third loss function;
I3LR is the third original sample image, i.e., the third low-resolution sample image;
D(G(I3LR)) is the third current prediction probability value output after the current intermediate discrimination network discriminates the third reconstructed high-resolution image G(I3LR).
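A minimal sketch of a third loss of this weighted-sum form; the feature extractor `feat_fn` stands in for the pre-trained VGG layer, the adversarial term uses the standard −∑ log D(G(·)) form, and the weights alpha, beta, gamma are hypothetical example values:

```python
import numpy as np

def third_loss(rec, target, d_fake, feat_fn, alpha=1.0, beta=1.0, gamma=1e-3):
    """Loss = alpha*L1' + beta*Lvgg + gamma*Ladv.
    rec:    third reconstructed high-resolution image
    target: third high-resolution sample image
    d_fake: discriminator probabilities for the reconstructed images
    feat_fn: stand-in for the pre-trained VGG feature extractor."""
    rec = np.asarray(rec, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    l1 = np.sum(np.abs(rec - target)) / rec.size            # pixel loss L1'
    f_rec, f_target = feat_fn(rec), feat_fn(target)
    lvgg = np.sum((f_target - f_rec) ** 2) / f_rec.size     # VGG feature loss
    ladv = -np.sum(np.log(np.asarray(d_fake, dtype=np.float64)))  # adversarial term
    return alpha * l1 + beta * lvgg + gamma * ladv
```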
Step S309, adjusting the network parameters of the current generation network according to the loss value of the third loss function, and increasing the iteration count by 1.
Step S310, determining whether a preset number of iterations is reached.
The preset number of iterations may be 100, 200, 1000, etc., and is not limited in detail herein.
If the judgment result is negative, that is, the preset iteration number is not reached, returning to execute the step S302; if the result of the determination is yes, that is, the preset number of iterations is reached, step S311 is executed.
And step S311, taking the trained current generation network as a second super-resolution network model.
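The alternating update of steps S301 to S311 can be sketched as the following skeleton; `disc_step` and `gen_step` are caller-supplied stand-ins for the discrimination-network update (steps S302-S305) and the generation-network update (steps S306-S309):

```python
def train_second_model(gen, disc, samples, gen_step, disc_step, iterations=200):
    """Skeleton of steps S301-S311: alternately adjust the discrimination
    network and the generation network for a preset number of iterations,
    then return the generation network as the second model."""
    for _ in range(iterations):                    # step S310: fixed iteration budget
        disc = disc_step(gen, disc, samples)       # steps S302-S305: adjust discriminator
        gen = gen_step(gen, disc, samples)         # steps S306-S309: adjust generator
    return gen                                     # step S311: trained generation network
```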
The generative adversarial network is trained to obtain the second super-resolution network model; the second image output by the second super-resolution network model can contain more high-frequency information and has more image details.
The advantage of the trained first super-resolution network model is that the generated image is stable; its disadvantage is that the image lacks part of the high-frequency information. The advantage of the trained second super-resolution network model is that the generated image contains more high-frequency information; its disadvantage is that the image may contain artifacts and is not stable enough.
By fusing the first image output by the first super-resolution network model and the second image output by the second super-resolution network model, the fused target image can contain more high-frequency information and has more image details, while the stability of the first image balances out the image artifact problem. Therefore, the sharpness of the target image is high.
The structure schematic diagram of the image super-resolution device provided by the embodiment of the invention is as shown in fig. 4, and the device comprises:
a to-be-processed image acquisition unit 401 configured to acquire a to-be-processed image;
an input unit 402, configured to input the to-be-processed image into a first super-resolution network model and a second super-resolution network model trained in advance, respectively; the first super-resolution network model is a convolutional neural network trained with a plurality of original sample images and corresponding target sample images; the second super-resolution network model is the generation network contained in a generative adversarial network trained with a plurality of original sample images and corresponding target sample images; the network structures of the first super-resolution network model and the second super-resolution network model are the same; the resolution of the target sample image is greater than the resolution of the original sample image;
an obtaining unit 403, configured to obtain a first image output by the first super-resolution network model and a second image output by the second super-resolution network model; the resolution of the first image and the resolution of the second image are both greater than the resolution of the image to be processed;
a target image obtaining unit 404, configured to obtain a target image after fusing the first image and the second image, where a resolution of the target image is greater than a resolution of the image to be processed.
Optionally, the apparatus further comprises: a first super-resolution network model training unit;
the first super-resolution network model training unit is specifically configured to:
acquiring a training sample set; the training sample set comprises a plurality of training samples; wherein each training sample comprises: an original sample image and a corresponding target sample image; the resolution of the target sample image is greater than the resolution of the original sample image;
inputting a first preset number of first original sample images in the training sample set into a current convolutional neural network, and acquiring each first reconstruction target image corresponding to each first original sample image;
calculating a loss value based on each first reconstructed target image, each first target sample image corresponding to each first original sample image and a preset first loss function;
judging whether the current convolutional neural network is converged or not according to a loss value of a preset first loss function; if so, taking the current convolutional neural network as a trained first super-resolution network model; and if not, adjusting the network parameters of the current convolutional neural network, and returning to the step of inputting the first original sample images of the first preset number in the training sample set into the current convolutional neural network to obtain each first reconstruction target image corresponding to each first original sample image.
Optionally, the apparatus further comprises: a second super-resolution network model training unit;
the second super-resolution network model training unit is specifically configured to:
taking the network parameters of the first super-resolution network model as the initial parameters of the generation network in a generative adversarial network to obtain a current generation network; setting initial parameters of the discrimination network in the generative adversarial network to obtain a current discrimination network;
inputting a second preset number of second original sample images in the training sample set into a current generation network, and acquiring each second reconstruction target image corresponding to each second original sample image;
inputting each second reconstruction target image into a current discrimination network to obtain each first current prediction probability value of each second reconstruction target image as a second target sample image; inputting each second target sample image corresponding to each second original sample image into a current discrimination network to obtain each second current prediction probability value of each second target sample image as a second target sample image;
calculating a loss value according to each first current prediction probability value, each second current prediction probability value, whether the current prediction probability value is a real result of a second target sample image and a preset second loss function;
adjusting the network parameters of the current discrimination network according to the loss value of a preset second loss function to obtain a current intermediate discrimination network;
inputting a third preset number of third original sample images in the training sample set into a current generation network, and acquiring each third reconstruction target image corresponding to each third original sample image;
inputting each third reconstruction target image into the current intermediate discrimination network to obtain each third current prediction probability value of each third reconstruction target image as a third target sample image;
calculating a loss value according to each third current prediction probability value, whether the value is a real result of a third target sample image, each third target sample image corresponding to the third original sample image, each third reconstructed target image and a preset third loss function;
and adjusting the network parameters of the current generation network according to the loss value of the third loss function, increasing the iteration count by 1, and returning to the step of inputting a second preset number of second original sample images in the training sample set into the current generation network and acquiring each second reconstruction target image corresponding to each second original sample image, until the preset number of iterations is reached; and taking the trained current generation network as the second super-resolution network model.
Optionally, the target image obtaining unit is specifically configured to: fusing the pixel value of the first image and the pixel value of the second image according to the weight to obtain a target image; the weight is preset or determined based on the resolution of the first image and the resolution of the second image.
Optionally, the target image obtaining unit is specifically configured to:
and fusing the pixel value of the first image and the pixel value of the second image according to the weight according to the following formula to obtain a fused image as a target image:
img3=alpha1*img1+(1-alpha1)*img2
wherein, alpha1 is the weight of each pixel value corresponding to each pixel point of the first image, img1 is each pixel value corresponding to each pixel point of the first image, img2 is each pixel value corresponding to each pixel point of the second image, and img3 is each pixel value corresponding to each pixel point of the target image; alpha1 has a value in the range of [0, 1 ].
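The fusion formula above can be sketched directly; alpha1 = 0.5 below is only an example value within the stated [0, 1] range:

```python
import numpy as np

def fuse_images(img1, img2, alpha1=0.5):
    """img3 = alpha1*img1 + (1-alpha1)*img2, fusing the pixel values of
    the first and second images by weight; alpha1 must lie in [0, 1]."""
    assert 0.0 <= alpha1 <= 1.0
    img1 = np.asarray(img1, dtype=np.float64)
    img2 = np.asarray(img2, dtype=np.float64)
    return alpha1 * img1 + (1.0 - alpha1) * img2
```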
By applying the embodiment of the invention, the target image can be obtained after the first image output by the first super-resolution network model and the second image output by the second super-resolution network model are fused, the target image takes the advantages of the first image output by the first super-resolution network model and the second image output by the second super-resolution network model into account, and the obtained target image has higher definition.
An embodiment of the present invention further provides an electronic device, as shown in fig. 5, which includes a processor 501, a communication interface 502, a memory 503 and a communication bus 504, where the processor 501, the communication interface 502 and the memory 503 complete mutual communication through the communication bus 504,
a memory 503 for storing a computer program;
the processor 501, when executing the program stored in the memory 503, implements the following steps:
acquiring an image to be processed;
respectively inputting the images to be processed into a first super-resolution network model and a second super-resolution network model which are trained in advance; the first super-resolution network model is a convolutional neural network trained with a plurality of original sample images and corresponding target sample images; the second super-resolution network model is the generation network contained in a generative adversarial network trained with a plurality of original sample images and corresponding target sample images; the network structures of the first super-resolution network model and the second super-resolution network model are the same; the resolution of the target sample image is greater than the resolution of the original sample image;
acquiring a first image output by the first super-resolution network model and a second image output by the second super-resolution network model; the resolution of the first image and the resolution of the second image are both greater than the resolution of the image to be processed;
and fusing the first image and the second image to obtain a target image, wherein the resolution of the target image is greater than that of the image to be processed.
By applying the embodiment of the invention, the target image can be obtained after the first image output by the first super-resolution network model and the second image output by the second super-resolution network model are fused, the target image takes the advantages of the first image output by the first super-resolution network model and the second image output by the second super-resolution network model into account, and the obtained target image has higher definition.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any of the above-mentioned image super-resolution methods.
In yet another embodiment provided by the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the image super-resolution methods of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for embodiments such as the apparatus, the electronic device, the computer-readable storage medium, and the computer program product, since they are substantially similar to the method embodiments, the description is simple, and for relevant points, reference may be made to part of the description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method for super-resolution of an image, the method comprising:
acquiring an image to be processed;
respectively inputting the images to be processed into a first super-resolution network model and a second super-resolution network model which are trained in advance; the first super-resolution network model is a convolutional neural network trained with a plurality of original sample images and corresponding target sample images; the second super-resolution network model is the generation network contained in a generative adversarial network trained with a plurality of original sample images and corresponding target sample images; the network structures of the first super-resolution network model and the second super-resolution network model are the same; the resolution of the target sample image is greater than the resolution of the original sample image;
acquiring a first image output by the first super-resolution network model and a second image output by the second super-resolution network model; the resolution of the first image and the resolution of the second image are both greater than the resolution of the image to be processed;
and fusing the first image and the second image to obtain a target image, wherein the resolution of the target image is greater than that of the image to be processed.
2. The method of claim 1, wherein the training process of the first super-resolution network model comprises:
acquiring a training sample set; the training sample set comprises a plurality of training samples; wherein each training sample comprises: an original sample image and a corresponding target sample image; the resolution of the target sample image is greater than the resolution of the original sample image;
inputting a first preset number of first original sample images in the training sample set into a current convolutional neural network, and acquiring each first reconstruction target image corresponding to each first original sample image;
calculating a loss value based on each first reconstructed target image, each first target sample image corresponding to each first original sample image and a preset first loss function;
judging whether the current convolutional neural network is converged or not according to a loss value of a preset first loss function; if so, taking the current convolutional neural network as a trained first super-resolution network model; and if not, adjusting the network parameters of the current convolutional neural network, and returning to the step of inputting the first original sample images of the first preset number in the training sample set into the current convolutional neural network to obtain each first reconstruction target image corresponding to each first original sample image.
3. The method of claim 2, wherein the training process of the second super-resolution network model comprises:
taking the network parameters of the first super-resolution network model as the initial parameters of the generation network in a generative adversarial network to obtain a current generation network; setting initial parameters of the discrimination network in the generative adversarial network to obtain a current discrimination network;
inputting a second preset number of second original sample images in the training sample set into a current generation network, and acquiring each second reconstruction target image corresponding to each second original sample image;
inputting each second reconstruction target image into a current discrimination network to obtain each first current prediction probability value of each second reconstruction target image as a second target sample image; inputting each second target sample image corresponding to each second original sample image into a current discrimination network to obtain each second current prediction probability value of each second target sample image as a second target sample image;
calculating a loss value according to each first current prediction probability value, each second current prediction probability value, whether the current prediction probability value is a real result of a second target sample image and a preset second loss function;
adjusting the network parameters of the current discrimination network according to the loss value of a preset second loss function to obtain a current intermediate discrimination network;
inputting a third preset number of third original sample images in the training sample set into a current generation network, and acquiring each third reconstruction target image corresponding to each third original sample image;
inputting each third reconstruction target image into the current intermediate discrimination network to obtain each third current prediction probability value of each third reconstruction target image as a third target sample image;
calculating a loss value according to each third current prediction probability value, whether the value is a real result of a third target sample image, each third target sample image corresponding to the third original sample image, each third reconstructed target image and a preset third loss function;
and adjusting the network parameters of the current generation network according to the loss value of a third loss function, adding 1 iteration time, returning to the step of inputting a second preset number of second original sample images in the training sample set into the current generation network, acquiring each second reconstruction target image corresponding to each second original sample image until the preset iteration time is reached, and taking the trained current generation network as a second super-resolution network model.
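One iteration of the alternating scheme in claim 3 — a discriminator update first, then a generator update against the intermediate discriminator — might look like the sketch below. The second and third loss functions are not specified by the claim; binary cross-entropy plus an L1 pixel term are assumptions here, as are the toy generator and discriminator networks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gan_step(gen, disc, lr_batch, hr_batch, opt_d, opt_g):
    """One training iteration: update the discriminator, then the generator."""
    bce = nn.BCELoss()
    # Discriminator: score reconstructed (fake) and target (real) images.
    fake = gen(lr_batch).detach()      # second reconstructed target images
    p_fake = disc(fake)                # first current prediction probabilities
    p_real = disc(hr_batch)            # second current prediction probabilities
    d_loss = bce(p_fake, torch.zeros_like(p_fake)) \
           + bce(p_real, torch.ones_like(p_real))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()                       # -> current intermediate discriminator
    # Generator: fool the updated discriminator and stay close to the targets.
    recon = gen(lr_batch)              # third reconstructed target images
    p_recon = disc(recon)              # third current prediction probabilities
    g_loss = bce(p_recon, torch.ones_like(p_recon)) + F.l1_loss(recon, hr_batch)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy networks standing in for the generator and discriminator (assumptions).
gen = nn.Sequential(nn.Conv2d(3, 12, 3, padding=1), nn.PixelShuffle(2))
disc = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                     nn.Linear(8, 1), nn.Sigmoid())
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
d_loss, g_loss = gan_step(gen, disc, torch.rand(2, 3, 8, 8),
                          torch.rand(2, 3, 16, 16), opt_d, opt_g)
```

Calling `gan_step` in a loop until the preset iteration count is reached, then keeping `gen` alone, corresponds to extracting the trained generator as the second super-resolution network model.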
4. The method according to claim 1, wherein the step of obtaining the target image by fusing the first image and the second image comprises:
fusing the pixel values of the first image and the pixel values of the second image according to a weight to obtain the target image, wherein the weight is preset or is determined based on the resolution of the first image and the resolution of the second image.
5. The method according to claim 4, wherein the step of fusing the pixel values of the first image and the pixel values of the second image according to the weight to obtain the target image comprises:
fusing the pixel values of the first image and the pixel values of the second image according to the weight using the following formula, and taking the fused image as the target image:
img3 = alpha1 * img1 + (1 - alpha1) * img2
wherein alpha1 is the weight of each pixel value corresponding to each pixel point of the first image, img1 is each pixel value corresponding to each pixel point of the first image, img2 is each pixel value corresponding to each pixel point of the second image, and img3 is each pixel value corresponding to each pixel point of the target image; and alpha1 has a value in the range [0, 1].
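The blending formula of claim 5 is a standard per-pixel convex combination. A short NumPy sketch follows; whether alpha1 is a single scalar or a per-pixel weight map is an implementation choice the claim leaves open.

```python
import numpy as np

def fuse(img1, img2, alpha1):
    """img3 = alpha1 * img1 + (1 - alpha1) * img2, with alpha1 in [0, 1].
    alpha1 may be a scalar or an array broadcastable to the image shape."""
    alpha1 = np.clip(np.asarray(alpha1, dtype=np.float64), 0.0, 1.0)
    return alpha1 * img1 + (1.0 - alpha1) * img2

img1 = np.full((2, 2), 100.0)   # pixel values of the first (CNN) output
img2 = np.full((2, 2), 200.0)   # pixel values of the second (GAN) output
img3 = fuse(img1, img2, 0.25)   # every pixel: 0.25*100 + 0.75*200 = 175.0
```

Because the two terms sum to 1, the fused pixel always lies between the two source pixels, so no clipping of the output range is needed.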
6. An apparatus for image super-resolution, the apparatus comprising:
a to-be-processed image acquisition unit, configured to acquire an image to be processed;
an input unit, configured to respectively input the image to be processed into a pre-trained first super-resolution network model and a pre-trained second super-resolution network model; wherein the first super-resolution network model is a convolutional neural network trained with a plurality of original sample images and corresponding target sample images; the second super-resolution network model is the generator network of a generative adversarial network trained with a plurality of original sample images and corresponding target sample images; the first super-resolution network model and the second super-resolution network model have the same network structure; and the resolution of each target sample image is greater than the resolution of the corresponding original sample image;
an acquisition unit, configured to acquire a first image output by the first super-resolution network model and a second image output by the second super-resolution network model, wherein the resolution of the first image and the resolution of the second image are both greater than the resolution of the image to be processed;
and a target image obtaining unit, configured to obtain a target image by fusing the first image and the second image, wherein the resolution of the target image is greater than the resolution of the image to be processed.
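Taken together, the units of claim 6 amount to a two-branch inference pipeline. A hedged sketch follows; the shared 2x architecture and the fusion weight are assumptions, and untrained random weights stand in for the two pre-trained models.

```python
import torch
import torch.nn as nn

def make_sr_net():
    """Shared 2x architecture for both models (assumed; claim 6 only requires
    that the two network structures be identical)."""
    return nn.Sequential(nn.Conv2d(3, 12, 3, padding=1), nn.PixelShuffle(2))

def super_resolve(img, model1, model2, alpha1=0.5):
    """Input the image into both models, collect both outputs, and fuse them."""
    with torch.no_grad():
        first = model1(img)     # first image, from the CNN-trained model
        second = model2(img)    # second image, from the GAN-trained generator
    return alpha1 * first + (1 - alpha1) * second   # fused target image

model1, model2 = make_sr_net(), make_sr_net()
out = super_resolve(torch.rand(1, 3, 16, 16), model1, model2)
```

Each unit of the claim maps to one step: acquisition supplies `img`, the input unit drives the two forward passes, the acquisition unit collects `first` and `second`, and the target image obtaining unit performs the weighted fusion.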
7. The apparatus of claim 6, further comprising: a first super-resolution network model training unit;
the first super-resolution network model training unit is specifically configured to:
acquiring a training sample set, wherein the training sample set comprises a plurality of training samples, and each training sample comprises an original sample image and a corresponding target sample image, the resolution of the target sample image being greater than the resolution of the original sample image;
inputting a first preset number of first original sample images in the training sample set into a current convolutional neural network, and obtaining each first reconstructed target image corresponding to each first original sample image;
calculating a loss value based on each first reconstructed target image, each first target sample image corresponding to each first original sample image, and a preset first loss function;
determining whether the current convolutional neural network has converged according to the loss value of the preset first loss function; if so, taking the current convolutional neural network as the trained first super-resolution network model; and if not, adjusting the network parameters of the current convolutional neural network and returning to the step of inputting a first preset number of first original sample images in the training sample set into the current convolutional neural network and obtaining each first reconstructed target image corresponding to each first original sample image.
8. The apparatus of claim 7, further comprising: a second super-resolution network model training unit;
the second super-resolution network model training unit is specifically configured to:
taking the network parameters of the first super-resolution network model as the initial parameters of the generator network in a generative adversarial network to obtain a current generator network; and setting the initial parameters of the discriminator network in the generative adversarial network to obtain a current discriminator network;
inputting a second preset number of second original sample images in the training sample set into the current generator network, and obtaining each second reconstructed target image corresponding to each second original sample image;
inputting each second reconstructed target image into the current discriminator network to obtain a first current prediction probability value that the second reconstructed target image is a second target sample image; and inputting each second target sample image corresponding to each second original sample image into the current discriminator network to obtain a second current prediction probability value that the second target sample image is a second target sample image;
calculating a loss value according to each first current prediction probability value, each second current prediction probability value, the ground-truth result of whether each input is a second target sample image, and a preset second loss function;
adjusting the network parameters of the current discriminator network according to the loss value of the preset second loss function to obtain a current intermediate discriminator network;
inputting a third preset number of third original sample images in the training sample set into the current generator network, and obtaining each third reconstructed target image corresponding to each third original sample image;
inputting each third reconstructed target image into the current intermediate discriminator network to obtain a third current prediction probability value that the third reconstructed target image is a third target sample image;
calculating a loss value according to each third current prediction probability value, the ground-truth result of whether each input is a third target sample image, each third target sample image corresponding to each third original sample image, each third reconstructed target image, and a preset third loss function;
and adjusting the network parameters of the current generator network according to the loss value of the third loss function, incrementing the iteration count by 1, and returning to the step of inputting a second preset number of second original sample images in the training sample set into the current generator network and obtaining each second reconstructed target image corresponding to each second original sample image, until a preset number of iterations is reached; and taking the trained current generator network as the second super-resolution network model.
9. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
and the processor is configured to implement the method steps of any one of claims 1 to 5 when executing the program stored in the memory.
10. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the method steps of any one of claims 1 to 5.
CN201911329473.5A 2019-12-20 2019-12-20 Image super-resolution method and device, electronic equipment and storage medium Active CN111080527B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201911329473.5A CN111080527B (en) 2019-12-20 2019-12-20 Image super-resolution method and device, electronic equipment and storage medium
PCT/CN2020/135037 WO2021121108A1 (en) 2019-12-20 2020-12-09 Image super-resolution and model training method and apparatus, electronic device, and medium
US17/772,306 US20220383452A1 (en) 2019-12-20 2020-12-09 Method, apparatus, electronic device and medium for image super-resolution and model training

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911329473.5A CN111080527B (en) 2019-12-20 2019-12-20 Image super-resolution method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111080527A (en) 2020-04-28
CN111080527B (en) 2023-12-05

Family

ID=70316480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911329473.5A Active CN111080527B (en) 2019-12-20 2019-12-20 Image super-resolution method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111080527B (en)


Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107464217A (en) * 2017-08-16 2017-12-12 清华-伯克利深圳学院筹备办公室 A kind of image processing method and device
CN107516290A (en) * 2017-07-14 2017-12-26 北京奇虎科技有限公司 Image switching network acquisition methods, device, computing device and storage medium
US20180137603A1 (en) * 2016-11-07 2018-05-17 Umbo Cv Inc. Method and system for providing high resolution image through super-resolution reconstruction
CN108496201A (en) * 2017-09-27 2018-09-04 深圳市大疆创新科技有限公司 Image processing method and equipment
CN109325931A (en) * 2018-08-22 2019-02-12 中北大学 Based on the multi-modality images fusion method for generating confrontation network and super-resolution network
US20190057488A1 (en) * 2017-08-17 2019-02-21 Boe Technology Group Co., Ltd. Image processing method and device
CN109753938A (en) * 2019-01-10 2019-05-14 京东方科技集团股份有限公司 Image-recognizing method and equipment and the training method of application, neural network
WO2019104705A1 (en) * 2017-12-01 2019-06-06 华为技术有限公司 Image processing method and device
CN110084119A (en) * 2019-03-26 2019-08-02 安徽艾睿思智能科技有限公司 Low-resolution face image recognition methods based on deep learning
US20190259136A1 (en) * 2019-04-29 2019-08-22 Intel Corporation Method and apparatus for person super resolution from low resolution image
CN110310229A (en) * 2019-06-28 2019-10-08 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, terminal device and readable storage medium storing program for executing
CN110335196A (en) * 2019-07-11 2019-10-15 山东工商学院 A kind of super-resolution image reconstruction method and system based on fractal decoding
CN110363716A (en) * 2019-06-25 2019-10-22 北京工业大学 One kind is generated based on condition and fights network combined degraded image high quality method for reconstructing
CN110415176A (en) * 2019-08-09 2019-11-05 北京大学深圳研究生院 A kind of text image super-resolution method
CN110490802A (en) * 2019-08-06 2019-11-22 北京观微科技有限公司 A kind of satellite image Aircraft Targets type identifier method based on super-resolution


Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021121108A1 (en) * 2019-12-20 2021-06-24 北京金山云网络技术有限公司 Image super-resolution and model training method and apparatus, electronic device, and medium
CN111681165A (en) * 2020-06-02 2020-09-18 上海闻泰信息技术有限公司 Image processing method, image processing device, computer equipment and computer readable storage medium
CN111681167A (en) * 2020-06-03 2020-09-18 腾讯科技(深圳)有限公司 Image quality adjusting method and device, storage medium and electronic equipment
CN111488865A (en) * 2020-06-28 2020-08-04 腾讯科技(深圳)有限公司 Image optimization method and device, computer storage medium and electronic equipment
CN111861962A (en) * 2020-07-28 2020-10-30 湖北亿咖通科技有限公司 Data fusion method and electronic equipment
CN111951168A (en) * 2020-08-25 2020-11-17 Oppo(重庆)智能科技有限公司 Image processing method, image processing apparatus, storage medium, and electronic device
CN112184547A (en) * 2020-09-03 2021-01-05 红相股份有限公司 Super-resolution method of infrared image and computer readable storage medium
CN112184547B (en) * 2020-09-03 2023-05-05 红相股份有限公司 Super resolution method of infrared image and computer readable storage medium
CN112365398B (en) * 2020-09-11 2024-04-05 成都旷视金智科技有限公司 Super-resolution network training method, digital zooming method, device and electronic equipment
CN112365398A (en) * 2020-09-11 2021-02-12 成都旷视金智科技有限公司 Super-resolution network training method, digital zooming method, device and electronic equipment
CN112530004A (en) * 2020-12-11 2021-03-19 北京奇艺世纪科技有限公司 Three-dimensional point cloud reconstruction method and device and electronic equipment
CN112530004B (en) * 2020-12-11 2023-06-06 北京奇艺世纪科技有限公司 Three-dimensional point cloud reconstruction method and device and electronic equipment
CN112862681A (en) * 2021-01-29 2021-05-28 中国科学院深圳先进技术研究院 Super-resolution method, device, terminal equipment and storage medium
CN112862681B (en) * 2021-01-29 2023-04-14 中国科学院深圳先进技术研究院 Super-resolution method, device, terminal equipment and storage medium
CN113538238B (en) * 2021-07-09 2024-06-28 深圳市深光粟科技有限公司 High-resolution photoacoustic image imaging method and device and electronic equipment
CN113538238A (en) * 2021-07-09 2021-10-22 深圳市深光粟科技有限公司 High-resolution photoacoustic image imaging method and device and electronic equipment
CN113379606A (en) * 2021-08-16 2021-09-10 之江实验室 Face super-resolution method based on pre-training generation model
CN113808021A (en) * 2021-09-17 2021-12-17 北京金山云网络技术有限公司 Image processing method and device, image processing model training method and device, and electronic equipment
CN113781309A (en) * 2021-09-17 2021-12-10 北京金山云网络技术有限公司 Image processing method and device and electronic equipment
CN113781309B (en) * 2021-09-17 2024-06-28 北京金山云网络技术有限公司 Image processing method and device and electronic equipment
CN113888410A (en) * 2021-09-30 2022-01-04 北京百度网讯科技有限公司 Image super-resolution method, apparatus, device, storage medium, and program product
CN115272267A (en) * 2022-08-08 2022-11-01 中国科学院苏州生物医学工程技术研究所 Fundus fluorography image generation method, device, medium and product based on deep learning
WO2024032494A1 (en) * 2022-08-12 2024-02-15 腾讯科技(深圳)有限公司 Image processing method and apparatus, computer, readable storage medium, and program product

Also Published As

Publication number Publication date
CN111080527B (en) 2023-12-05

Similar Documents

Publication Publication Date Title
CN111080527B (en) Image super-resolution method and device, electronic equipment and storage medium
CN111080528B (en) Image super-resolution and model training method and device, electronic equipment and medium
US20220383452A1 (en) Method, apparatus, electronic device and medium for image super-resolution and model training
CN109101975B (en) Image semantic segmentation method based on full convolution neural network
CN108022212B (en) High-resolution picture generation method, generation device and storage medium
US20200013205A1 (en) Colorizing Vector Graphic Objects
CN111476863B (en) Method and device for coloring black-and-white cartoon, electronic equipment and storage medium
CN110660033B (en) Subtitle removing method and device and electronic equipment
US11836898B2 (en) Method and apparatus for generating image, and electronic device
CN110909663B (en) Human body key point identification method and device and electronic equipment
CN111476719A (en) Image processing method, image processing device, computer equipment and storage medium
US10186022B2 (en) System and method for adaptive pixel filtering
CN111145202B (en) Model generation method, image processing method, device, equipment and storage medium
CN113112561B (en) Image reconstruction method and device and electronic equipment
CN111340140A (en) Image data set acquisition method and device, electronic equipment and storage medium
CN113014928B (en) Compensation frame generation method and device
US20230060988A1 (en) Image processing device and method
CN115063803B (en) Image processing method, device, storage medium and electronic equipment
CN112561818B (en) Image enhancement method and device, electronic equipment and storage medium
CN111932466B (en) Image defogging method, electronic equipment and storage medium
CN116977190A (en) Image processing method, apparatus, device, storage medium, and program product
CN110136061B (en) Resolution improving method and system based on depth convolution prediction and interpolation
CN112581401A (en) Method and device for acquiring RAW picture and electronic equipment
CN112954454A (en) Video frame generation method and device
CN112084371A (en) Film multi-label classification method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant