CN112419151B - Image degradation processing method and device, storage medium and electronic equipment - Google Patents


Publication number
CN112419151B
Authority
CN
China
Prior art keywords
image, low, resolution, quality, sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011308390.0A
Other languages
Chinese (zh)
Other versions
CN112419151A (en)
Inventor
王伟
袁泽寰
王长虎
Current Assignee
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd
Priority to CN202011308390.0A
Publication of CN112419151A
Priority to PCT/CN2021/129431 (WO2022105638A1)
Application granted
Publication of CN112419151B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4076Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution using the original low-resolution images to iteratively correct the high-resolution images

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to an image degradation processing method, apparatus, storage medium, and electronic device, and addresses two problems that arise in the related art when low-quality images are acquired: the data distribution of the training data is inconsistent with that of the actual data, and color shift occurs. The method comprises: acquiring a high-resolution high-quality image; downsampling the high-resolution high-quality image to obtain a low-resolution high-quality image; and inputting the low-resolution high-quality image into an image degradation model to obtain a low-resolution low-quality image. The image degradation model is trained on the luminance image features of a sample low-resolution high-quality image and the luminance image features of a sample low-resolution low-quality image, where the sample low-resolution high-quality image is obtained by downsampling a sample high-resolution high-quality image. The low-resolution low-quality image is used to train a super-resolution reconstruction network, and the super-resolution reconstruction network is used to restore an input low-resolution low-quality video so as to obtain a high-resolution high-quality video.

Description

Image degradation processing method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to an image degradation processing method, an image degradation processing apparatus, a storage medium, and an electronic device.
Background
With the continuous development of science and technology, people can now enjoy watching high-resolution, high-quality video. However, video is highly susceptible to noise, blurring, compression, and the like during the acquisition and transmission stages, resulting in poor video quality; this is particularly problematic for some older film sources.
The related art may perform video restoration through a supervised restoration algorithm. Training such an algorithm requires pairs of low-quality and high-quality images, from which the mapping between the two is learned and then applied to restore each frame of a video. In this process, however, the low-quality images are mainly produced by artificially applying degradation operations such as Gaussian noise or Gaussian blur to high-quality images. If the noise, blur, and other characteristics of the low-quality training images are inconsistent with those of the real images to be restored, the real images may not be restored effectively, which affects the video restoration result.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, the present disclosure provides an image degradation processing method, the method including:
acquiring a high-resolution high-quality image;
downsampling the high-resolution high-quality image to obtain a low-resolution high-quality image;
inputting the low-resolution high-quality image into an image degradation model to obtain a low-resolution low-quality image, wherein the image degradation model is trained on luminance image features of a sample low-resolution high-quality image and luminance image features of the sample low-resolution low-quality image, and the sample low-resolution high-quality image is obtained by downsampling the sample high-resolution high-quality image;
the low-resolution low-quality image is used for training a super-resolution reconstruction network, and the super-resolution reconstruction network is used for restoring an input low-resolution low-quality video so as to obtain a high-resolution high-quality video.
In a second aspect, the present disclosure provides an image degradation processing apparatus, the apparatus comprising:
the acquisition module is used for acquiring the high-resolution high-quality image;
the first processing module is used for downsampling the high-resolution high-quality image to obtain a low-resolution high-quality image;
the second processing module is used for inputting the low-resolution high-quality image into an image degradation model to obtain a low-resolution low-quality image, wherein the image degradation model is trained on luminance image features of a sample low-resolution high-quality image and luminance image features of the sample low-resolution low-quality image, and the sample low-resolution high-quality image is obtained by downsampling the sample high-resolution high-quality image;
the low-resolution low-quality image is used for training a super-resolution reconstruction network, and the super-resolution reconstruction network is used for restoring an input low-resolution low-quality video so as to obtain a high-resolution high-quality video.
In a third aspect, the present disclosure provides a computer-readable medium having stored thereon a computer program which, when executed by a processing device, implements the steps of the method described in the first aspect.
In a fourth aspect, the present disclosure provides an electronic device comprising:
a storage device having a computer program stored thereon;
a processing device for executing the computer program in the storage device to implement the steps of the method described in the first aspect.
According to the above technical solution, the low-resolution low-quality images used to train the super-resolution reconstruction network can be obtained by processing a large number of high-resolution high-quality images with the image degradation model. Compared with the related-art approach of manually adding Gaussian noise, Gaussian blur, and similar degradations to high-quality images, this yields a wider variety of low-quality images that better match real conditions, solves the problem that video restoration is ineffective when the training data is inconsistent with the actual data to be restored, and improves the video restoration effect. In addition, because the luminance features of the images are extracted for model training when training the image degradation model, color shift in the subsequently obtained low-quality images can be avoided, which improves the accuracy of the super-resolution reconstruction network trained on those low-quality images and further improves the video restoration effect.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale. In the drawings:
FIG. 1 is a flowchart of an image degradation processing method according to an exemplary embodiment of the present disclosure;
FIG. 2 is a schematic diagram of the structures of the image degradation module and the image degradation removal module in an image degradation model in an image degradation processing method according to an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the structures of the image degradation module and the image degradation removal module in an image degradation model in an image degradation processing method according to another exemplary embodiment of the present disclosure;
FIG. 4 is a schematic diagram of the structure of each neural unit in the image degradation module or the image degradation removal module shown in FIG. 3;
FIG. 5 is a schematic diagram of the structure of the first and second discriminators in the image degradation model in the case where the image degradation module or the image degradation removal module is as shown in FIG. 3;
FIG. 6 is a schematic diagram of the structure of an image degradation model in an image degradation processing method according to an exemplary embodiment of the present disclosure;
FIG. 7 is a schematic diagram comparing an output image of an image degradation processing method with an actual low-resolution low-quality image according to an exemplary embodiment of the present disclosure;
FIG. 8 is a block diagram of an image degradation processing apparatus according to an exemplary embodiment of the present disclosure;
FIG. 9 is a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below. It should be noted that the terms "first", "second", and the like in this disclosure are merely used to distinguish between different devices, modules, or units, and are not used to define an order of, or interdependence between, the functions performed by these devices, modules, or units. It should further be noted that references to "a" or "a plurality" in this disclosure are intended to be illustrative rather than limiting; those of ordinary skill in the art will appreciate that such references should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
As described in the background, the related art adopts a supervised restoration algorithm to perform video restoration. Training such an algorithm requires pairs of low-quality and high-quality images, from which the mapping between the two is learned and then applied to restore each frame of a video. In this process, however, the low-quality images are mainly produced by artificially applying degradation operations such as Gaussian noise or Gaussian blur to high-quality images. If the noise, blur, and other characteristics of the low-quality training images are inconsistent with those of the real images to be restored, the real images may not be restored effectively, which affects the video restoration result.
Through research, the inventors found that the related art also performs video restoration through unsupervised image restoration algorithms. Such a method uses an unsupervised algorithm model to construct image data pairs whose distribution is closer to that of real image data, so as to obtain a neural network model that generalizes better to real image data at test time. Although this method can, to a certain extent, align the data distributions of the training data and the test data, thereby reducing the problems caused by their inconsistency in supervised image restoration algorithms, it suffers from color shift: color features irrelevant to the image degradation characteristics may be learned in the feature learning stage, which affects the video restoration effect.
In view of the above, the present disclosure provides an image degradation processing method, an apparatus, a storage medium, and an electronic device, so as to solve the related-art problem that effective video restoration cannot be performed due to inconsistent data distributions between training data and test data, or due to color shift, thereby improving the video restoration effect.
FIG. 1 is a flowchart illustrating an image degradation processing method according to an exemplary embodiment of the present disclosure. Referring to FIG. 1, the image degradation processing method includes the following steps.
step 101, obtaining a high-resolution high-quality image.
Step 102, downsampling the high-resolution high-quality image to obtain a low-resolution high-quality image.
Step 103, inputting the low-resolution high-quality image into an image degradation model to obtain a low-resolution low-quality image. The image degradation model is trained on the luminance image features of a sample low-resolution high-quality image and the luminance image features of a sample low-resolution low-quality image, where the sample low-resolution high-quality image is obtained by downsampling a sample high-resolution high-quality image. The low-resolution low-quality image can be used to train a super-resolution reconstruction network, and the super-resolution reconstruction network is used to restore an input low-resolution low-quality video so as to obtain a high-resolution high-quality video.
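As a minimal sketch of steps 101 through 103, the pipeline below downsamples a high-resolution high-quality image and passes the result through a degradation model. The averaging downsampler and the function names are illustrative assumptions, and `degradation_model` merely stands in for a trained image degradation model.

```python
import numpy as np

def downsample(image, factor=2):
    """Step 102: reduce resolution by averaging factor x factor pixel
    neighbourhoods (the disclosure does not fix the resampling kernel)."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def make_low_quality_sample(hr_hq_image, degradation_model, factor=2):
    """Steps 101-103: given an acquired HR high-quality image, produce the
    LR low-quality image used to train the super-resolution network."""
    lr_hq = downsample(hr_hq_image, factor)   # step 102
    return degradation_model(lr_hq)           # step 103: learned degradation
```

A trained model would add realistic noise and blur in the last step; with an identity stand-in, the function simply returns the downsampled image.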
In this way, the low-resolution low-quality images used to train the super-resolution reconstruction network can be obtained by processing a large number of high-resolution high-quality images with the image degradation model. Compared with the related-art approach of manually adding Gaussian noise, Gaussian blur, and similar degradations to high-quality images, this yields a wider variety of low-quality images that better match real conditions, solves the problem that video restoration is ineffective when the training data is inconsistent with the actual data to be restored, and improves the video restoration effect. In addition, because the luminance features of the images are extracted for model training when training the image degradation model, color shift in the subsequently obtained low-quality images can be avoided, which improves the accuracy of the super-resolution reconstruction network trained on those low-quality images and further improves the video restoration effect.
To enable those skilled in the art to better understand the image degradation processing method provided in the embodiments of the present disclosure, each of the above steps is described in detail below.
First, a training process of the image degradation model is explained.
For example, a sample high-resolution high-quality image and a sample low-resolution low-quality image may be acquired first. It should be understood that resolution refers to the number of pixels contained per inch of an image. The higher the resolution, the more pixels each inch of the image contains; the lower the resolution, the fewer pixels it contains. In the embodiments of the present disclosure, each inch of the sample high-resolution high-quality image contains more pixels than each inch of the sample low-resolution low-quality image. Image quality refers to the detail content stored in the pixels of an image, such as color, shading, and contrast. The higher the image quality, the richer the detail contained in the pixels; the lower the image quality, the less detail they contain. In the embodiments of the present disclosure, the pixels of the sample high-resolution high-quality image contain richer detail than those of the sample low-resolution low-quality image.
In a possible manner, the sample high-resolution high-quality image and the sample low-resolution low-quality image may be obtained as follows. First, a plurality of high-resolution high-quality videos and a plurality of low-resolution low-quality videos are acquired. Then, for each high-resolution high-quality video, each frame image of the video is segmented into a plurality of first image blocks, and a target first image block whose pixel values satisfy a preset condition is selected from the plurality of first image blocks as a sample high-resolution high-quality image. Likewise, for each low-resolution low-quality video, each frame image of the video may be segmented into a plurality of second image blocks, and a target second image block whose pixel values satisfy the preset condition is selected from the plurality of second image blocks as a sample low-resolution low-quality image.
For example, a plurality of high-resolution high-quality movies and old movies (low-resolution low-quality movie data) may be acquired. Each frame image of the high-resolution high-quality movie data is then segmented to obtain a plurality of first image blocks; for example, 3×3 segmentation of each frame yields 9 first image blocks. Similarly, each frame image of the old movie data may be segmented to obtain a plurality of second image blocks, e.g., 3×3 segmentation yielding 9 second image blocks per frame.
After the plurality of first image blocks and the plurality of second image blocks are obtained, target image blocks whose pixel values satisfy a preset condition may be selected for training the image degradation model. For example, the preset condition may be that the mean of the pixel values in an image block is greater than or equal to a preset threshold, that the variance of the pixel values is greater than or equal to a preset threshold, or that the mean or variance of the low-frequency component of the pixels is greater than or equal to a preset threshold. The embodiments of the present disclosure do not limit the specific content of the preset condition, and the preset thresholds may likewise be set according to the actual situation.
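As an illustration of this screening step, the sketch below splits a grayscale frame into a 3×3 grid and keeps the blocks whose pixel-value variance clears a threshold. The function names and the threshold value are assumptions for illustration, not taken from the disclosure.

```python
import numpy as np

def split_into_blocks(frame, rows=3, cols=3):
    """Segment an (H, W) grayscale frame into rows x cols image blocks
    (remainder pixels at the borders are dropped for simplicity)."""
    h, w = frame.shape
    bh, bw = h // rows, w // cols
    return [frame[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(rows) for c in range(cols)]

def select_detailed_blocks(blocks, var_threshold=100.0):
    """Keep only detail-rich blocks, using the variance-based preset
    condition: variance of pixel values >= a preset threshold."""
    return [b for b in blocks if float(np.var(b)) >= var_threshold]
```

A flat block (e.g. clear sky) has near-zero variance and is discarded, while a block containing edges and texture passes the threshold.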
Screening the first and second image blocks with the preset condition yields image blocks with rich detail, which increases the content richness of the sample high-resolution high-quality images and the sample low-resolution low-quality images, improves the robustness of the image degradation model, and makes the trained image degradation model output low-resolution low-quality images that better match real conditions. "Rich detail" may be understood as large differences among the pixel values in an image block. For example, an image block containing only sky or grass has small pixel-value differences, whereas an image block containing complex buildings, people, and the like has large pixel-value differences and is therefore a detail-rich image block.
After the sample high-resolution high-quality image and the sample low-resolution low-quality image are obtained as above, the image degradation model can be trained with them. For example, the sample high-resolution high-quality image may be downsampled to obtain the sample low-resolution high-quality image; the luminance image features of the sample low-resolution high-quality image and of the sample low-resolution low-quality image are then extracted; finally, the image degradation model is trained on the extracted luminance image features.
It should be appreciated that downsampling is a multi-rate digital signal processing technique, i.e., a process of reducing the sampling rate of a signal, typically used to reduce the data transmission rate or data size. In the embodiments of the present disclosure, downsampling the sample high-resolution high-quality image reduces the number of pixels per inch of the image, yielding the sample low-resolution high-quality image.
After the sample low-resolution high-quality image is obtained, the luminance image features of the sample low-resolution high-quality image and of the sample low-resolution low-quality image can be extracted, so that the image degradation model is trained on the extracted luminance image features.
It should be understood that the video restoration method provided by the embodiments of the present disclosure may be applied to the restoration of low-quality old movies. Since most low-quality old movies are black and white and exhibit image characteristics such as noise and blur, the image degradation model needs to learn those noise and blur characteristics but does not need to learn the color features of the images. In this scenario, to prevent the color features of the sample high-resolution high-quality image from influencing the feature learning of the image degradation model during training, i.e., to solve the color shift problem, the luminance image features of the sample low-resolution high-quality image and of the sample low-resolution low-quality image can be extracted for model training.
In a possible manner, the luminance image features of the sample low-resolution high-quality image and of the sample low-resolution low-quality image may be obtained as follows: convert the sample low-resolution high-quality image from the RGB color space to the YCbCr color space to obtain a first target sample image, convert the sample low-resolution low-quality image from the RGB color space to the YCbCr color space to obtain a second target sample image, and extract the luminance image features corresponding to the Y channel of the first target sample image and the luminance image features corresponding to the Y channel of the second target sample image.
Illustratively, the RGB color space superimposes the three primary colors R (red), G (green), and B (blue) to different extents to produce a rich and broad range of colors. In the YCbCr color space, Y is the luminance component of the color, and Cb and Cr are the blue and red chroma offsets, respectively. To solve the color shift problem, in the embodiments of the present disclosure an image may be converted from the RGB color space to the YCbCr color space, and the image degradation model is then trained on the Y channel, i.e., the luminance image features of the image are extracted for training.
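A minimal sketch of the Y-channel extraction, assuming the full-range ITU-R BT.601 conversion (the disclosure does not specify which YCbCr variant is used):

```python
import numpy as np

def luminance_channel(rgb):
    """Map an (H, W, 3) RGB array to the Y (luminance) component of the
    YCbCr color space with BT.601 weights; Cb and Cr are discarded since
    only luminance image features are used to train the degradation model."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Because the three weights sum to 1, a neutral gray pixel keeps its value, while the color information carried by Cb and Cr never reaches the model.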
In a possible manner, the image degradation model may include an image degradation module, an image degradation removal module, a first discriminator, and a second discriminator. The image degradation module performs image degradation processing on the luminance image features of an input sample low-resolution high-quality image and outputs the luminance image features of the corresponding simulated low-resolution low-quality image. The image degradation removal module performs image restoration processing on the luminance image features of an input sample low-resolution low-quality image and outputs the luminance image features of the corresponding simulated low-resolution high-quality image. The first discriminator is used for model training based on the luminance image features of the input simulated low-resolution high-quality image and those of the sample low-resolution high-quality image. The second discriminator is used for model training based on the luminance image features of the input simulated low-resolution low-quality image and those of the sample low-resolution low-quality image.
For example, the image degradation module and the image degradation removal module may employ a neural network structure comprising a plurality of residual modules (ResBlock), as shown in FIG. 2. In the figure, 3×3 conv denotes a convolution layer with a convolution kernel size of 3; ReLU (Rectified Linear Unit), also called a rectified linear unit, denotes the activation function of the network; and ⊕ denotes a pixel-by-pixel addition operation.
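A toy, single-channel sketch of one such residual module: conv, ReLU, conv, then the pixel-by-pixel addition of the input. Fixed 3×3 kernels with zero padding stand in for learned multi-channel convolution layers; every name and value here is illustrative.

```python
import numpy as np

def conv3x3(x, kernel):
    """'Same'-size 3x3 convolution of a 2-D feature map (zero padding)."""
    padded = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def res_block(x, k1, k2):
    """Residual module: conv -> ReLU -> conv, then element-wise addition
    of the input feature map (the skip connection in FIG. 2)."""
    y = np.maximum(conv3x3(x, k1), 0.0)  # 3x3 conv + ReLU
    y = conv3x3(y, k2)                   # second 3x3 conv
    return x + y                         # pixel-by-pixel addition
```

With zero kernels the residual path vanishes and the block is the identity; with an identity kernel (a single 1 at the centre) and non-negative input it returns 2x, i.e. the skip path plus an unchanged residual path.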
Alternatively, to better utilize the contextual information of the image and to enhance the expressive capability of the neural network, the image degradation module and the image degradation removal module may employ the neural network structure shown in FIG. 3. Taking layer c4b as an example, it is formed by combining layer c7 and layer c4a. Layer c7 is a deconvolution layer with a convolution stride of 2: its input feature resolution is 4×4, and the feature size after upsampling is 8×8. Layer c4a has the same size as layer c4 and can be regarded as a copy of it; since its size is twice that of layer c7 per side, it exactly matches the upsampled c7 features, so the two can be added directly to obtain layer c4b. The data processing of the other layers is similar to that of layer c4b and is not repeated here. The structure of each neural unit in this network may be as shown in FIG. 4, where concat denotes a concatenation operation and 1×1 conv denotes a convolution layer with a convolution kernel size of 1.
By way of example, when the image degradation module and the image degradation removal module adopt the neural network structure shown in FIG. 3, the first discriminator and the second discriminator may be block (patch) discriminators; specifically, each may be a fully convolutional network of 7 convolution layers, structured as shown in FIG. 5. In the figure, 4×4 conv denotes a convolution layer with a convolution kernel size of 4; LeakyReLU, also called a leaky rectified linear unit and similar to the ordinary ReLU, denotes the activation function; SpectralNorm denotes spectral normalization; and BatchNorm denotes batch normalization.
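For reference, the LeakyReLU activation used in these discriminators can be sketched as follows; the negative slope of 0.2 is an assumed value, as the disclosure does not state it:

```python
import numpy as np

def leaky_relu(x, negative_slope=0.2):
    """LeakyReLU: identity for non-negative inputs; negative inputs are
    scaled by a small slope instead of being zeroed as in plain ReLU,
    which keeps gradients flowing through the discriminator."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, x, negative_slope * x)
```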
In one embodiment, a block diagram of the image degradation model may be as shown in FIG. 6. Referring to FIG. 6, after the luminance image feature Z of the sample low-resolution high-quality image undergoes the image degradation processing of the image degradation module G, the luminance image feature X' of the corresponding simulated low-resolution low-quality image is obtained. After the luminance image feature X of the sample low-resolution low-quality image undergoes the image restoration processing of the image degradation removal module F, the luminance image feature Z' of the corresponding simulated low-resolution high-quality image is obtained. The inputs of the first discriminator D_Z are the luminance image feature Z' of the simulated low-resolution high-quality image and the luminance image feature Z of the sample low-resolution high-quality image; it judges the probability that Z' conforms to Z. The inputs of the second discriminator D_X are the luminance image feature X' of the simulated low-resolution low-quality image and the luminance image feature X of the sample low-resolution low-quality image; it judges the probability that X' conforms to X.
In the training of the image degradation model shown in fig. 6, the output probability of the first discriminator D_Z may be preset to a first probability value, and the output probability of the second discriminator D_X to a second probability value. Then, by adjusting the parameters of the image degradation module G and the image degradation removal module F, the actual probability output by the first discriminator D_Z can be made to reach the first probability value, and the actual probability output by the second discriminator D_X to reach the second probability value, thereby realizing the training process of the image degradation model.
In a possible manner, the training process of the image degradation model may be as follows: low-frequency signals are extracted from the luminance image features of the simulated low-resolution high-quality image, the luminance image features of the sample low-resolution high-quality image, the luminance image features of the simulated low-resolution low-quality image, and the luminance image features of the sample low-resolution low-quality image to obtain a target data set, and the image degradation model is trained according to the image features included in the target data set.
It should be appreciated that in embodiments of the present disclosure, in order to prevent the image degradation module and the image degradation removal module from changing the content of the input image, the low-frequency signal may be extracted from the luminance image features of the image, and model training may then be performed according to the extracted low-frequency signal, so as to keep the low-frequency content of the images output by the image degradation module and the image degradation removal module consistent.
In a possible manner, the low-frequency signal may be extracted by a Gaussian low-pass filter or by a Haar wavelet transform. The Gaussian low-pass filter strikes a compromise between excessive blurring of image features (i.e., over-smoothing) and excessive abrupt variation caused by noise and fine textures in the image (i.e., under-smoothing), so a better model training effect can be obtained by extracting the low-frequency signal through the Gaussian filter when training the image degradation model. In image processing, the Haar wavelet transform can separate the high-frequency and low-frequency information of an image; in the embodiment of the present disclosure, the Haar wavelet transform can extract the low-frequency signal from the luminance image features of the image, so that model training is performed according to the extracted low-frequency signal and the low-frequency content of the images output by the image degradation module and the image degradation removal module stays consistent.
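Both extraction options can be sketched in a few lines of NumPy. The filter size, sigma, and single-level wavelet depth are illustrative assumptions (the disclosure does not fix them), and the helper names are hypothetical.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """1-D Gaussian kernel, normalized to sum to 1 (separable filtering)."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_lowpass(y, size=5, sigma=1.0):
    """Low-frequency signal ω_L * Y via separable Gaussian filtering
    with reflect padding, so the output keeps the input shape."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    yp = np.pad(y, pad, mode="reflect")
    # filter rows, then columns (the 2-D Gaussian is separable)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, yp)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def haar_lowpass(y):
    """LL sub-band of a one-level Haar wavelet transform: each output
    pixel is the average of a 2x2 block, i.e. the low-frequency content."""
    h, w = y.shape
    y = y[: h - h % 2, : w - w % 2]
    return 0.25 * (y[0::2, 0::2] + y[0::2, 1::2] + y[1::2, 0::2] + y[1::2, 1::2])
```

Note the design difference: the Gaussian variant preserves resolution, while the Haar LL sub-band halves it, which is why the two are alternatives rather than interchangeable drop-ins.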
After extracting the low frequency signal to obtain the target data set, the image degradation model may be trained according to image features included in the target data set. In a possible way, the loss function may be calculated first as follows:
L_cont = λ1·||G(Z)_l - Z_l||_1 + λ2·||F(X)_l - X_l||_1    (1)
wherein L_cont represents the loss function, λ1 and λ2 represent preset weight values, G(Z)_l represents the low-frequency signal extracted from the luminance image features of the simulated low-resolution low-quality image, Z_l represents the low-frequency signal extracted from the luminance image features of the sample low-resolution high-quality image, F(X)_l represents the low-frequency signal extracted from the luminance image features of the simulated low-resolution high-quality image, and X_l represents the low-frequency signal extracted from the luminance image features of the sample low-resolution low-quality image.
Parameters of the image degradation model may then be adjusted according to the loss function described above.
For example, referring to the image degradation model shown in FIG. 6, if the low-frequency signal is extracted by a Gaussian low-pass filter ω_L, then G(Z)_l in the loss function can be expressed as ω_L * X', Z_l as ω_L * Z, F(X)_l as ω_L * Z', and X_l as ω_L * X. The loss function may be understood as a content consistency loss function of the image degradation model, which characterizes the difference between the result processed by the image degradation module and the initial input to the image degradation module, and the difference between the result processed by the image degradation removal module and the initial input to the image degradation removal module, so as to prevent the image degradation module and the image degradation removal module from changing the content of the input image.
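The content consistency loss of equation (1) is straightforward to sketch in NumPy. Whether the L1 norm is summed or averaged over pixels, and the values of λ1 and λ2, are assumptions here, since the disclosure only calls them preset weight values.

```python
import numpy as np

def content_consistency_loss(gz_l, z_l, fx_l, x_l, lam1=1.0, lam2=1.0):
    """Eq. (1): L_cont = λ1·||G(Z)_l - Z_l||_1 + λ2·||F(X)_l - X_l||_1.
    Inputs are the low-frequency maps extracted from the luminance
    features; the L1 norm is taken here as a sum of absolute differences."""
    return lam1 * np.abs(gz_l - z_l).sum() + lam2 * np.abs(fx_l - x_l).sum()
```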
In addition, referring to the image degradation model shown in fig. 6, the loss function of the image degradation model may further include a cyclic loss function and a domain alignment loss function, which are calculated in a similar manner to the related art, and will be briefly described below.
The cyclic loss function characterizes the difference between the originally input image and the result obtained by inputting the image into the image degradation module and then feeding the output of the image degradation module into the image degradation removal module, as well as the difference between the originally input image and the result obtained by inputting the image into the image degradation removal module and then feeding the output of the image degradation removal module into the image degradation module. For example, referring to the image degradation model shown in fig. 6, the calculation formula of the cyclic loss function may be as follows:
L_cyc = E_Z[||F(G(Z)) - Z||_1] + E_X[||G(F(X)) - X||_1]    (2)
wherein L_cyc represents the cyclic loss function, E_Z and E_X represent expectation operators whose calculation is similar to that of the related art, Z represents the sample low-resolution high-quality image, G(Z) represents the output obtained by inputting the sample low-resolution high-quality image Z into the image degradation module, F(G(Z)) represents the output obtained by inputting G(Z) into the image degradation removal module, X represents the sample low-resolution low-quality image, F(X) represents the output obtained by inputting the sample low-resolution low-quality image X into the image degradation removal module, and G(F(X)) represents the output obtained by inputting F(X) into the image degradation module.
The domain alignment loss function is used to characterize the difference between the simulated low resolution low quality image obtained by the image degradation module and the actual low resolution low quality image acquired, and the difference between the simulated low resolution high quality image obtained by the image degradation removal module and the actual low resolution high quality image acquired. It will be appreciated that the parameters of the first and second discriminators may be adjusted by a domain alignment loss function. For example, referring to the image degradation model shown in fig. 6, the calculation formula of the domain alignment loss function is as follows:
L(G, D_X) = E_X[log D_X(X)] + E_Z[log(1 - D_X(G(Z)))]    (3)

L(F, D_Z) = E_Z[log D_Z(Z)] + E_X[log(1 - D_Z(F(X)))]    (4)
wherein L(G, D_X) represents the domain alignment loss function of the D_X image domain, D_X(X) represents the result obtained after inputting the sample low-resolution low-quality image X into the second discriminator D_X, D_X(G(Z)) represents the result obtained after inputting the simulated low-resolution low-quality image into the second discriminator D_X, L(F, D_Z) represents the domain alignment loss function of the D_Z image domain, D_Z(Z) represents the result obtained after inputting the sample low-resolution high-quality image Z into the first discriminator D_Z, and D_Z(F(X)) represents the result obtained after inputting the simulated low-resolution high-quality image into the first discriminator D_Z.
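Equations (3) and (4) share the standard GAN form E[log D(real)] + E[log(1 - D(fake))]. A minimal sketch, assuming the discriminator outputs probabilities strictly inside (0, 1) and that the expectations are mini-batch averages:

```python
import numpy as np

def domain_alignment_loss(d_real, d_fake):
    """Shared form of eqs. (3)/(4): E[log D(real)] + E[log(1 - D(fake))].
    d_real: discriminator outputs on real samples (e.g. D_X(X)).
    d_fake: discriminator outputs on simulated samples (e.g. D_X(G(Z)))."""
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
```

The discriminator is trained to maximize this quantity while the generator (G or F) is trained to minimize its second term, which is what drives the simulated images toward the real image domain.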
By the above method, the parameters of the image degradation model can be adjusted according to the cyclic loss function, the domain alignment loss function and the content consistency loss function, thereby realizing the training process of the image degradation model. In the subsequent process, a low-resolution high-quality image can be input into the image degradation model to obtain a low-resolution low-quality image. Compared with the related-art approach of manually adding factors such as Gaussian noise and Gaussian blur to a high-quality image to obtain a low-quality image, this method can obtain various low-quality images that are more consistent with actual conditions, which solves the problem that video restoration cannot be performed effectively because the training data is inconsistent with the actual data to be restored, and improves the video restoration effect. In addition, since luminance features of the image are extracted for model training in the process of training the image degradation model, the problem of color shift in the subsequently obtained low-quality images can be avoided, which improves the accuracy of the super-resolution reconstruction network trained with these low-quality images and further improves the video restoration effect.
For example, referring to fig. 7, from left to right are, in order: a true high-definition high-quality motion picture image, a low-definition old motion picture image generated by the image degradation model, and a true low-definition old motion picture image. As can be seen from fig. 7, the image degradation model in the video restoration method provided by the embodiment of the present disclosure can simulate the blurring and compression of old movie data, thereby improving the subsequent video restoration effect.
For example, a low-resolution low-quality image obtained by processing a high-resolution high-quality image through the image degradation model can be used to train a super-resolution reconstruction network, thereby achieving the purpose of video restoration. For example, the high-resolution high-quality image P1 may be downsampled to obtain a low-resolution high-quality image P2, and P2 may be input into the trained image degradation model to obtain a low-resolution low-quality image P3. The super-resolution reconstruction network is then trained with the paired training data {P3, P1}, so that each frame image of the target video to be repaired can be repaired according to the trained super-resolution reconstruction network to obtain the repaired video.
Alternatively, in order to increase the number of training samples, operations such as compression, blurring, and noise addition may be manually applied to the high-resolution high-quality image P1 in the process of constructing the training data to obtain a low-resolution low-quality image P4, and the paired training data {P4, P1} may then be constructed. In this case, the super-resolution reconstruction network may be trained with both the paired training data {P3, P1} and the paired training data {P4, P1}. It should be understood that the specific structure of the super-resolution reconstruction network and the process of training with paired training data are similar to those of the related art, and will not be described herein.
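The pair-construction procedure described above can be sketched as follows. `degradation_model` stands in for the trained image degradation model (any callable mapping an LR high-quality image to an LR low-quality image), and box-average downsampling plus additive Gaussian noise are illustrative stand-ins for the unspecified downsampling and manual degradation operations.

```python
import numpy as np

def build_training_pairs(p1, degradation_model, scale=4, rng=None):
    """Build {P3, P1} and {P4, P1} pairs from one HR high-quality image P1.
    p1: 2-D luminance image with values in [0, 1]."""
    rng = rng or np.random.default_rng(0)
    h, w = p1.shape
    # P1 -> P2: downsample by box averaging (a stand-in for the
    # downsampling operation, which the disclosure does not specify)
    p2 = p1[: h - h % scale, : w - w % scale]
    p2 = p2.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    p3 = degradation_model(p2)                               # learned degradation
    p4 = np.clip(p2 + rng.normal(0, 0.05, p2.shape), 0, 1)   # manual degradation
    return [(p3, p1), (p4, p1)]                              # paired training data
```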
Based on the same inventive concept, the embodiments of the present disclosure also provide an image degradation processing apparatus. Referring to fig. 8, the image degradation processing apparatus 800 may include:
an acquisition module 801, configured to acquire a high-resolution high-quality image;
a first processing module 802, configured to downsample the high-resolution high-quality image to obtain a low-resolution high-quality image;
a second processing module 803, configured to input the low resolution high quality image into an image degradation model to obtain a low resolution low quality image, where the image degradation model is obtained by training according to a luminance image feature of a sample low resolution high quality image and a luminance image feature of a sample low resolution low quality image, and the sample low resolution high quality image is obtained by downsampling the sample high resolution high quality image;
the low-resolution low-quality image is used for training a super-resolution reconstruction network, and the super-resolution reconstruction network is used for repairing the input low-resolution low-quality video so as to obtain a high-resolution high-quality video.
Optionally, the image degradation model includes an image degradation module, an image degradation removal module, a first discriminator, and a second discriminator;
the image degradation module is used for carrying out image degradation processing according to the input brightness image characteristics of the sample low-resolution high-quality image and outputting the brightness image characteristics of the simulated low-resolution low-quality image corresponding to the sample low-resolution high-quality image;

the image degradation removal module is used for carrying out image restoration processing according to the input brightness image characteristics of the sample low-resolution low-quality image and outputting the brightness image characteristics of the simulated low-resolution high-quality image corresponding to the sample low-resolution low-quality image;

the first discriminator is used for performing model training according to the input brightness image characteristics of the simulated low-resolution high-quality image and the brightness image characteristics of the sample low-resolution high-quality image;

the second discriminator is used for performing model training according to the input brightness image characteristics of the simulated low-resolution low-quality image and the brightness image characteristics of the sample low-resolution low-quality image.
Optionally, the apparatus 800 further comprises the following modules for training an image degradation model:
the extraction module is used for extracting low-frequency signals from the brightness image characteristics of the simulated low-resolution high-quality image, the brightness image characteristics of the sample low-resolution high-quality image, the brightness image characteristics of the simulated low-resolution low-quality image and the brightness image characteristics of the sample low-resolution low-quality image, so as to obtain a target data set;
and the training module is used for training the image degradation model according to the image characteristics included in the target data set.
Optionally, the extracting module is configured to:
the low frequency signal is extracted by a gaussian low pass filter or by haar wavelet transform.
Optionally, the training module is configured to:
the loss function is calculated as follows:
L_cont = λ1·||G(Z)_l - Z_l||_1 + λ2·||F(X)_l - X_l||_1    (1)
wherein L_cont represents the loss function, λ1 and λ2 represent preset weight values, G(Z)_l represents the low-frequency signal extracted from the luminance image features of the simulated low-resolution low-quality image, Z_l represents the low-frequency signal extracted from the luminance image features of the sample low-resolution high-quality image, F(X)_l represents the low-frequency signal extracted from the luminance image features of the simulated low-resolution high-quality image, and X_l represents the low-frequency signal extracted from the luminance image features of the sample low-resolution low-quality image;
and adjusting parameters of the image degradation model according to the loss function.
Optionally, the apparatus 800 further comprises means for extracting luminance image features of a sample low resolution high quality image and luminance image features of the sample low resolution low quality image as follows:
the first extraction module is used for converting the sample low-resolution high-quality image from an RGB color space to a YCbCr color space to obtain a first target sample image, and converting the sample low-resolution low-quality image from the RGB color space to the YCbCr color space to obtain a second target sample image;
and the second extraction module is used for extracting the brightness image characteristics corresponding to the Y channel of the first target sample image and the brightness image characteristics corresponding to the Y channel of the second target sample image.
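The color-space conversion and Y-channel extraction performed by these two modules can be sketched as follows. The ITU-R BT.601 luma coefficients are an assumption, since the disclosure does not fix the exact RGB-to-YCbCr conversion coefficients.

```python
import numpy as np

def y_channel(rgb):
    """Extract the Y (luma) channel of the YCbCr color space from an
    RGB image of shape (H, W, 3) with values in [0, 1], using the
    BT.601 coefficients commonly used for this conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Training on this single channel is what lets the model learn degradation without touching chrominance, which is the stated reason color shift is avoided in the generated low-quality images.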
Optionally, the apparatus 800 further comprises means for acquiring the sample high resolution high quality image and the sample low resolution low quality image as follows:
the acquisition module is used for acquiring a plurality of high-resolution high-quality videos and a plurality of low-resolution low-quality videos;
the first selection module is used for carrying out image segmentation on each frame of image corresponding to each high-resolution high-quality video to obtain a plurality of first image blocks, and selecting a target first image block with pixel values meeting preset conditions from the plurality of first image blocks as a sample high-resolution high-quality image;
the second selection module is used for carrying out image segmentation on each frame image corresponding to each low-resolution low-quality video to obtain a plurality of second image blocks, and selecting a target second image block with pixel values meeting the preset condition from the plurality of second image blocks as a sample low-resolution low-quality image.
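The block-splitting-and-selection step used by both selection modules can be sketched as follows. The disclosure leaves the "preset condition" on pixel values unspecified, so a minimum standard deviation (skipping flat, textureless tiles) is used here purely as an illustrative assumption.

```python
import numpy as np

def select_sample_patches(frame, block=64, min_std=0.05):
    """Split a frame into block x block tiles and keep the tiles whose
    pixel values meet the (assumed) condition of sufficient variation.
    frame: 2-D luminance image with values in [0, 1]."""
    h, w = frame.shape
    patches = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            tile = frame[i : i + block, j : j + block]
            if tile.std() >= min_std:  # skip flat regions
                patches.append(tile)
    return patches
```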
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method, and will not be described in detail herein.
Based on the same inventive concept, the embodiments of the present disclosure also provide a computer-readable medium having stored thereon a computer program which, when executed by a processing apparatus, implements the steps of any of the above-described image degradation processing methods.
Based on the same inventive concept, the embodiments of the present disclosure further provide an electronic device, including:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to implement the steps of any of the image degradation processing methods described above.
Referring now to fig. 9, a schematic diagram of an electronic device 900 suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 9 is merely an example, and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 9, the electronic device 900 may include a processing means (e.g., a central processor, a graphics processor, etc.) 901, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage means 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data necessary for the operation of the electronic device 900 are also stored. The processing device 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
In general, the following devices may be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 907 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 908 including, for example, magnetic tape, hard disk, etc.; and a communication device 909. The communication means 909 may allow the electronic device 900 to communicate wirelessly or by wire with other devices to exchange data. While fig. 9 shows an electronic device 900 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 909, or installed from the storage device 908, or installed from the ROM 902. When the computer program is executed by the processing device 901, the above-described functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, communications may be made using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol ), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a high-resolution high-quality image; downsampling the high-resolution high-quality image to obtain a low-resolution high-quality image; inputting the low-resolution high-quality image into an image degradation model to obtain a low-resolution low-quality image, wherein the image degradation model is obtained by training according to brightness image features of a sample low-resolution high-quality image and brightness image features of the sample low-resolution low-quality image, and the sample low-resolution high-quality image is obtained by downsampling the sample high-resolution high-quality image; the low-resolution low-quality image is used for training a super-resolution reconstruction network, and the super-resolution reconstruction network is used for repairing the input low-resolution low-quality video so as to obtain a high-resolution high-quality video.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. In some cases, the name of a module does not constitute a limitation of the module itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, example one provides an image degradation processing method, including:
acquiring a high-resolution high-quality image;
downsampling the high-resolution high-quality image to obtain a low-resolution high-quality image;
inputting the low-resolution high-quality image into an image degradation model to obtain a low-resolution low-quality image, wherein the image degradation model is obtained by training according to brightness image features of a sample low-resolution high-quality image and brightness image features of the sample low-resolution low-quality image, and the sample low-resolution high-quality image is obtained by downsampling the sample high-resolution high-quality image;
the low-resolution low-quality image is used for training a super-resolution reconstruction network, and the super-resolution reconstruction network is used for repairing the input low-resolution low-quality video so as to obtain a high-resolution high-quality video.
In accordance with one or more embodiments of the present disclosure, example two provides the method of example one, the image degradation model comprising an image degradation module, an image degradation removal module, a first discriminator, and a second discriminator;
the image degradation module is used for carrying out image degradation processing according to the input brightness image characteristics of the sample low-resolution high-quality image and outputting the brightness image characteristics of the simulated low-resolution low-quality image corresponding to the sample low-resolution high-quality image;

the image degradation removal module is used for carrying out image restoration processing according to the input brightness image characteristics of the sample low-resolution low-quality image and outputting the brightness image characteristics of the simulated low-resolution high-quality image corresponding to the sample low-resolution low-quality image;

the first discriminator is used for performing model training according to the input brightness image characteristics of the simulated low-resolution high-quality image and the brightness image characteristics of the sample low-resolution high-quality image;

the second discriminator is used for performing model training according to the input brightness image characteristics of the simulated low-resolution low-quality image and the brightness image characteristics of the sample low-resolution low-quality image.
In accordance with one or more embodiments of the present disclosure, example three provides a method of example two, the training process of the image degradation model comprising:
extracting low-frequency signals from the brightness image characteristics of the simulated low-resolution high-quality image, the brightness image characteristics of the sample low-resolution high-quality image, the brightness image characteristics of the simulated low-resolution low-quality image and the brightness image characteristics of the sample low-resolution low-quality image to obtain a target data set;
And training the image degradation model according to the image characteristics included in the target data set.
In accordance with one or more embodiments of the present disclosure, example four provides the method of example three, the extracting the low frequency signal comprising:
the low-frequency signal is extracted by a Gaussian low-pass filter or by a Haar wavelet transform.
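Both extraction options can be sketched as follows. The kernel size and σ of the Gaussian filter are illustrative assumptions, as the disclosure does not fix them; the Haar variant keeps only the LL (low-frequency) sub-band of a one-level transform, which the 2x2 average below equals up to a constant factor.

```python
import numpy as np

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """1-D Gaussian kernel normalized to sum to 1 (size/sigma are placeholders)."""
    ax = np.arange(size) - (size - 1) / 2.0
    k = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_lowpass(img: np.ndarray, size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Low-frequency signal via a separable Gaussian low-pass filter."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    # Horizontal then vertical pass; 'valid' convolution restores the shape.
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def haar_lowpass(img: np.ndarray) -> np.ndarray:
    """LL sub-band of a one-level Haar transform (half the input resolution,
    up to the Haar normalization constant)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
```

A constant scale factor on the low-frequency maps does not change the behavior of the L1 content loss below, provided the same extractor is applied to all four feature maps.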
According to one or more embodiments of the present disclosure, example five provides the method of example three, wherein training the image degradation model according to the image features included in the target data set comprises:
the loss function is calculated as follows:
L_cont = λ1·||G(Z)_l - Z_l||_1 + λ2·||F(X)_l - X_l||_1
wherein L_cont denotes the loss function, λ1 and λ2 denote preset weight values, G(Z)_l denotes the low-frequency signal extracted from the luminance image features of the simulated low-resolution low-quality image, Z_l denotes the low-frequency signal extracted from the luminance image features of the sample low-resolution high-quality image, F(X)_l denotes the low-frequency signal extracted from the luminance image features of the simulated low-resolution high-quality image, and X_l denotes the low-frequency signal extracted from the luminance image features of the sample low-resolution low-quality image;
and adjusting parameters of the image degradation model according to the loss function.
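Given the four low-frequency maps, the loss above reduces to two weighted L1 terms; a minimal sketch (the λ values here are placeholders, not values from the patent):

```python
import numpy as np

def content_loss(gz_l, z_l, fx_l, x_l, lam1=1.0, lam2=1.0):
    """L_cont = lam1*||G(Z)_l - Z_l||_1 + lam2*||F(X)_l - X_l||_1

    gz_l: low-frequency map of the simulated low-quality image G(Z)
    z_l : low-frequency map of the sample high-quality image Z
    fx_l: low-frequency map of the simulated high-quality image F(X)
    x_l : low-frequency map of the sample low-quality image X
    lam1, lam2: preset weights (placeholder defaults).
    """
    return (lam1 * np.abs(gz_l - z_l).sum() +
            lam2 * np.abs(fx_l - x_l).sum())
```

The loss pairs each simulated image with the image it was generated from, so degradation (and restoration) may alter high-frequency detail freely while the low-frequency content is forced to stay put.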
According to one or more embodiments of the present disclosure, example six provides the method of any one of examples one to five, the luminance image features of the sample low-resolution high-quality image and the luminance image features of the sample low-resolution low-quality image being obtained by:
converting the sample low-resolution high-quality image from an RGB color space to a YCbCr color space to obtain a first target sample image, and converting the sample low-resolution low-quality image from the RGB color space to the YCbCr color space to obtain a second target sample image;
and extracting the brightness image characteristics corresponding to the first target sample image Y channel and the brightness image characteristics corresponding to the second target sample image Y channel.
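A minimal sketch of the Y-channel extraction. The disclosure only states an RGB-to-YCbCr conversion followed by taking the Y channel; the full-range BT.601 luma weights used here are an assumption, not values from the patent.

```python
import numpy as np

def y_channel(rgb: np.ndarray) -> np.ndarray:
    """Y (luminance) of an H x W x 3 RGB image.

    Assumes full-range ITU-R BT.601 luma weights; a production pipeline
    might instead use the limited-range or BT.709 matrix.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b
```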
According to one or more embodiments of the present disclosure, example seven provides the method of any one of examples one to five, the sample high-resolution high-quality image and the sample low-resolution low-quality image being obtained by:
collecting a plurality of high-resolution high-quality videos and a plurality of low-resolution low-quality videos;
for each high-resolution high-quality video, image segmentation is carried out on each frame image corresponding to the high-resolution high-quality video to obtain a plurality of first image blocks, and a target first image block with pixel values meeting preset conditions is selected from the plurality of first image blocks to serve as a sample high-resolution high-quality image;
and for each low-resolution low-quality video, image segmentation is carried out on each frame image corresponding to the low-resolution low-quality video to obtain a plurality of second image blocks, and a target second image block with pixel values meeting the preset condition is selected from the plurality of second image blocks as a sample low-resolution low-quality image.
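The patch-selection step might be sketched as follows, using the variance form of the preset condition described in the claims; the patch size and threshold are illustrative values only.

```python
import numpy as np

def select_patches(frame: np.ndarray, patch: int = 64, var_thresh: float = 100.0):
    """Split a frame into non-overlapping patch x patch blocks and keep those
    whose pixel-value variance is >= var_thresh (the variance form of the
    'preset condition'). Patch size and threshold are placeholder values."""
    h, w = frame.shape
    kept = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = frame[y:y + patch, x:x + patch]
            if block.var() >= var_thresh:  # flat blocks carry little texture
                kept.append(block)
    return kept
```

Filtering on variance discards flat sky/background blocks, so the training pairs are dominated by textured content where degradation is actually visible.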
According to one or more embodiments of the present disclosure, example eight provides an image degradation processing apparatus, comprising:
the acquisition module is used for acquiring the high-resolution high-quality image;
the first processing module is used for downsampling the high-resolution high-quality image to obtain a low-resolution high-quality image;
the second processing module is used for inputting the low-resolution high-quality image into an image degradation model to obtain a low-resolution low-quality image, the image degradation model is obtained by training according to brightness image features of a sample low-resolution high-quality image and brightness image features of the sample low-resolution low-quality image, and the sample low-resolution high-quality image is obtained by downsampling the sample high-resolution high-quality image;
the low-resolution low-quality image is used for training a super-resolution reconstruction network, and the super-resolution reconstruction network is used for repairing the input low-resolution low-quality video so as to obtain a high-resolution high-quality video.
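The first processing module's downsampling step could be as simple as area averaging; the disclosure does not fix the interpolation method, so the average pooling below is an assumption standing in for whatever resampling a production pipeline would use.

```python
import numpy as np

def downsample(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Downsample a 2-D image by area averaging (average pooling).

    A stand-in for the unspecified resampling method; bicubic or area
    interpolation would be common alternatives in practice.
    """
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    img = img[:h, :w]  # crop so dimensions divide evenly
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```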
According to one or more embodiments of the present disclosure, example nine provides the apparatus of example eight, the image degradation model comprising an image degradation module, an image degradation removal module, a first arbiter, and a second arbiter;
the image degradation module is used for carrying out image degradation processing according to the input brightness image characteristics of the sample low-resolution high-quality image and outputting the brightness image characteristics of the simulated low-resolution low-quality image corresponding to the sample low-resolution high-quality image;
the image degradation removal module is used for carrying out image restoration processing according to the input brightness image characteristics of the sample low-resolution low-quality image and outputting the brightness image characteristics of the simulated low-resolution high-quality image corresponding to the sample low-resolution low-quality image;
the first discriminator is used for performing model training according to the input brightness image characteristics of the simulated low-resolution high-quality image and the brightness image characteristics of the sample low-resolution high-quality image;
the second discriminator is used for performing model training according to the input brightness image characteristics of the simulated low-resolution low-quality image and the brightness image characteristics of the sample low-resolution low-quality image.
In accordance with one or more embodiments of the present disclosure, example ten provides the apparatus of example nine, further comprising the following means for training an image degradation model:
the extraction module is used for extracting low-frequency signals from the brightness image characteristics of the simulated low-resolution high-quality image, the brightness image characteristics of the sample low-resolution high-quality image, the brightness image characteristics of the simulated low-resolution low-quality image, and the brightness image characteristics of the sample low-resolution low-quality image to obtain a target data set;
and the training module is used for training the image degradation model according to the image characteristics included in the target data set.
According to one or more embodiments of the present disclosure, example eleven provides the apparatus of example ten, the extraction module to:
the low-frequency signal is extracted by a Gaussian low-pass filter or by a Haar wavelet transform.
In accordance with one or more embodiments of the present disclosure, example twelve provides an apparatus of example ten, the training module to:
the loss function is calculated as follows:
L_cont = λ1·||G(Z)_l - Z_l||_1 + λ2·||F(X)_l - X_l||_1
wherein L_cont denotes the loss function, λ1 and λ2 denote preset weight values, G(Z)_l denotes the low-frequency signal extracted from the luminance image features of the simulated low-resolution low-quality image, Z_l denotes the low-frequency signal extracted from the luminance image features of the sample low-resolution high-quality image, F(X)_l denotes the low-frequency signal extracted from the luminance image features of the simulated low-resolution high-quality image, and X_l denotes the low-frequency signal extracted from the luminance image features of the sample low-resolution low-quality image;
and adjusting parameters of the image degradation model according to the loss function.
According to one or more embodiments of the present disclosure, example thirteenth provides the apparatus of any one of examples eight to twelve, further comprising means for extracting luminance image features of a sample low-resolution high-quality image and luminance image features of the sample low-resolution low-quality image as follows:
the first extraction module is used for converting the sample low-resolution high-quality image from an RGB color space to a YCbCr color space to obtain a first target sample image, and converting the sample low-resolution low-quality image from the RGB color space to the YCbCr color space to obtain a second target sample image;
and the second extraction module is used for extracting the brightness image characteristics corresponding to the first target sample image Y channel and the brightness image characteristics corresponding to the second target sample image Y channel.
According to one or more embodiments of the present disclosure, example fourteen provides the apparatus of any one of examples eight to twelve, further comprising means for acquiring the sample high-resolution high-quality image and the sample low-resolution low-quality image as follows:
the acquisition module is used for acquiring a plurality of high-resolution high-quality videos and a plurality of low-resolution low-quality videos;
the first selection module is used for carrying out image segmentation on each frame of image corresponding to each high-resolution high-quality video to obtain a plurality of first image blocks, and selecting a target first image block with pixel values meeting preset conditions from the plurality of first image blocks as a sample high-resolution high-quality image;
the second selection module is used for carrying out image segmentation on each frame image corresponding to each low-resolution low-quality video to obtain a plurality of second image blocks, and selecting a target second image block with pixel values meeting the preset condition from the plurality of second image blocks as a sample low-resolution low-quality image.
According to one or more embodiments of the present disclosure, example fifteen provides a computer-readable medium having stored thereon a computer program which, when executed by a processing device, performs the steps of the method of any one of examples one to seven.
In accordance with one or more embodiments of the present disclosure, example sixteen provides an electronic device comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage means to implement the steps of the method of any one of examples one to seven.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, solutions formed by substituting the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims. The specific manner in which the various modules of the apparatus in the above embodiments perform their operations has been described in detail in connection with the method embodiments and will not be repeated here.

Claims (8)

1. An image degradation processing method, characterized by comprising:
acquiring a high-resolution high-quality image;
downsampling the high-resolution high-quality image to obtain a low-resolution high-quality image;
inputting the low-resolution high-quality image into an image degradation model to obtain a low-resolution low-quality image, wherein the image degradation model is obtained by training according to brightness image characteristics of a sample low-resolution high-quality image and brightness image characteristics of the sample low-resolution low-quality image, the sample low-resolution high-quality image is obtained by downsampling the sample high-resolution high-quality image, and the training process of the image degradation model comprises the following steps: extracting a low-frequency signal aiming at the brightness image characteristics of the simulated low-resolution high-quality image, the brightness image characteristics of the sample low-resolution high-quality image, the brightness image characteristics of the simulated low-resolution low-quality image and the brightness image characteristics of the sample low-resolution low-quality image to obtain a target data set; training the image degradation model according to image features included in the target data set;
The low-resolution low-quality image is used for training a super-resolution reconstruction network, and the super-resolution reconstruction network is used for repairing an input low-resolution low-quality video so as to obtain a high-resolution high-quality video;
wherein the sample high-resolution high-quality image and the sample low-resolution low-quality image are obtained by:
collecting a plurality of high-resolution high-quality videos and a plurality of low-resolution low-quality videos;
for each high-resolution high-quality video, image segmentation is carried out on each frame image corresponding to the high-resolution high-quality video to obtain a plurality of first image blocks, and a target first image block with a pixel value meeting a preset condition is selected from the plurality of first image blocks to serve as a sample high-resolution high-quality image, wherein the preset condition is that the variance of the pixel value in the image block is larger than or equal to a preset threshold value, or the preset condition is that the mean value or variance corresponding to the low-frequency characteristic of the pixel point in the image block is larger than or equal to the preset threshold value;
and for each low-resolution low-quality video, image segmentation is carried out on each frame image corresponding to the low-resolution low-quality video to obtain a plurality of second image blocks, and a target second image block with pixel values meeting the preset condition is selected from the plurality of second image blocks as a sample low-resolution low-quality image.
2. The method of claim 1, wherein the image degradation model comprises an image degradation module, an image degradation removal module, a first discriminator, and a second discriminator;
the image degradation module is used for carrying out image degradation processing according to the input brightness image characteristics of the sample low-resolution high-quality image and outputting the brightness image characteristics of the simulated low-resolution low-quality image corresponding to the sample low-resolution high-quality image;
the image degradation removal module is used for carrying out image restoration processing according to the input brightness image characteristics of the sample low-resolution low-quality image and outputting the brightness image characteristics of the simulated low-resolution high-quality image corresponding to the sample low-resolution low-quality image;
the first discriminator is used for performing model training according to the input brightness image characteristics of the simulated low-resolution high-quality image and the brightness image characteristics of the sample low-resolution high-quality image;
the second discriminator is used for performing model training according to the input brightness image characteristics of the simulated low-resolution low-quality image and the brightness image characteristics of the sample low-resolution low-quality image.
3. The method of claim 1, wherein extracting the low frequency signal comprises:
The low-frequency signal is extracted by a Gaussian low-pass filter or by a Haar wavelet transform.
4. The method of claim 1, wherein the training the image degradation model based on image features included in the target dataset comprises:
the loss function is calculated as follows:
L_cont = λ1·||G(Z)_l - Z_l||_1 + λ2·||F(X)_l - X_l||_1
wherein L_cont denotes the loss function, λ1 and λ2 denote preset weight values, G(Z)_l denotes the low-frequency signal extracted from the luminance image features of the simulated low-resolution low-quality image, Z_l denotes the low-frequency signal extracted from the luminance image features of the sample low-resolution high-quality image, F(X)_l denotes the low-frequency signal extracted from the luminance image features of the simulated low-resolution high-quality image, and X_l denotes the low-frequency signal extracted from the luminance image features of the sample low-resolution low-quality image;
and adjusting parameters of the image degradation model according to the loss function.
5. The method according to any one of claims 1-4, wherein the luminance image features of the sample low resolution high quality image and the luminance image features of the sample low resolution low quality image are obtained by:
converting the sample low-resolution high-quality image from an RGB color space to a YCbCr color space to obtain a first target sample image, and converting the sample low-resolution low-quality image from the RGB color space to the YCbCr color space to obtain a second target sample image;
And extracting the brightness image characteristics corresponding to the first target sample image Y channel and the brightness image characteristics corresponding to the second target sample image Y channel.
6. An image degradation processing apparatus, characterized by comprising:
the acquisition module is used for acquiring the high-resolution high-quality image;
the first processing module is used for downsampling the high-resolution high-quality image to obtain a low-resolution high-quality image;
the second processing module is used for inputting the low-resolution high-quality image into an image degradation model to obtain a low-resolution low-quality image, the image degradation model is obtained by training according to brightness image features of a sample low-resolution high-quality image and brightness image features of the sample low-resolution low-quality image, and the sample low-resolution high-quality image is obtained by downsampling the sample high-resolution high-quality image; the low-resolution low-quality image is used for training a super-resolution reconstruction network, and the super-resolution reconstruction network is used for repairing an input low-resolution low-quality video so as to obtain a high-resolution high-quality video;
wherein the sample high-resolution high-quality image and the sample low-resolution low-quality image are obtained by:
Collecting a plurality of high-resolution high-quality videos and a plurality of low-resolution low-quality videos;
for each high-resolution high-quality video, image segmentation is carried out on each frame image corresponding to the high-resolution high-quality video to obtain a plurality of first image blocks, and a target first image block with a pixel value meeting a preset condition is selected from the plurality of first image blocks to serve as a sample high-resolution high-quality image, wherein the preset condition is that the variance of the pixel value in the image block is larger than or equal to a preset threshold value, or the preset condition is that the mean value or variance corresponding to the low-frequency characteristic of the pixel point in the image block is larger than or equal to the preset threshold value;
for each low-resolution low-quality video, image segmentation is carried out on each frame image corresponding to the low-resolution low-quality video to obtain a plurality of second image blocks, and a target second image block with pixel values meeting the preset conditions is selected from the second image blocks to serve as a sample low-resolution low-quality image; the training process of the image degradation model comprises the following steps: extracting a low-frequency signal aiming at the brightness image characteristics of the simulated low-resolution high-quality image, the brightness image characteristics of the sample low-resolution high-quality image, the brightness image characteristics of the simulated low-resolution low-quality image and the brightness image characteristics of the sample low-resolution low-quality image to obtain a target data set; and training the image degradation model according to the image characteristics included in the target data set.
7. A computer readable medium on which a computer program is stored, characterized in that the program, when being executed by a processing device, carries out the steps of the method according to any one of claims 1-5.
8. An electronic device, comprising:
a storage device having a computer program stored thereon;
processing means for executing said computer program in said storage means to carry out the steps of the method according to any one of claims 1-5.
CN202011308390.0A 2020-11-19 2020-11-19 Image degradation processing method and device, storage medium and electronic equipment Active CN112419151B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011308390.0A CN112419151B (en) 2020-11-19 2020-11-19 Image degradation processing method and device, storage medium and electronic equipment
PCT/CN2021/129431 WO2022105638A1 (en) 2020-11-19 2021-11-09 Image degradation processing method and apparatus, and storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011308390.0A CN112419151B (en) 2020-11-19 2020-11-19 Image degradation processing method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112419151A CN112419151A (en) 2021-02-26
CN112419151B true CN112419151B (en) 2023-07-21

Family

ID=74773271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011308390.0A Active CN112419151B (en) 2020-11-19 2020-11-19 Image degradation processing method and device, storage medium and electronic equipment

Country Status (2)

Country Link
CN (1) CN112419151B (en)
WO (1) WO2022105638A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419151B (en) * 2020-11-19 2023-07-21 北京有竹居网络技术有限公司 Image degradation processing method and device, storage medium and electronic equipment
CN113160079B (en) * 2021-04-13 2024-08-02 Oppo广东移动通信有限公司 Portrait repair model training method, portrait repair method and device
CN113222855B (en) * 2021-05-28 2023-07-11 北京有竹居网络技术有限公司 Image recovery method, device and equipment
CN113222144B (en) * 2021-05-31 2022-12-27 北京有竹居网络技术有限公司 Training method of image restoration model, image restoration method, device and equipment
CN113411521B (en) * 2021-06-23 2022-09-09 北京达佳互联信息技术有限公司 Video processing method and device, electronic equipment and storage medium
CN114627019A (en) * 2022-03-17 2022-06-14 腾讯科技(深圳)有限公司 Content restoration method, device, equipment and computer program product
CN115471398B (en) * 2022-08-31 2023-08-15 北京科技大学 Image super-resolution method, system, terminal equipment and storage medium
CN115439449B (en) * 2022-09-06 2023-05-09 抖音视界有限公司 Full-field histological image processing method, device, medium and electronic equipment
CN116912148B (en) * 2023-09-12 2024-01-05 深圳思谋信息科技有限公司 Image enhancement method, device, computer equipment and computer readable storage medium
CN117319749B (en) * 2023-10-27 2024-08-20 深圳金语科技有限公司 Video data transmission method, device, equipment and storage medium
CN117499558A (en) * 2023-11-02 2024-02-02 北京市燃气集团有限责任公司 Video image optimization processing method and device

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018053340A1 (en) * 2016-09-15 2018-03-22 Twitter, Inc. Super resolution using a generative adversarial network
CN106709875B (en) * 2016-12-30 2020-02-18 北京工业大学 Compressed low-resolution image restoration method based on joint depth network
CN107248140A (en) * 2017-04-27 2017-10-13 东南大学 A kind of single image super resolution ratio reconstruction method based on two-way alignment rarefaction representation
CN107492070B (en) * 2017-07-10 2019-12-03 华北电力大学 A kind of single image super-resolution calculation method of binary channels convolutional neural networks
CN108022212B (en) * 2017-11-24 2022-07-01 腾讯科技(深圳)有限公司 High-resolution picture generation method, generation device and storage medium
CN109285119A (en) * 2018-10-23 2019-01-29 百度在线网络技术(北京)有限公司 Super resolution image generation method and device
CN110120011B (en) * 2019-05-07 2022-05-31 电子科技大学 Video super-resolution method based on convolutional neural network and mixed resolution
CN111127317B (en) * 2019-12-02 2023-07-25 深圳供电局有限公司 Image super-resolution reconstruction method, device, storage medium and computer equipment
CN111179177B (en) * 2019-12-31 2024-03-26 深圳市联合视觉创新科技有限公司 Image reconstruction model training method, image reconstruction method, device and medium
CN111127325B (en) * 2019-12-31 2020-11-24 珠海大横琴科技发展有限公司 Satellite video super-resolution reconstruction method and system based on cyclic neural network
CN111192219B (en) * 2020-01-02 2022-07-26 南京邮电大学 Image defogging method based on improved inverse atmospheric scattering model convolution network
CN111784571A (en) * 2020-04-13 2020-10-16 北京京东尚科信息技术有限公司 Method and device for improving image resolution
CN111667442B (en) * 2020-05-21 2022-04-01 武汉大学 High-quality high-frame-rate image reconstruction method based on event camera
CN112419151B (en) * 2020-11-19 2023-07-21 北京有竹居网络技术有限公司 Image degradation processing method and device, storage medium and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tian Yan; Xie Yubo; Shi Wenzhong; Peng Fuyuan; Liu Jian. A multi-resolution image segmentation method based on local variance. Systems Engineering and Electronics, 2006, No. 12, full text. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant