CN111369440B - Model training and image super-resolution processing method, device, terminal and storage medium - Google Patents

Model training and image super-resolution processing method, device, terminal and storage medium

Info

Publication number
CN111369440B
CN111369440B
Authority
CN
China
Prior art keywords
resolution
neural network
network model
super
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010141266.3A
Other languages
Chinese (zh)
Other versions
CN111369440A
Inventor
陈伟民
袁燚
范长杰
胡志鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202010141266.3A priority Critical patent/CN111369440B/en
Publication of CN111369440A publication Critical patent/CN111369440A/en
Application granted granted Critical
Publication of CN111369440B publication Critical patent/CN111369440B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application provides a model training and image super-resolution processing method, device, terminal and storage medium, and relates to the technical field of model training. The method comprises the following steps: downsampling the original high-resolution image corresponding to a sample low-resolution image to obtain high-resolution images at a plurality of resolutions; respectively adopting a plurality of feature extraction branches to extract features of the sample low-resolution image to obtain image features of a plurality of layers; adopting a feature fusion module to fuse the image features of the plurality of layers to obtain fusion features of the sample low-resolution image; adopting a plurality of reconstruction branches to reconstruct the fusion features to obtain super-resolution images at a plurality of resolutions; and training the neural network model according to the high-resolution images at the plurality of resolutions and the corresponding super-resolution images. When the trained neural network model is used to restore a low-resolution image, the generated super-resolution image contains richer semantic information and has higher definition.

Description

Model training and image super-resolution processing method, device, terminal and storage medium
Technical Field
The invention relates to the technical field of model training, in particular to a method, a device, a terminal and a storage medium for model training and image super-resolution processing.
Background
Image resolution refers to the amount of information stored in an image, that is, the number of pixels per inch of the image. Low-resolution images have poor definition and contain fewer details. Restoring a low-resolution image to a super-resolution image can improve the definition of the image and make the details it contains more realistic.
In the related art, a plurality of multi-scale feature extraction blocks are sequentially connected in series, image features of different layers are extracted through the serially connected feature extraction blocks, and a super-resolution image is generated directly from the image features of the different layers.
However, because the related art extracts image features through serially connected feature extraction blocks and generates the super-resolution image directly, the generated super-resolution image contains considerable noise and artifacts and therefore has poor definition.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provide a model training and image super-resolution processing method, device, terminal and storage medium, so as to solve the problem in the prior art that image features are extracted through a plurality of serially connected feature extraction blocks and super-resolution images are generated directly from them, so that the generated super-resolution images contain considerable noise and artifacts and have poor definition.
In order to achieve the above purpose, the technical scheme adopted by the embodiment of the invention is as follows:
in a first aspect, an embodiment of the present invention provides a model training method, where the model training method is applied to a neural network model, and the neural network model includes: a feature extraction module and a reconstruction module, wherein the feature extraction module includes: a plurality of feature extraction branches and a feature fusion module, wherein different feature extraction branches correspond to image features of different layers; the reconstruction module includes: a plurality of reconstruction branches, different reconstruction branches corresponding to different resolutions, the input of the reconstruction branch of a later resolution being the output of the reconstruction branch of a previous resolution, wherein the later resolution is greater than the previous resolution; the method comprises the following steps:
downsampling an original high-resolution image corresponding to an input sample low-resolution image at least once to obtain high-resolution images at a plurality of resolutions, including the original high-resolution image;
respectively adopting the plurality of feature extraction branches to extract features of the sample low-resolution image to obtain image features of a plurality of layers;
adopting the feature fusion module to fuse the image features of the multiple layers to obtain fusion features of the sample low-resolution image;
respectively adopting the plurality of reconstruction branches to reconstruct the fusion characteristics to obtain super-resolution images with the plurality of resolutions; the image output by the last reconstruction branch is a target super-resolution image corresponding to the sample low-resolution image;
and training the neural network model according to the high-resolution images with the multiple resolutions and the corresponding super-resolution images.
Further, the training the neural network model according to the high-resolution images and the corresponding super-resolution images includes:
determining a loss function value of the initial neural network model according to the high-resolution images with the multiple resolutions and the corresponding super-resolution images;
and adjusting parameters of the neural network model according to the loss function value until the loss function value of the adjusted neural network model is converged.
Further, the determining the loss function value of the neural network model according to the high-resolution images and the corresponding super-resolution images includes:
determining a pixel loss value of the neural network model according to the high-resolution images with the multiple resolutions and the corresponding super-resolution images;
determining a perceptual loss value of the neural network model according to the feature maps output at a preset layer of a pre-training model by the high-resolution images with the multiple resolutions and the corresponding super-resolution images;
determining an adversarial loss value of the neural network model according to the original high-resolution image and the target super-resolution image;
and determining a loss function value of the neural network model according to the pixel loss value, the perceptual loss value and the adversarial loss value.
Further, the determining the adversarial loss value of the initial neural network model according to the original high-resolution image and the target super-resolution image includes:
determining, by using a discriminator, the probability that the original high-resolution image is more realistic than the target super-resolution image and the probability that the target super-resolution image is more false than the original high-resolution image;
and determining the adversarial loss value according to the true probability and the false probability.
Further, the determining a loss function value of the neural network model according to the pixel loss value, the perceptual loss value and the adversarial loss value includes:
determining the loss function value of the neural network model by adopting a preset weighting algorithm according to the pixel loss value, the perceptual loss value and the adversarial loss value.
Further, the adjusting the parameters of the neural network model according to the loss function value until the loss function value of the adjusted neural network model converges, includes:
and according to the loss function value, adjusting the parameters of the neural network model by adopting a preset gradient descent method until the loss function value of the adjusted neural network model is converged.
In a second aspect, an embodiment of the present application further provides an image super-resolution processing method, where the method is applied to a neural network model obtained by the training method in any one of the first aspect, and the image super-resolution processing method includes:
acquiring an input low-resolution image;
and performing super-resolution processing on the low-resolution image by adopting the neural network model to obtain a target super-resolution image corresponding to the low-resolution image.
In a third aspect, embodiments of the present application further provide a model training apparatus, where the model training apparatus is applied to a neural network model, and the neural network model includes: a feature extraction module and a reconstruction module, wherein the feature extraction module includes: a plurality of feature extraction branches and a feature fusion module, wherein different feature extraction branches correspond to image features of different layers; the reconstruction module includes: a plurality of reconstruction branches, different reconstruction branches corresponding to different resolutions, the input of the reconstruction branch of a later resolution being the output of the reconstruction branch of a previous resolution, wherein the later resolution is greater than the previous resolution; the device comprises:
the downsampling module is used for downsampling the original high-resolution image corresponding to an input sample low-resolution image at least once to obtain high-resolution images at a plurality of resolutions, including the original high-resolution image;
the extraction module is used for extracting the characteristics of the sample low-resolution image by adopting the plurality of characteristic extraction branches respectively to obtain a plurality of layers of image characteristics;
the fusion module is used for carrying out fusion processing on the image features of the multiple layers by adopting the feature fusion module to obtain fusion features of the sample low-resolution image;
the reconstruction processing module is used for carrying out reconstruction processing on the fusion characteristics by adopting the plurality of reconstruction branches respectively to obtain super-resolution images with the plurality of resolutions; the image output by the last reconstruction branch is a target super-resolution image corresponding to the sample low-resolution image;
and the training module is used for training the neural network model according to the high-resolution images with the multiple resolutions and the corresponding super-resolution images.
Further, the training module is further configured to determine a loss function value of the initial neural network model according to the high-resolution images with the multiple resolutions and the corresponding super-resolution images, and to adjust parameters of the neural network model according to the loss function value until the loss function value of the adjusted neural network model converges.
Further, the training module is further configured to determine a pixel loss value of the neural network model according to the high-resolution images and the corresponding super-resolution images; determine a perceptual loss value of the neural network model according to the feature maps output at a preset layer of a pre-training model by the high-resolution images with the multiple resolutions and the corresponding super-resolution images; determine an adversarial loss value of the neural network model according to the original high-resolution image and the target super-resolution image; and determine a loss function value of the neural network model according to the pixel loss value, the perceptual loss value and the adversarial loss value.
Further, the training module is further configured to determine, by using a discriminator, the probability that the original high-resolution image is more realistic than the target super-resolution image and the probability that the target super-resolution image is more false than the original high-resolution image, and to determine the adversarial loss value according to the true probability and the false probability.
Further, the training module is further configured to determine the loss function value of the neural network model according to the pixel loss value, the perceptual loss value and the adversarial loss value by using a preset weighting algorithm.
Further, the training module is further configured to adjust the parameters of the neural network model by using a preset gradient descent method according to the loss function value until the loss function value of the adjusted neural network model converges.
In a fourth aspect, an embodiment of the present application further provides an image super-resolution processing device, where the device is applied to a neural network model obtained by the training method in any one of the first aspect, and the image super-resolution processing device includes:
the acquisition module is used for acquiring the input low-resolution image;
and the processing module is used for performing super-resolution processing on the low-resolution image by adopting the neural network model to obtain a target super-resolution image corresponding to the low-resolution image.
In a fifth aspect, embodiments of the present application further provide a terminal, including: a processor and a memory, the memory storing a computer program executable by the processor, and the processor implementing, when executing the computer program, any one of the methods provided in the first and second aspects.
In a sixth aspect, embodiments of the present application further provide a storage medium having a computer program stored thereon, the computer program implementing any of the methods provided in the first and second aspects above when read and executed by a processor.
The beneficial effects of this application are: the embodiment of the invention provides a model training method, which downsamples the original high-resolution image corresponding to an input sample low-resolution image at least once to obtain high-resolution images at a plurality of resolutions; respectively adopts the plurality of feature extraction branches to extract features of the sample low-resolution image to obtain image features of a plurality of layers; adopts the feature fusion module to fuse the image features of the multiple layers to obtain fusion features of the sample low-resolution image; respectively adopts the plurality of reconstruction branches to reconstruct the fusion features to obtain super-resolution images at the plurality of resolutions; and trains the neural network model according to the high-resolution images at the multiple resolutions and the corresponding super-resolution images. By obtaining high-resolution images at multiple resolutions, generating super-resolution images at multiple resolutions through the multiple feature extraction branches and the multiple reconstruction branches, and training the model on these image pairs, more image features are extracted when a low-resolution image is restored through the model, so that the generated super-resolution image contains richer semantic information and has higher definition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a generator structure of a neural network model according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a model training method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a model training method according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of a model training method according to an embodiment of the present invention;
FIG. 5 is a schematic flow chart of a model training method according to an embodiment of the present invention;
fig. 6 is a schematic flow chart of an image super-resolution processing method according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a model training device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an image super-resolution processing device according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention.
The execution subject of the model training method provided by the embodiment of the invention may be a server or a terminal, for example, a personal computer such as a desktop computer, a notebook computer or a tablet computer, which is not specifically limited by the embodiment of the invention.
The model training method provided by the application is illustrated by a plurality of examples by taking the terminal as an execution subject.
Fig. 1 is a schematic diagram of a generator structure of a neural network model according to an embodiment of the present invention, where, as shown in fig. 1, the neural network model may include a feature extraction module 10 and a reconstruction module 20, and the feature extraction module 10 includes: a plurality of feature extraction branches and feature fusion modules 11, different feature extraction branches corresponding to image features of different levels; the reconstruction module 20 includes: the system comprises a plurality of reconstruction branches, wherein different reconstruction branches correspond to different resolutions, and the input of the reconstruction branch of the next resolution is the output of the reconstruction branch of the previous resolution, wherein the next resolution is larger than the previous resolution.
In the feature extraction module 10, the number of the plurality of feature extraction branches may be N; in the reconstruction module 20, the number of the plurality of reconstruction branches may be N; each reconstruction branch may output one super-resolution image, and the N reconstruction branches may output N super-resolution images. The N super-resolution images have different resolutions.
In the embodiment of the present invention, the first reconstruction branch of the plurality of reconstruction branches may include only the convolution layer 21, while each non-first reconstruction branch may include an up-sampling layer 22 and a convolution layer 21; the numbers of convolution layers 21 and up-sampling layers 22 are not particularly limited.
In the embodiment of the present invention, since the first reconstruction branch includes only the convolution layer 21, the resolution of the super-resolution image output by the first reconstruction branch is similar to that of the sample low-resolution image.
In addition, for the non-first reconstruction branch, the output of the sampling layer 22 in the reconstruction branch of the previous resolution may be used as the input of the sampling layer 22 in the reconstruction branch of the next resolution, so the resolution of the super-resolution image output by the reconstruction branch of the next resolution is greater than the resolution of the super-resolution image output by the reconstruction branch of the previous resolution. The resolution of N super-resolution images output by the N reconstruction branches is sequentially increased, and the resolution of the super-resolution image output by the last reconstruction branch is highest. After the neural network model is trained, when a low-resolution image is input, the super-resolution image output by the last reconstruction branch can be used as the super-resolution image output by the neural network model.
It should be noted that, the feature extraction module 10 and the reconstruction module 20 may both belong to a generator of the neural network model.
Fig. 2 is a schematic flow chart of a model training method according to an embodiment of the present invention, where the model training method may be implemented by software and/or hardware. As shown in fig. 2, the method may include:
S101, downsampling the original high-resolution image corresponding to an input sample low-resolution image at least once to obtain high-resolution images at a plurality of resolutions.
Wherein the sample low resolution image and the original high resolution image are different sharpness images of the same image.
In addition, the sample low-resolution image and the original high-resolution image may each include pixel information of color channels, i.e., each pixel point in the sample low-resolution image and the original high-resolution image can be represented by RGB (red, green, blue) values.
In some embodiments, the terminal may downsample the original high-resolution image corresponding to the sample low-resolution image N-1 times, so as to obtain N-1 high-resolution images with different resolutions. The original high-resolution image and N-1 high-resolution images with different resolutions form N high-resolution images with different resolutions, namely N labels.
It should be noted that the sample low-resolution image may be denoted by $I_{LR}$ and the original high-resolution image by $I_{HR}$. The N high-resolution images of different resolutions, i.e., the N labels, may be denoted as $\{I_{HR}^{1}, \dots, I_{HR}^{N-1}, I_{HR}\}$, where $I_{HR}^{1}, \dots, I_{HR}^{N-1}$ are the N-1 downsampled high-resolution images of different resolutions.
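As an illustration, the label construction can be sketched as follows in PyTorch; the per-step scale factor of 2 and the bicubic interpolation mode are assumptions of this sketch, since the patent does not specify the downsampling operator.

```python
import torch
import torch.nn.functional as F

def build_hr_labels(i_hr: torch.Tensor, n: int) -> list:
    """Downsample the original I_HR (shape [B, C, H, W]) N-1 times and
    return the N labels {I_HR^1, ..., I_HR^{N-1}, I_HR}, coarsest first."""
    labels = [i_hr]
    x = i_hr
    for _ in range(n - 1):
        # Assumed operator: bicubic interpolation halving the resolution.
        x = F.interpolate(x, scale_factor=0.5, mode="bicubic",
                          align_corners=False)
        labels.append(x)
    return labels[::-1]
```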
S102, respectively adopting a plurality of feature extraction branches to extract features of the sample low-resolution image, and obtaining image features of a plurality of layers.
Wherein each feature extraction branch may include: a plurality of convolution layers and a hole convolution layer. Each convolution layer and the hole convolution layer has a corresponding convolution kernel. In the same feature extraction branch, the convolution kernel sizes may be the same. The convolution kernel size may be different for different feature extraction branches.
In addition, the convolution kernels of the different feature extraction branches are different in size, so that the feature extraction branches can extract image features of different levels, and the feature extraction branches with smaller convolution kernels can extract image features, such as detail features and texture features, of a smaller range in the sample low-resolution image; the feature extraction branches with larger convolution kernels may extract a larger range of image features, e.g., object location features, in the sample low resolution image.
In some embodiments, the number of feature extraction branches may be N. In each feature extraction branch, the convolution layers and the hole convolution layers may be disposed alternately; as shown in fig. 1, each feature extraction branch may sequentially include: four convolution layers 12, three hole convolution layers 13 and one convolution layer 12.
In the embodiment of the invention, the number of the feature extraction branches is not particularly limited, and the convolution kernel sizes of the convolution layers and the cavity convolution layers in each feature extraction branch can be set according to actual requirements.
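For illustration only, one such branch might be sketched in PyTorch as below; the channel width, the ReLU activations, the dilation rate of the hole convolutions, and the assumption that a preceding convolution has already mapped the input image to `channels` feature maps are all choices of this sketch, not taken from the patent.

```python
import torch.nn as nn

def make_branch(kernel_size: int, channels: int = 64, dilation: int = 2) -> nn.Sequential:
    """One feature extraction branch: four ordinary convolutions interleaved
    with three hole (dilated) convolutions, closed by a final convolution."""
    pad = kernel_size // 2
    layers = []
    for i in range(7):
        if i % 2 == 0:  # ordinary convolution layer 12
            layers += [nn.Conv2d(channels, channels, kernel_size, padding=pad),
                       nn.ReLU(inplace=True)]
        else:           # hole convolution layer 13: enlarged receptive field
            layers += [nn.Conv2d(channels, channels, kernel_size,
                                 padding=pad * dilation, dilation=dilation),
                       nn.ReLU(inplace=True)]
    layers.append(nn.Conv2d(channels, channels, kernel_size, padding=pad))
    return nn.Sequential(*layers)

# Branches with different kernel sizes extract image features of different
# layers, e.g. 3x3 for texture detail and 7x7 for object-scale structure.
branches = nn.ModuleList([make_branch(k) for k in (3, 5, 7)])
```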
S103, adopting a feature fusion module to fuse the image features of the multiple layers to obtain fusion features of the sample low-resolution image.
In one possible implementation manner, at the end of each feature extraction branch, the features of each feature extraction branch are converged through a fusion channel and sent to a residual block, and the residual block can fuse the image features of multiple layers to obtain the fused features of the sample low-resolution image.
The feature fusion module 11 may be a residual block and may include a plurality of convolution layers. As shown in fig. 1, the feature fusion module 11 may include three convolution layers: the convolution kernel size of the first convolution layer may be 1, and the convolution kernel sizes of the second and third convolution layers may be the same, e.g., 3. The output of the first convolution layer is input to the second convolution layer, the output of the second convolution layer is input to the third convolution layer, and the output of the first convolution layer and the output of the third convolution layer are summed to determine the fusion feature of the sample low-resolution image.
Of course, the terminal may also use other modules capable of performing feature fusion to fuse multiple layers of image features, which is not particularly limited by the embodiment of the present invention.
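A minimal sketch of such a residual fusion block, under the layer layout described above; the channel widths and ReLU activations are assumptions.

```python
import torch
import torch.nn as nn

class FusionResidualBlock(nn.Module):
    """Residual fusion block of Fig. 1: a 1x1 convolution compresses the
    concatenated branch features, two 3x3 convolutions refine them, and the
    1x1 output is added to the third convolution's output (skip connection)."""
    def __init__(self, in_ch: int, out_ch: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, branch_feats: list) -> torch.Tensor:
        x = torch.cat(branch_feats, dim=1)   # converge the N branch outputs
        y1 = self.conv1(x)                   # first layer, kernel size 1
        y3 = self.conv3(self.act(self.conv2(self.act(y1))))
        return y1 + y3                       # sum of first and third outputs
```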
S104, respectively adopting a plurality of reconstruction branches to reconstruct the fusion characteristics to obtain super-resolution images with a plurality of resolutions.
The image output by the last reconstruction branch is a target super-resolution image corresponding to the sample low-resolution image. Each reconstruction branch may output one super resolution image.
In addition, the target super-resolution image may be denoted by $I_{SR}$. When the number of reconstruction branches is N, the super-resolution images output by the non-last reconstruction branches may be denoted as $I_{SR}^{1}, \dots, I_{SR}^{N-1}$, and the super-resolution images of the multiple resolutions may be expressed as $\{I_{SR}^{1}, \dots, I_{SR}^{N-1}, I_{SR}\}$.
It should be noted that the number of high-resolution images, the number of extraction branches, and the number of reconstruction branches may be the same, and the number of high-resolution images and the number of super-resolution images may be the same.
In the embodiment of the invention, the number of the reconstruction branches is not particularly limited, and the convolution kernel sizes of the sampling layer and the convolution layer in each reconstruction branch can be set according to actual requirements.
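A sketch of the chained reconstruction branches, assuming a x2 up-sampling per branch implemented with sub-pixel convolution (PixelShuffle); the patent does not fix the up-sampling operator, the scale factor or the channel widths.

```python
import torch
import torch.nn as nn

class ReconstructionModule(nn.Module):
    """N chained reconstruction branches: the first branch is a single
    convolution rendering an image at the input resolution; each later
    branch up-samples the previous branch's features and renders the
    next-resolution image."""
    def __init__(self, channels: int = 64, n_branches: int = 3, out_ch: int = 3):
        super().__init__()
        self.first = nn.Conv2d(channels, out_ch, kernel_size=3, padding=1)
        self.upsamplers = nn.ModuleList()
        self.renderers = nn.ModuleList()
        for _ in range(n_branches - 1):
            self.upsamplers.append(nn.Sequential(
                nn.Conv2d(channels, channels * 4, kernel_size=3, padding=1),
                nn.PixelShuffle(2)))   # x2 sub-pixel up-sampling (assumed)
            self.renderers.append(
                nn.Conv2d(channels, out_ch, kernel_size=3, padding=1))

    def forward(self, fused: torch.Tensor) -> list:
        outputs = [self.first(fused)]  # lowest-resolution SR image
        feats = fused
        for up, render in zip(self.upsamplers, self.renderers):
            feats = up(feats)          # previous sampler output feeds the next
            outputs.append(render(feats))
        return outputs                 # the last entry is the target I_SR
```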
And S105, training the neural network model according to the high-resolution images with the multiple resolutions and the corresponding super-resolution images.
In one possible implementation, the terminal may calculate an optimization objective from the high-resolution images of the plurality of resolutions and the corresponding super-resolution images, and the neural network model is trained by optimizing this objective.
In summary, the embodiment of the present invention provides a model training method, which downsamples the original high-resolution image corresponding to an input sample low-resolution image at least once to obtain high-resolution images at a plurality of resolutions; respectively adopts the plurality of feature extraction branches to extract features of the sample low-resolution image to obtain image features of a plurality of layers; adopts the feature fusion module to fuse the image features of the multiple layers to obtain fusion features of the sample low-resolution image; respectively adopts the plurality of reconstruction branches to reconstruct the fusion features to obtain super-resolution images at the plurality of resolutions; and trains the neural network model according to the high-resolution images at the multiple resolutions and the corresponding super-resolution images. By obtaining high-resolution images at multiple resolutions, generating super-resolution images at multiple resolutions through the multiple feature extraction branches and the multiple reconstruction branches, and training the model on these image pairs, more image features are extracted when a low-resolution image is restored through the model, so that the generated super-resolution image contains richer semantic information and has higher definition.
Optionally, fig. 3 is a schematic flow chart of a model training method provided in the embodiment of the present invention, as shown in fig. 3, where S105 may further include:
s201, determining a loss function value of the initial neural network model according to the high-resolution images with the multiple resolutions and the corresponding super-resolution images.
Wherein the plurality of high resolution images may include an original high resolution image and the corresponding super resolution image may include a target super resolution image.
In one possible implementation, the terminal may determine a first loss value from the original high-resolution image and the target super-resolution image, and determine a plurality of second loss values from the high-resolution images of the plurality of resolutions and the corresponding super-resolution images, and then determine the loss function value from the first loss value and the second loss value.
For example, when the high-resolution images of the plurality of resolutions are $\{I_{HR}^{1}, \dots, I_{HR}^{N-1}, I_{HR}\}$ and the super-resolution images of the plurality of resolutions are $\{I_{SR}^{1}, \dots, I_{SR}^{N-1}, I_{SR}\}$, where the original high-resolution image is $I_{HR}$ and the target super-resolution image is $I_{SR}$, the terminal may determine the first loss value according to $I_{HR}$ and $I_{SR}$, and determine the second loss values according to $I_{HR}^{1}, \dots, I_{HR}^{N-1}$ and $I_{SR}^{1}, \dots, I_{SR}^{N-1}$.
S202, adjusting parameters of the neural network model according to the loss function value until the loss function value of the adjusted neural network model converges.
The neural network model may include a generator and a discriminator. The generator comprises: the feature extraction module and the reconstruction module.
In the embodiment of the invention, the terminal can adjust the parameters of the generator and the discriminator according to the loss function value until the loss function value of the adjusted neural network model is converged, so as to obtain the trained neural network model. The low resolution image is input into the neural network model, and the neural network model can output a high resolution image, wherein the high resolution image comprises more detail information and has higher definition.
Optionally, fig. 4 is a schematic flow chart of a model training method according to an embodiment of the present invention, as shown in fig. 4, where S201 may further include:
s301, determining a pixel loss value of the neural network model according to the high-resolution images with the multiple resolutions and the corresponding super-resolution images.
In some embodiments, the terminal may calculate the similarity between the high-resolution image and the corresponding super-resolution image of each resolution by using a preset similarity calculation formula, so as to obtain a plurality of similarities, and then may determine the pixel loss value of the neural network model according to the plurality of similarities.
The terminal may superimpose the plurality of similarities, thereby obtaining a pixel loss value of the neural network model.
It should be noted that the similarity calculation formula may be $\lVert I_{SR1} - I_{HR1} \rVert_{1}$, where $I_{HR1}$ is one of the high-resolution images of the plurality of resolutions and $I_{SR1}$ is the super-resolution image corresponding to $I_{HR1}$; this formula determines the similarity between one high-resolution image and the corresponding super-resolution image. Similarly, the similarity between each high-resolution image and the corresponding super-resolution image is calculated to obtain a plurality of similarities, and the plurality of similarities are superimposed to obtain the pixel loss value of the neural network model, which may be denoted by $L_{pixel}$.
In the embodiment of the invention, the smaller $L_{pixel}$ is, the more similar the high-resolution images and the corresponding super-resolution images are.
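A sketch of the superimposed per-scale pixel loss in PyTorch, assuming the L1 norm as the similarity measure:

```python
import torch

def pixel_loss(sr_images: list, hr_labels: list) -> torch.Tensor:
    """L_pixel: superimposed per-scale L1 similarities between each
    super-resolution output and its same-resolution high-resolution label."""
    return sum(torch.mean(torch.abs(sr - hr))
               for sr, hr in zip(sr_images, hr_labels))
```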
S302, determining a perceptual loss value of the neural network model according to the feature maps output at a preset layer of a pre-training model by the high-resolution images with the multiple resolutions and the corresponding super-resolution images.
The feature maps output by the preset layer in the pre-training model may include: the feature maps of the plurality of high-resolution images and the feature maps of the corresponding super-resolution images. The pre-training model may be VGG-19 (Visual Geometry Group network).
In one possible implementation manner, the terminal may calculate a perceptual loss value between the feature map of each high-resolution image and the feature map of the corresponding super-resolution image by using a preset perceptual loss formula, so as to obtain a plurality of perceptual loss values, and then determine the perceptual loss value of the neural network model according to the plurality of perceptual loss values.
The terminal may superimpose the plurality of perceptual loss values, thereby obtaining the perceptual loss value of the neural network model.
In the embodiment of the present invention, the feature map output by the preset layer may be the feature map output after the i-th convolution layer and the j-th activation layer in the pre-training model, and the preset perceptual loss formula may be $\lVert \phi_{i,j}(I_{SR1}) - \phi_{i,j}(I_{HR1}) \rVert$, where $\phi_{i,j}(I_{SR1})$ represents the feature map of the super-resolution image output after the i-th convolution layer and the j-th activation layer, and $\phi_{i,j}(I_{HR1})$ represents the feature map of the high-resolution image output after the i-th convolution layer and the j-th activation layer.
Similarly, the perceptual loss value between the feature map of each high-resolution image and the feature map of the corresponding super-resolution image is calculated to obtain a plurality of perceptual loss values, and the plurality of perceptual loss values are superimposed to obtain the perceptual loss value of the neural network model, which may be denoted by $L_{percep}$.
In the embodiment of the invention, the smaller $L_{percep}$ is, the more similar the high-resolution images and the corresponding super-resolution images are.
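A sketch of this perceptual loss using the pre-trained VGG-19 from recent torchvision; the layer slice `features[:36]` and the L1 distance on feature maps are placeholder choices, since the patent does not fix the indices i, j or the norm.

```python
import torch
import torchvision.models as models

# Assumption: phi_{i,j} is taken as a fixed slice of torchvision's
# pre-trained VGG-19; features[:36] (up to the last ReLU) is a placeholder.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:36].eval()
for p in vgg.parameters():
    p.requires_grad = False

def perceptual_loss(sr_images: list, hr_labels: list) -> torch.Tensor:
    """L_percep: superimposed distances between VGG-19 feature maps of each
    super-resolution image and of its high-resolution label."""
    return sum(torch.mean(torch.abs(vgg(sr) - vgg(hr)))
               for sr, hr in zip(sr_images, hr_labels))
```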
S303, determining the adversarial loss value of the neural network model according to the original high-resolution image and the target super-resolution image.
In some embodiments, the terminal may input the original high-resolution image and the target super-resolution image into the discriminator, the discriminator may output probability information, and the terminal may determine the adversarial loss value of the neural network model from the probability information by using a preset adversarial loss calculation formula.
The adversarial loss value is the first loss value in S201.
S304, determining a loss function value of the neural network model according to the pixel loss value, the perceptual loss value and the adversarial loss value.
In the embodiment of the invention, the terminal may adopt a preset loss function calculation formula to determine the loss function value of the neural network model according to the pixel loss value, the perceptual loss value and the adversarial loss value. The loss function value of the neural network model may be used to indicate whether model training is complete, and the parameters of the neural network model may be optimized based on the loss function value.
Optionally, fig. 5 is a schematic flow chart of a model training method according to an embodiment of the present invention, as shown in fig. 5, where S303 may further include:
S401, determining, by using a discriminator, the probability that the original high-resolution image is more realistic than the target super-resolution image and the probability that the target super-resolution image is more false than the original high-resolution image.
The network structure of the discriminator can be VGG-13.
It should be noted that determining the probability that the original high-resolution image is more realistic than the target super-resolution image and the probability that the target super-resolution image is more false than the original high-resolution image can increase the speed and stability of the model training process.
S402, determining the adversarial loss value according to the true probability and the false probability.
The adversarial loss value can measure the generating capability of the generator and the discriminating capability of the discriminator.
In addition, the terminal may determine the adversarial loss value according to the true probability and the false probability by using a preset adversarial loss calculation formula.
In some embodiments, the preset adversarial loss calculation formula may be expressed as $L_{adv} = -\log p_{true} - \log p_{false}$, where $L_{adv}$ is the adversarial loss value, $p_{true}$ may represent the probability that the original high-resolution image is more realistic than the target super-resolution image, and $p_{false}$ may represent the probability that the target super-resolution image is more false than the original high-resolution image.
When $L_{adv}$ converges, the discriminator can no longer distinguish the target super-resolution image generated by the generator from the original high-resolution image, and the generator and the discriminator reach an equilibrium state.
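One way the two probabilities might be realized is the relativistic-average construction; the sigmoid form, the batch averaging and the `critic` network (raw realness scores, e.g. a VGG-13 backbone) are assumptions of this sketch, not confirmed by the patent.

```python
import torch

def adversarial_loss(critic, i_hr: torch.Tensor, i_sr: torch.Tensor) -> torch.Tensor:
    """L_adv = -log p_true - log p_false, following the formula above."""
    eps = 1e-8  # numerical guard for the logarithms
    # probability that I_HR is more realistic than the average I_SR
    p_true = torch.sigmoid(critic(i_hr) - critic(i_sr).mean())
    # probability that I_SR is more false than the average I_HR
    p_false = 1.0 - torch.sigmoid(critic(i_sr) - critic(i_hr).mean())
    return -torch.mean(torch.log(p_true + eps)) - torch.mean(torch.log(p_false + eps))
```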
Optionally, the process of S304 may include: determining the loss function value of the neural network model by adopting a preset weighting algorithm according to the pixel loss value, the perceptual loss value and the adversarial loss value.
The terminal may calculate a weighted sum of the pixel loss value, the perceptual loss value and the adversarial loss value through the preset weighting algorithm, thereby determining the loss function value of the neural network model.
In some embodiments, the preset loss function calculation formula may be: $L_{G} = L_{pixel} + \lambda L_{percep} + \eta L_{adv}$, where $L_{G}$ is the loss function value of the neural network model, $L_{pixel}$ is the pixel loss value of the neural network model, $L_{percep}$ is the perceptual loss value of the neural network model, and $L_{adv}$ is the adversarial loss value of the neural network model. $\lambda$ and $\eta$ are weight parameters; the larger a weight parameter is, the larger the gradient of the parameters related to the corresponding loss is during training, and the super-resolution image generated by the neural network model changes accordingly.
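In code, the weighted sum is a one-liner; the default weight values below are placeholders, not values from the patent.

```python
def total_loss(l_pixel, l_percep, l_adv, lam: float = 1e-2, eta: float = 5e-3):
    """L_G = L_pixel + lambda * L_percep + eta * L_adv, with placeholder
    weights lambda and eta."""
    return l_pixel + lam * l_percep + eta * l_adv
```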
Optionally, S202 may include: and adjusting parameters of the neural network model by adopting a preset gradient descent method according to the loss function value until the loss function value of the adjusted neural network model is converged.
In the embodiment of the invention, the terminal may apply the chain rule of differentiation to the loss function value to obtain the gradient of the loss function value with respect to each parameter, where the parameters include the parameters of the generator and the discriminator, so that the parameters in the generator and the discriminator can be optimized to reduce the loss.
In the model training, PyTorch (a deep learning framework) may be used, and a stochastic gradient descent method may be selected to train the model, thereby obtaining a neural network model with good performance.
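A minimal training step under these choices might look as follows; `generator`, `critic` and `loader` are hypothetical objects, the loss functions are the sketches above, and the alternating update scheme and learning rates are assumptions.

```python
import torch

# Hypothetical setup: `generator` and `critic` are the networks sketched
# above, `loader` yields (I_LR, [N labels]) pairs; learning rates are guesses.
opt_g = torch.optim.SGD(generator.parameters(), lr=1e-4)
opt_d = torch.optim.SGD(critic.parameters(), lr=1e-4)

for i_lr, hr_labels in loader:
    sr_images = generator(i_lr)              # N super-resolution outputs

    # Discriminator step: detach the generator output from the graph.
    opt_d.zero_grad()
    d_loss = adversarial_loss(critic, hr_labels[-1], sr_images[-1].detach())
    d_loss.backward()
    opt_d.step()

    # Generator step: weighted sum of the three losses (roles swapped in L_adv).
    opt_g.zero_grad()
    g_loss = total_loss(pixel_loss(sr_images, hr_labels),
                        perceptual_loss(sr_images, hr_labels),
                        adversarial_loss(critic, sr_images[-1], hr_labels[-1]))
    g_loss.backward()
    opt_g.step()
```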
In summary, in the embodiment of the invention, the neural network model integrates the hierarchical feature extraction module and the hierarchically guided reconstruction module, extracts and analyzes multi-scale features, and attends to local texture and global semantics at the same time, so that a reasonable and natural super-resolution image is gradually generated based on multiple supervision signals, and the definition of the generated super-resolution image is significantly improved.
Fig. 6 is a flow chart of an image super-resolution processing method according to an embodiment of the present invention, as shown in fig. 6, the method may include:
s501, acquiring an input low-resolution image.
In the embodiment of the present invention, the low-resolution image may comprise pixel information of color channels.
S502, performing super-resolution processing on the low-resolution image by adopting a neural network model to obtain a target super-resolution image corresponding to the low-resolution image.
The neural network model may be any of the neural network models shown in fig. 1 to 5.
It should be noted that, the super-resolution processing is performed on the low-resolution image through the neural network model, so that the obtained target super-resolution image can contain more characteristic information, and the target super-resolution image is clearer, more reasonable and more natural.
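As an illustrative sketch (names hypothetical), inference keeps only the last reconstruction branch's output:

```python
import torch

# `model` is the trained generator and `low_res_image` a [1, 3, H, W]
# tensor of RGB pixel values.
model.eval()
with torch.no_grad():
    sr_outputs = model(low_res_image)  # N images from coarse to fine
    target_sr = sr_outputs[-1]         # the last branch's output is I_SR
```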
Fig. 7 is a schematic structural diagram of a model training device according to an embodiment of the present invention, where, as shown in fig. 7, the model training device is applied to a neural network model, and the neural network model includes: a feature extraction module and a reconstruction module, wherein the feature extraction module includes: a plurality of feature extraction branches and a feature fusion module, wherein different feature extraction branches correspond to image features of different layers; the reconstruction module includes: a plurality of reconstruction branches, wherein different reconstruction branches correspond to different resolutions, and the input of the reconstruction branch of the latter resolution is the output of the reconstruction branch of the former resolution, and the latter resolution is greater than the former resolution; the apparatus may include:
the downsampling module 701 is configured to downsample the original high-resolution image corresponding to an input sample low-resolution image at least once to obtain high-resolution images at a plurality of resolutions, including the original high-resolution image;
the extracting module 702 is configured to perform feature extraction on the sample low-resolution image by using a plurality of feature extraction branches, so as to obtain image features of a plurality of layers;
the fusion module 703 is configured to perform fusion processing on the image features of multiple layers by using the feature fusion module, so as to obtain fusion features of the sample low-resolution image;
the reconstruction processing module 704 is configured to perform reconstruction processing on the fusion feature by using a plurality of reconstruction branches, so as to obtain a super-resolution image with a plurality of resolutions; the image output by the last reconstruction branch is a target super-resolution image corresponding to the sample low-resolution image;
the training module 705 is configured to train the neural network model according to the high-resolution images and the corresponding super-resolution images.
Optionally, the training module 705 is further configured to determine a loss function value of the initial neural network model according to the high resolution images of the multiple resolutions and the corresponding super resolution images; and adjusting parameters of the neural network model according to the loss function value until the loss function value of the adjusted neural network model converges.
Optionally, the training module 705 is further configured to determine a pixel loss value of the neural network model according to the high-resolution images of the plurality of resolutions and the corresponding super-resolution images; determine a perceptual loss value of the neural network model according to the feature maps output at a preset layer of a pre-training model by the high-resolution images of the plurality of resolutions and the corresponding super-resolution images; determine an adversarial loss value of the neural network model according to the original high-resolution image and the target super-resolution image; and determine a loss function value of the neural network model according to the pixel loss value, the perceptual loss value and the adversarial loss value.
Optionally, the training module 705 is further configured to determine, by using a discriminator, the probability that the original high-resolution image is more realistic than the target super-resolution image and the probability that the target super-resolution image is more false than the original high-resolution image, and to determine the adversarial loss value according to the true probability and the false probability.
Optionally, the training module 705 is further configured to determine the loss function value of the neural network model according to the pixel loss value, the perceptual loss value and the adversarial loss value by using a preset weighting algorithm.
Optionally, the training module 705 is further configured to adjust parameters of the neural network model by using a preset gradient descent method according to the loss function value until the loss function value of the adjusted neural network model converges.
Fig. 8 is a schematic structural diagram of an image super-resolution processing device according to an embodiment of the present invention; the device is applied to the neural network model obtained by the training method of any one of fig. 2 to 5, and the image super-resolution processing device comprises:
an acquisition module 801, configured to acquire an input low resolution image;
and the processing module 802 is configured to perform super-resolution processing on the low-resolution image by using the neural network model, so as to obtain a target super-resolution image corresponding to the low-resolution image.
The foregoing apparatus is used for executing the method provided in the foregoing embodiment, and its implementation principle and technical effects are similar, and are not described herein again.
The above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated as ASIC), one or more microprocessors (digital signal processor, abbreviated as DSP), or one or more field programmable gate arrays (Field Programmable Gate Array, abbreviated as FPGA), or the like. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (Central Processing Unit, CPU) or another processor that can invoke the program code. For another example, the modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
Fig. 9 is a schematic structural diagram of a terminal provided in an embodiment of the present application, where the terminal may include: a memory 901 and a processor 902. The memory 901 is used for storing a program, and the processor 902 calls the program stored in the memory 901 to execute the above method embodiments of fig. 2 to 6. The specific implementation manner and the technical effect are similar, and are not repeated here.
Optionally, the present invention also provides a program product, such as a computer readable storage medium, comprising a program for performing the above-described method embodiments when being executed by a processor.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods according to the embodiments of the invention. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes or substitutions are covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. A model training method, wherein the model training method is applied to a neural network model, the neural network model comprising a feature extraction module and a reconstruction module, wherein the feature extraction module comprises a plurality of feature extraction branches and a feature fusion module, wherein different feature extraction branches correspond to image features of different layers; the reconstruction module comprises a plurality of reconstruction branches, different reconstruction branches corresponding to different resolutions, the input of the reconstruction branch of a later resolution being the output of the reconstruction branch of a previous resolution, wherein the later resolution is greater than the previous resolution; the method comprises the following steps:
downsampling an original high-resolution image corresponding to an input sample low-resolution image at least once to obtain high-resolution images at a plurality of resolutions, including the original high-resolution image;
respectively adopting the plurality of feature extraction branches to extract features of the sample low-resolution image to obtain image features of a plurality of layers;
adopting the feature fusion module to fuse the image features of the multiple layers to obtain fusion features of the sample low-resolution image;
respectively adopting the plurality of reconstruction branches to reconstruct the fusion characteristics to obtain super-resolution images with the plurality of resolutions; the image output by the last reconstruction branch is a target super-resolution image corresponding to the sample low-resolution image;
training the neural network model according to the high-resolution images with the multiple resolutions and the corresponding super-resolution images;
the training the neural network model according to the high-resolution images and the corresponding super-resolution images, including:
determining a loss function value of the initial neural network model according to the high-resolution images with the multiple resolutions and the corresponding super-resolution images;
according to the loss function value, adjusting parameters of the neural network model until the loss function value of the adjusted neural network model is converged;
The determining a loss function value of the neural network model according to the high-resolution images with the multiple resolutions and the corresponding super-resolution images comprises:
determining a pixel loss value of the neural network model according to the high-resolution images with the multiple resolutions and the corresponding super-resolution images;
determining a perception loss value of the neural network model according to the high-resolution images with the multiple resolutions and the feature images output by the corresponding super-resolution images in a preset layer in a pre-training model;
determining an antagonism loss value of the neural network model according to the original high-resolution image and the target super-resolution image;
and determining a loss function value of the neural network model according to the pixel loss value, the perceived loss value and the counterloss value.
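To make the structure of claim 1 concrete, the following is a minimal PyTorch sketch, not part of the patent text: it assumes a x4 generator built from two chained x2 reconstruction branches, three feature-extraction branches of different depths standing in for the different-level features, bicubic downsampling to build the multi-resolution ground truth, and an L1 pixel loss plus a VGG19-based perceptual loss at an assumed preset layer. All module names, depths, and layer choices are illustrative assumptions, not values fixed by the patent.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision.models import vgg19, VGG19_Weights

    class MultiBranchSRNet(nn.Module):
        """Sketch of the claimed generator: multi-level feature-extraction
        branches, a fusion module, and chained reconstruction branches."""
        def __init__(self, channels=64):
            super().__init__()
            # Branches of different depths yield features of different levels.
            self.branches = nn.ModuleList(
                self._make_branch(depth, channels) for depth in (2, 4, 8))
            # Fusion module: concatenate branch outputs, mix with a 1x1 conv.
            self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
            # Two chained x2 reconstruction branches; the later branch takes
            # the output of the earlier one (x2, then x4 overall).
            self.recon = nn.ModuleList(
                nn.Sequential(
                    nn.Conv2d(channels, 4 * channels, 3, padding=1),
                    nn.PixelShuffle(2),           # x2 spatial upsampling
                    nn.ReLU(inplace=True))
                for _ in range(2))
            self.to_rgb = nn.ModuleList(
                nn.Conv2d(channels, 3, 3, padding=1) for _ in range(2))

        @staticmethod
        def _make_branch(depth, channels):
            layers = [nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 1):
                layers += [nn.Conv2d(channels, channels, 3, padding=1),
                           nn.ReLU(inplace=True)]
            return nn.Sequential(*layers)

        def forward(self, lr):
            feats = [branch(lr) for branch in self.branches]  # multi-level features
            x = self.fuse(torch.cat(feats, dim=1))            # fused features
            outputs = []
            for up, rgb in zip(self.recon, self.to_rgb):
                x = up(x)                  # later branch consumes earlier output
                outputs.append(rgb(x))
            return outputs                 # [x2 image, x4 target SR image]

    def multi_resolution_targets(hr, factors=(2, 1)):
        """Downsample the original HR image once per extra branch; the last
        target is the original HR image itself."""
        return [hr if f == 1 else
                F.interpolate(hr, scale_factor=1.0 / f, mode='bicubic',
                              align_corners=False)
                for f in factors]

    # Pre-trained model for the perceptual loss; the truncation point is an
    # assumed "preset layer", and input normalization is omitted for brevity.
    _vgg = vgg19(weights=VGG19_Weights.DEFAULT).features[:36].eval()
    for p in _vgg.parameters():
        p.requires_grad_(False)

    def pixel_and_perceptual_loss(sr_list, hr_list):
        pix = sum(F.l1_loss(sr, hr) for sr, hr in zip(sr_list, hr_list))
        perc = sum(F.l1_loss(_vgg(sr), _vgg(hr))
                   for sr, hr in zip(sr_list, hr_list))
        return pix, perc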
2. The method of claim 1, wherein the determining an adversarial loss value of the neural network model according to the original high-resolution image and the target super-resolution image comprises:
determining, using a discriminator, the probability that the original high-resolution image is real relative to the target super-resolution image, and the probability that the target super-resolution image is fake relative to the original high-resolution image; and
determining the adversarial loss value according to the real probability and the fake probability.
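One conventional realization of claim 2's real/fake probabilities is an SRGAN-style binary discriminator trained with a binary cross-entropy objective; the sketch below is written under that assumption, and the network shape is illustrative rather than taken from the patent.

    import torch
    import torch.nn as nn

    class Discriminator(nn.Module):
        """Small binary discriminator; the sigmoid of its output logit is read
        as the probability that an input image is a real HR image."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1))

        def forward(self, img):
            return self.net(img)   # raw logit; sigmoid(logit) = probability

    bce = nn.BCEWithLogitsLoss()

    def adversarial_loss(disc, hr, sr):
        """Combine the 'HR is real' and 'SR is fake' probabilities into one
        adversarial loss value (discriminator-side view)."""
        real_logit, fake_logit = disc(hr), disc(sr)
        real_term = bce(real_logit, torch.ones_like(real_logit))   # HR judged real
        fake_term = bce(fake_logit, torch.zeros_like(fake_logit))  # SR judged fake
        return real_term + fake_term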
3. The method of claim 1, wherein the determining a loss function value of the neural network model according to the pixel loss value, the perceptual loss value, and the adversarial loss value comprises:
determining the loss function value of the neural network model using a preset weighting algorithm according to the pixel loss value, the perceptual loss value, and the adversarial loss value.
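Read most simply, the "preset weighting algorithm" of claim 3 is a fixed weighted sum; the weights below are placeholder assumptions, not values from the patent.

    def total_loss(l_pixel, l_perceptual, l_adversarial,
                   w_pixel=1.0, w_perceptual=0.1, w_adversarial=1e-3):
        # Fixed-weight linear combination of the three loss terms.
        return (w_pixel * l_pixel
                + w_perceptual * l_perceptual
                + w_adversarial * l_adversarial)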
4. The method according to claim 1, wherein the adjusting parameters of the neural network model according to the loss function value until the loss function value of the adjusted neural network model converges comprises:
adjusting the parameters of the neural network model using a preset gradient descent method according to the loss function value, until the loss function value of the adjusted neural network model converges.
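A sketch of claim 4's loop, reading the "preset gradient descent method" as Adam and "convergence" as the epoch-mean loss changing by less than a tolerance; both readings are assumptions, since the patent text fixes neither.

    import torch

    def train_until_converged(model, loss_fn, loader, lr=1e-4, tol=1e-5,
                              max_epochs=1000):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        prev = float('inf')
        for _ in range(max_epochs):
            running = 0.0
            for lr_img, hr_img in loader:
                opt.zero_grad()
                loss = loss_fn(model, lr_img, hr_img)   # loss function value
                loss.backward()                         # gradients of the loss
                opt.step()                              # parameter adjustment
                running += loss.item()
            running /= len(loader)
            if abs(prev - running) < tol:               # treated as converged
                break
            prev = running
        return model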
5. An image super-resolution processing method, wherein the method is applied to the neural network model obtained by the model training method according to any one of claims 1 to 4, the image super-resolution processing method comprising:
acquiring an input low-resolution image; and
performing super-resolution processing on the low-resolution image using the neural network model, to obtain a target super-resolution image corresponding to the low-resolution image.
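Applying the trained model per claim 5 reduces to a single forward pass; this usage sketch assumes the MultiBranchSRNet defined above, whose last reconstruction branch yields the target super-resolution image.

    import torch

    @torch.no_grad()
    def super_resolve(model, lr_img):
        """lr_img: (1, 3, H, W) tensor in [0, 1]; returns the target SR image."""
        model.eval()
        outputs = model(lr_img)    # one image per reconstruction branch
        return outputs[-1]         # last branch's output = target SR image

    # Example usage (shapes are illustrative):
    # model = MultiBranchSRNet()
    # sr = super_resolve(model, torch.rand(1, 3, 64, 64))  # -> (1, 3, 256, 256)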
6. A model training apparatus, wherein the model training apparatus is applied to a neural network model, the neural network model comprising: a feature extraction module and a reconstruction module, wherein the feature extraction module comprises: a plurality of feature extraction branches and a feature fusion module, different feature extraction branches corresponding to image features of different levels; the reconstruction module comprises: a plurality of reconstruction branches, different reconstruction branches corresponding to different resolutions, the input of the reconstruction branch of a later resolution being the output of the reconstruction branch of a previous resolution, wherein the later resolution is greater than the previous resolution; the apparatus comprising:
a downsampling module, configured to downsample, at least once, an original high-resolution image corresponding to an input sample low-resolution image, to obtain high-resolution images at a plurality of resolutions;
an extraction module, configured to perform feature extraction on the sample low-resolution image using the plurality of feature extraction branches respectively, to obtain image features of a plurality of levels;
a fusion module, configured to fuse the image features of the plurality of levels using the feature fusion module, to obtain fused features of the sample low-resolution image;
a reconstruction processing module, configured to perform reconstruction processing on the fused features using the plurality of reconstruction branches respectively, to obtain super-resolution images at the plurality of resolutions, wherein the image output by the last reconstruction branch is the target super-resolution image corresponding to the sample low-resolution image; and
a training module, configured to train the neural network model according to the high-resolution images at the plurality of resolutions and the corresponding super-resolution images;
wherein the training module is further configured to determine a loss function value of the neural network model according to the high-resolution images at the plurality of resolutions and the corresponding super-resolution images, and to adjust parameters of the neural network model according to the loss function value until the loss function value of the adjusted neural network model converges; and
the training module is further configured to: determine a pixel loss value of the neural network model according to the high-resolution images at the plurality of resolutions and the corresponding super-resolution images; determine a perceptual loss value of the neural network model according to feature maps output, at a preset layer of a pre-trained model, for the high-resolution images at the plurality of resolutions and the corresponding super-resolution images; determine an adversarial loss value of the neural network model according to the original high-resolution image and the target super-resolution image; and determine the loss function value of the neural network model according to the pixel loss value, the perceptual loss value, and the adversarial loss value.
7. An image super-resolution processing apparatus, applied to the neural network model obtained by the model training method according to any one of claims 1 to 4, the apparatus comprising:
an acquisition module, configured to acquire an input low-resolution image; and
a processing module, configured to perform super-resolution processing on the low-resolution image using the neural network model, to obtain a target super-resolution image corresponding to the low-resolution image.
8. A terminal, comprising: a memory and a processor, the memory storing a computer program executable by the processor, wherein the processor implements the method of any one of claims 1 to 5 when executing the computer program.
9. A storage medium, having stored thereon a computer program which, when read and executed, implements the method of any one of claims 1 to 5.
CN202010141266.3A 2020-03-03 2020-03-03 Model training and image super-resolution processing method, device, terminal and storage medium Active CN111369440B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010141266.3A CN111369440B (en) 2020-03-03 2020-03-03 Model training and image super-resolution processing method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010141266.3A CN111369440B (en) 2020-03-03 2020-03-03 Model training and image super-resolution processing method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111369440A CN111369440A (en) 2020-07-03
CN111369440B true CN111369440B (en) 2024-01-30

Family

ID=71211172

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010141266.3A Active CN111369440B (en) 2020-03-03 2020-03-03 Model training and image super-resolution processing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111369440B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111861888A (en) * 2020-07-27 2020-10-30 上海商汤智能科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111933086B (en) * 2020-08-19 2022-01-21 惠科股份有限公司 Display device and resolution reduction method thereof
CN112037129B (en) * 2020-08-26 2024-04-19 广州视源电子科技股份有限公司 Image super-resolution reconstruction method, device, equipment and storage medium
CN112084908A (en) * 2020-08-28 2020-12-15 广州汽车集团股份有限公司 Image processing method and system and storage medium
CN111968064B (en) * 2020-10-22 2021-01-15 成都睿沿科技有限公司 Image processing method and device, electronic equipment and storage medium
CN112734642B (en) * 2021-01-12 2023-03-10 武汉工程大学 Remote sensing satellite super-resolution method and device of multi-scale texture transfer residual error network
CN112862681B (en) * 2021-01-29 2023-04-14 中国科学院深圳先进技术研究院 Super-resolution method, device, terminal equipment and storage medium
CN112784857B (en) * 2021-01-29 2022-11-04 北京三快在线科技有限公司 Model training and image processing method and device
CN113191495A (en) * 2021-03-26 2021-07-30 网易(杭州)网络有限公司 Training method and device for hyper-resolution model and face recognition method and device, medium and electronic equipment
CN113269676B (en) * 2021-05-19 2023-01-10 北京航空航天大学 Panoramic image processing method and device
CN113298092A (en) * 2021-05-28 2021-08-24 有米科技股份有限公司 Neural network training method and device for extracting multi-level image contour information
CN113920013B (en) * 2021-10-14 2023-06-16 中国科学院深圳先进技术研究院 Super-resolution-based small image multi-target detection method
CN116071478B (en) * 2023-04-06 2023-06-30 腾讯科技(深圳)有限公司 Training method of image reconstruction model and virtual scene rendering method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108428212A (en) * 2018-01-30 2018-08-21 中山大学 A kind of image magnification method based on double laplacian pyramid convolutional neural networks
CN110647934A (en) * 2019-09-20 2020-01-03 北京百度网讯科技有限公司 Training method and device for video super-resolution reconstruction model and electronic equipment
CN110660020A (en) * 2019-08-15 2020-01-07 天津中科智能识别产业技术研究院有限公司 Image super-resolution method of countermeasure generation network based on fusion mutual information

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102184755B1 (en) * 2018-05-31 2020-11-30 서울대학교 산학협력단 Apparatus and Method for Training Super Resolution Deep Neural Network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108428212A (en) * 2018-01-30 2018-08-21 中山大学 A kind of image magnification method based on double laplacian pyramid convolutional neural networks
CN110660020A (en) * 2019-08-15 2020-01-07 天津中科智能识别产业技术研究院有限公司 Image super-resolution method of countermeasure generation network based on fusion mutual information
CN110647934A (en) * 2019-09-20 2020-01-03 北京百度网讯科技有限公司 Training method and device for video super-resolution reconstruction model and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ni Shenlong; Zeng Jiexian; Zhou Shijian. Research on super-resolution reconstruction of whole license-plate images. Computer Technology and Development, 2018, (04), full text. *

Also Published As

Publication number Publication date
CN111369440A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
CN111369440B (en) Model training and image super-resolution processing method, device, terminal and storage medium
CN110570353B (en) Super-resolution reconstruction method for generating single image of countermeasure network by dense connection
Lan et al. MADNet: a fast and lightweight network for single-image super resolution
CN110033410B (en) Image reconstruction model training method, image super-resolution reconstruction method and device
CN110992270A (en) Multi-scale residual attention network image super-resolution reconstruction method based on attention
CN111476719B (en) Image processing method, device, computer equipment and storage medium
CN110717857A (en) Super-resolution image reconstruction method and device
CN111105352A (en) Super-resolution image reconstruction method, system, computer device and storage medium
CN109146788A (en) Super-resolution image reconstruction method and device based on deep learning
CN112507997A (en) Face super-resolution system based on multi-scale convolution and receptive field feature fusion
CN114549308B (en) Image super-resolution reconstruction method and system with large receptive field and oriented to perception
CN112488923A (en) Image super-resolution reconstruction method and device, storage medium and electronic equipment
CN115439470B (en) Polyp image segmentation method, computer readable storage medium and computer device
CN115601281A (en) Remote sensing image space-time fusion method and system based on deep learning and electronic equipment
CN113837941A (en) Training method and device for image hyper-resolution model and computer readable storage medium
CN117575915A (en) Image super-resolution reconstruction method, terminal equipment and storage medium
CN116630152A (en) Image resolution reconstruction method and device, storage medium and electronic equipment
CN116862765A (en) Medical image super-resolution reconstruction method and system
CN113496228B (en) Human body semantic segmentation method based on Res2Net, transUNet and cooperative attention
CN115713462A (en) Super-resolution model training method, image recognition method, device and equipment
Liu et al. Hyperspectral image super-resolution employing nonlocal block and hybrid multiscale three-dimensional convolution
CN112634126A (en) Portrait age reduction processing method, portrait age reduction training device, portrait age reduction equipment and storage medium
Qiu et al. Image Super-Resolution Method Based on Dual Learning
Dargahi et al. Single image super-resolution by cascading parallel-structure units through a deep-shallow CNN
Mehmood Deep learning based super resolution of aerial and satellite imagery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant