WO2021052261A1 - Image super-resolution reconstruction method and device using sharpened label data - Google Patents

Image super-resolution reconstruction method and device using sharpened label data

Info

Publication number
WO2021052261A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
resolution
super
data
neural network
Prior art date
Application number
PCT/CN2020/114881
Other languages
English (en)
French (fr)
Inventor
朱俊杰
范湘涛
杜小平
刘健
Original Assignee
中国科学院空天信息创新研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院空天信息创新研究院 filed Critical 中国科学院空天信息创新研究院
Priority to US17/636,044 priority Critical patent/US20220335573A1/en
Publication of WO2021052261A1 publication Critical patent/WO2021052261A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4076Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution using the original low-resolution images to iteratively correct the high-resolution images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Definitions

  • This application relates to computer image restoration technology, in particular to an image super-resolution restoration technology based on deep learning.
  • This application proposes an image super-resolution reconstruction method and device using sharpened label data.
  • The label data is sharpened so that its edges are sharper and the ground objects clearer.
  • A model is obtained by training the deep neural network and is used to perform super-resolution processing on images; the resulting higher-resolution images show significantly improved texture resolution and sharpness, and better image quality evaluation metrics.
  • The method of the present application can effectively solve the above problems of existing image super-resolution restoration techniques.
  • This application provides an image super-resolution reconstruction method, which includes: inputting sample data into a pre-established super-resolution image generation neural network, the sample data being obtained by performing low-resolution processing on the original image; the super-resolution image generation neural network extracting image features from the input low-resolution image, then reconstructing an image from the extracted features to obtain an output super-resolution image; if the similarity between the super-resolution image and the label data does not meet a preset standard, adjusting the parameters of the super-resolution image generation neural network, wherein the label data is obtained by sharpening the original image; and inputting a low-resolution image into the trained super-resolution image generation neural network to obtain a super-resolution image.
  • the super-resolution image generation neural network includes multi-layer convolution calculation, and at least a part of the multi-layer convolution calculation adopts narrow convolution.
  • the super-resolution image generation neural network includes multi-layer convolution calculation, and each layer of convolution calculation adopts LRN local response normalization.
  • The method for preparing the sample data includes: sub-sampling the original image to obtain a low-resolution image; applying Sobel operator filtering to the low-resolution image to obtain a Sobel edge image; and interpolating the four bands formed by the red, green, and blue bands of the low-resolution image plus the Sobel edge image, to obtain an image with the same resolution as the original image as sample data.
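The preparation steps in this bullet (sub-sample, Sobel-filter, stack four bands, interpolate back to full size) can be sketched as follows. This is a minimal NumPy illustration and not the patent's implementation: the sub-sampling factor, the use of luminance for the Sobel band, and nearest-neighbor upsampling in place of the unspecified interpolation are all assumptions.

```python
import numpy as np

def sobel_edge(gray):
    """Sobel gradient magnitude of a 2-D array (zero-padded borders)."""
    gx_k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gy_k = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float)
    p = np.pad(gray, 1)
    gx = np.zeros_like(gray, dtype=float)
    gy = np.zeros_like(gray, dtype=float)
    for dy in range(3):
        for dx in range(3):
            win = p[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
            gx += gx_k[dy, dx] * win
            gy += gy_k[dy, dx] * win
    return np.sqrt(gx ** 2 + gy ** 2)

def make_sample(rgb, factor=2):
    """Sub-sample an HxWx3 image, add a Sobel band, upsample back to HxW."""
    low = rgb[::factor, ::factor]                 # sub-sampling
    edge = sobel_edge(low.mean(axis=2))           # Sobel band from luminance (assumption)
    four = np.dstack([low, edge])                 # 4-band stack: R, G, B, edge
    # nearest-neighbor stands in for the unspecified interpolation method
    return four.repeat(factor, axis=0).repeat(factor, axis=1)

rgb = np.random.rand(8, 8, 3)
sample = make_sample(rgb)
print(sample.shape)  # (8, 8, 4)
```

The resulting 4-band array, at the original resolution, is what the described method feeds to the network as a training sample.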
  • The method for producing the label data includes: obtaining the label data by sharpening the original image, the sharpening method being USM (unsharp mask) sharpening; the parameters used for the USM sharpening are set as follows: threshold 3, radius 1-1.2, and amount 20%-25%.
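The USM operation this bullet parameterizes can be sketched with the textbook unsharp-mask formula: output = input + amount x high-pass detail, applied only where the detail exceeds the threshold. A box blur stands in for the radius-controlled Gaussian of a real USM filter, so this is an illustrative approximation rather than the exact filter the patent used.

```python
import numpy as np

def usm_sharpen(img, amount=0.20, radius=1, threshold=3):
    """Unsharp-mask sharpening of a 2-D grayscale array: add back a fraction
    of the high-pass detail wherever it exceeds the threshold."""
    k = 2 * radius + 1
    p = np.pad(img.astype(float), radius, mode="edge")
    blur = np.zeros_like(img, dtype=float)
    for dy in range(k):                      # box blur (stand-in for Gaussian)
        for dx in range(k):
            blur += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blur /= k * k
    detail = img - blur                      # high-pass component
    mask = np.abs(detail) > threshold        # only pixels that count as edges
    out = img + amount * detail * mask
    return np.clip(out, 0, 255)

img = np.zeros((6, 6))
img[:, 3:] = 100.0                           # a vertical step edge
sharp = usm_sharpen(img, amount=0.20, radius=1, threshold=3)
print(sharp.max() > 100)                     # True: overshoot at the edge
```

The overshoot on either side of the step is exactly the "sharper edges" effect the label-data sharpening relies on.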
  • In another aspect, an image super-resolution reconstruction device is provided, which includes: a sample data generation module for performing low-resolution processing on the original image to obtain training sample data for the super-resolution image generation neural network module; a label data generation module for sharpening the original image to obtain training label data for the super-resolution image generation neural network module; and a super-resolution image generation neural network module trained on the sample data and the label data; the trained super-resolution image generation neural network module computes on the input image to obtain its super-resolution image.
  • the super-resolution image generation neural network includes a multi-layer convolution calculation sub-unit, and at least a part of the multi-layer convolution calculation sub-unit adopts narrow convolution.
  • the super-resolution image generation neural network includes a multi-layer convolution calculation subunit, and each layer of the convolution calculation subunit adopts LRN local response normalization.
  • The device includes: a sub-unit for sub-sampling the original image to obtain a low-resolution image; a sub-unit for applying Sobel filtering to the low-resolution image to obtain a Sobel edge image; and a sub-unit for interpolating the four bands formed by the red, green, and blue bands of the low-resolution image plus the Sobel edge image, to obtain an image with the same resolution as the original image as sample data.
  • The label data generation module adopts USM sharpening, and the parameters used in the USM method are set as follows: threshold 3, radius 1-1.2, and amount 20%-25%.
  • The label data is sharpened to a certain extent so that its edges are sharper and the ground objects clearer.
  • The model is obtained by training the deep neural network and is used to perform super-resolution processing on images to obtain higher-resolution images; texture resolution and sharpness are significantly improved, and the image quality evaluation metrics are better.
  • FIG. 1 is a neural network model diagram of a method for image super-resolution reconstruction using sharpened label data provided by an embodiment of the present application;
  • Figure 2 is a data conversion and flow chart of the image super-resolution reconstruction method using sharpened label data;
  • Figure 3 is a block diagram of an image super-resolution reconstruction device
  • Figure 4 is a comparison diagram of the original data and the sample data obtained after sub-sampling and interpolation
  • Figure 5 is a comparison diagram of super-resolution results with RGB channel as input data, Sobel edge+RGB channel as input data, and label data sharpened;
  • Figure 6 is a comparison diagram of original data and super-resolution results
  • Figure 7 is an enlarged comparison diagram of the original data and the super-resolution result.
  • Image super-resolution aims to improve image resolution and clarity, making the objects in the image clearer and the texture crisper.
  • The label data is sharpened to a certain extent so that its edges are sharper and the ground objects clearer.
  • The model is obtained by training the deep neural network and is used to perform super-resolution processing on images to obtain higher-resolution images; the resolution and clarity of the image texture are significantly improved, and the image quality evaluation metrics are better.
  • FIG. 1 is a neural network model diagram for image super-resolution reconstruction using sharpened label data provided by an embodiment of the present application.
  • The embodiment of this application first designs a deep learning network model for image super-resolution calculation, then produces sample data and label data, uses them to train the network, and obtains a mature super-resolution network through training. Finally, the low-resolution image is input into the trained network to obtain the image super-resolution result.
  • the original image is sharpened, and the obtained image is used as the label data.
  • The pre-established super-resolution image generation neural network can extract image features from the input low-resolution image, reconstruct the image from the extracted features, and output the super-resolution image.
  • The image super-resolution reconstruction process with sharpened label data is as follows:
  • the designed super-resolution image generation neural network model is a convolutional neural network model.
  • the super-resolution image generation neural network model may be a 6-layer convolutional neural network model.
  • The input data can be 4-channel data composed of the color image plus a Sobel channel. Convolution with a 5X5 template yields a first layer with 64 features, and a 3X3 convolution template yields a second layer with 128 features.
  • The sixth layer is a convolutional layer with 3 features and is the final super-resolution result.
  • Each layer must undergo LRN local response normalization; otherwise the network model is likely to overfit.
  • Instead of same convolution, a narrow convolution strategy is used, so the image becomes smaller after each convolution layer: with a 21X21 input, the output size of the network model becomes 3X3.
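The shrinkage caused by the narrow (valid) convolutions can be verified with simple size arithmetic: each k x k valid convolution removes k-1 pixels per spatial dimension. A short sketch, using the kernel sizes of the 6-layer model described above:

```python
def valid_out_size(size, kernels):
    """Spatial size after a chain of 'valid' (narrow) convolutions."""
    for k in kernels:
        size -= k - 1      # a kxk valid conv trims k-1 pixels per dimension
    return size

kernels = [5, 3, 7, 3, 3, 3]        # the six conv layers of the described model
print(valid_out_size(21, kernels))  # 3: a 21x21 input patch shrinks to 3x3
```

This matches the 21X21-in, 3X3-out behavior stated in the text: 21 - 4 - 2 - 6 - 2 - 2 - 2 = 3.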
  • The inventor holds that a deep neural network without regularization tends to overfit, and overfitting typically stems from two aspects: the data and the model.
  • On the data side, the amount of data is often small, or the data is not comprehensive or typical; on the model side, the model is often too complex.
  • Network regularization can effectively eliminate over-fitting.
  • LRN can perform local normalization on a layer of the network model to obtain a local normalization layer.
  • It was first defined by the AlexNet network, and the method can very effectively reduce overfitting. Its calculation formula is as follows:
  • b^i_{x,y} = a^i_{x,y} / ( k + α Σ_{j=max(0, i−n/2)}^{min(N−1, i+n/2)} (a^j_{x,y})² )^β
  • where b^i_{x,y} is the normalized value, i is the input channel, and x and y are the current pixel position;
  • a^i_{x,y} is the input value, i.e. the output value of the neuron activation function;
  • k, α, β, and n/2 are all user-defined coefficients;
  • N is the total number of channels; the sum of squares accumulated over neighboring channels is computed to obtain the local normalization result.
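The LRN formula described here is the standard AlexNet cross-channel normalization; a minimal NumPy sketch follows. The default coefficient values are illustrative, not taken from the patent.

```python
import numpy as np

def lrn(a, k=2.0, alpha=1e-4, beta=0.75, n=5):
    """AlexNet-style local response normalization across channels.
    a has shape (C, H, W); each activation is divided by a power of k plus
    the scaled sum of squares over its n neighboring channels."""
    C = a.shape[0]
    out = np.empty_like(a, dtype=float)
    for i in range(C):
        lo, hi = max(0, i - n // 2), min(C - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0)) ** beta
        out[i] = a[i] / denom
    return out

a = np.ones((3, 2, 2))
b = lrn(a)
print(b.shape)                          # (3, 2, 2)
print(np.all(np.abs(b) <= np.abs(a)))   # True: activations are damped
```

Because the denominator grows with the activity of neighboring channels, strongly co-active channels are damped relative to their neighbors, which is the regularizing effect the text attributes to LRN.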
  • The original color image is sub-sampled to obtain a low-resolution image, which is filtered with the Sobel operator to obtain the Sobel operator edge image.
  • The red, green, and blue bands plus the Sobel edge band form 4 bands of data; these 4 bands are interpolated to the same resolution as the original image, and this up-sampled 4-band data is used as sample data.
  • Sobel is a very classic edge detection method.
  • the Sobel operator is composed of two sets of 3x3 small templates, namely the horizontal template and the vertical template, which are respectively convolved with the image to obtain the horizontal and vertical brightness difference values.
  • A is an image slice
  • Gx and Gy are horizontal and vertical convolution templates, respectively.
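The two Sobel templates and the resulting gradient magnitude can be shown on a tiny slice. These are the standard Sobel kernels, applied here at a single 3x3 window for clarity:

```python
import numpy as np

# The two classic Sobel templates: horizontal and vertical difference.
GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
GY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float)

def sobel_at(A):
    """Gradient magnitude at the center of a 3x3 image slice A."""
    gx = np.sum(GX * A)            # horizontal brightness difference
    gy = np.sum(GY * A)            # vertical brightness difference
    return np.sqrt(gx ** 2 + gy ** 2)

# A vertical step edge: strong horizontal gradient, no vertical gradient.
A = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 1]], float)
print(sobel_at(A))  # 4.0 (gx = 4, gy = 0)
```

Sliding this window over the whole low-resolution image produces the Sobel edge band that the sample-preparation step stacks with the RGB bands.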
  • the standard unsharp mask (USM) sharpening method is used to make label data.
  • USM sharpening is a good image sharpening method that sharpens image edges and quickly adjusts the contrast of edge details. It has three parameters: amount, the first parameter, controls the intensity of the sharpening effect; radius, the second parameter, specifies the sharpening radius and determines how many pixels around an edge pixel are affected by the sharpening (the higher the image's resolution, the larger the radius setting should be); and threshold, the third parameter, is the comparison value between adjacent pixels. These parameters determine how much a pixel's tone must differ from the pixels in the surrounding area for it to be regarded as an edge pixel and sharpened by the USM filter.
  • The label data is obtained by sharpening the original data with the USM method; the parameters are set to threshold 3, radius 1, and amount 20%, and the sharpened image is used as the training label data.
  • The settings of these three parameters in the USM algorithm are empirical values obtained through several experiments, and this combination of values is optimal; unsuitable settings lead to overfitting or underfitting.
  • The training process is as follows: input the sample data into the super-resolution image generation neural network to obtain an output super-resolution image; compare the label data with the output super-resolution image, and if their similarity does not reach the preset standard, adjust the parameters of the super-resolution image generation neural network; then repeat the process of feeding sample data to the network, comparing the label data with the network's output image, and deciding from the comparison result whether to adjust the network's parameters.
  • If the comparison result shows that the similarity between the label data and the output super-resolution image reaches the preset standard, the parameters of the image super-resolution neural network are no longer adjusted, and the training is complete.
  • In the trained super-resolution model, the parameters of the neural network have been adjusted and perfected; it can then be used to satisfactorily complete the task of converting low-resolution images into super-resolution images.
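The adjust-until-similar training loop just described can be illustrated abstractly. Here a toy model with a single gain parameter stands in for the convolutional network, and RMSE stands in for the unspecified similarity measure; every name, value, and threshold is illustrative.

```python
import numpy as np

def train(sample, label, lr=0.1, tol=1e-3, max_steps=1000):
    """Adjust the model parameter until output/label similarity meets the standard."""
    w = 0.0                                    # the toy 'network parameter'
    for _ in range(max_steps):
        out = w * sample                       # forward pass of the toy model
        err = out - label
        if np.sqrt(np.mean(err ** 2)) < tol:   # similarity reached the preset standard
            break
        w -= lr * np.mean(err * sample)        # parameter adjustment (gradient step)
    return w

x = np.array([1.0, 2.0, 3.0])
w = train(x, 2.0 * x)                          # labels generated with gain 2
print(round(w, 2))  # 2.0
```

The loop structure (forward pass, similarity check, parameter update, repeat) is the same regardless of whether the model is this one-parameter toy or the 6-layer convolutional network of the embodiment.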
  • Fig. 3 is a schematic diagram of an image super-resolution reconstruction device.
  • The device includes: a super-resolution image generation neural network module 101 for obtaining a super-resolution image from the input low-resolution image; a sample data generation module 103 for performing low-resolution processing on the original image to obtain training sample data for the super-resolution image generation neural network module 101;
  • and a label data generation module 105 for sharpening the original image to obtain training label data for the super-resolution image generation neural network module 101. The super-resolution image generation neural network module 101 trains on the sample data and the label data to obtain a mature super-resolution image generation neural network, which is used to compute on the input image and obtain its super-resolution image.
  • the super-resolution image generation neural network module 101 includes a multi-layer convolution calculation sub-unit, and at least a part of the multi-layer convolution calculation sub-unit adopts narrow convolution.
  • In one example, a 6-layer super-resolution image generation neural network is established.
  • The input data of the super-resolution image generation neural network is 4-channel data composed of the color image plus a Sobel channel.
  • Convolution with a 5X5 template yields a first convolutional layer with 64 features; convolution with a 3X3 template yields a second layer with 128 features; convolution with a 7X7 template yields a third layer with 32 features; convolution with a 3X3 template yields a fourth layer with 16 features; convolution with a 3X3 template yields a fifth layer with 8 features; and a final convolution with a 3X3 template yields a sixth layer with 3 features, from which the super-resolution result is obtained.
  • The convolution calculation subunit of each layer of the super-resolution image generation neural network undergoes LRN local response normalization, computed as follows:
  • b^i_{x,y} = a^i_{x,y} / ( k + α Σ_{j=max(0, i−n/2)}^{min(N−1, i+n/2)} (a^j_{x,y})² )^β
  • where b^i_{x,y} is the normalized value, i is the input channel, and x and y are the current pixel position; a^i_{x,y} is the input value, i.e. the output value of the neuron activation function; k, α, β, and n/2 are all user-defined coefficients; and N is the total number of channels.
  • The sample data generating module 103 further includes: a sub-unit for sub-sampling the original image to obtain a low-resolution image; a sub-unit for applying Sobel operator filtering to the low-resolution image to obtain the Sobel operator edge image; and a sub-unit for interpolating the 4 bands formed by the red, green, and blue bands of the low-resolution image plus the Sobel operator edge image, obtaining an image with the same resolution as the original image as sample data.
  • The label data generation module 105 adopts USM sharpening; the parameters used in the USM sharpening are set as follows: threshold 3, radius 1-1.2, and amount 20%-25%.
  • Figure 4 is a comparison diagram of the original data and the sample data obtained by sub-sampling and interpolation.
  • The left part of the figure is the original data, and the right part is the sample data obtained by sub-sampling and interpolation.
  • The original data has higher resolution, a clearer image, and finer texture.
  • The sample data obtained by sub-sampling and then interpolation is obviously blurred, with unclear edges.
  • Such data can be used as the sample data for training the model.
  • Figure 5 is a comparison diagram of super-resolution results with RGB channel as input data, Sobel edge+RGB channel as input data, and label data sharpened.
  • In the first model, the input data is sample data of the red, green, and blue bands, and the label data is the original data.
  • In the second, the input data is the red, green, and blue bands plus the Sobel edge band, 4 bands in total, and the label data is the original data.
  • In the third, the input data is likewise the 4 bands of red, green, and blue plus the Sobel edge band, and the label data is the USM-sharpened data.
  • Likewise, the super-resolution results computed by the model trained with Sobel-operator images as training data and USM-sharpened images as label data show no overfitting.
  • Adding an edge detection operator to the training sample data can effectively improve the quality of super-resolution images.
  • The indicators for image quality evaluation are statistical indicators, including sharpness, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root mean square error (RMSE).
  • The peak signal-to-noise ratio is a statistic comparing signal intensity with background-noise intensity. The comparison shows that the sharpened-label super-resolution result of the embodiment of the present application has the largest PSNR value;
  • that is, the embodiment's scheme can suppress the generation of noise while increasing the amount of information.
  • Structural similarity is used to evaluate the quality of super-resolution images from three aspects: image brightness similarity, image contrast similarity and image structure similarity.
  • For the other three indicators, the results of the four-band sample-data model are all better than those computed from the red, green, and blue bands alone.
  • The high sharpness of the super-resolution result with the red, green, and blue bands as input is due to false edges caused by overfitting, not genuinely better image quality; this comparison also proves that the super-resolution results involving the edge operator are better.
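Of the metrics compared here, RMSE and PSNR are easy to state precisely; a minimal sketch for images with a peak value of 255 follows (SSIM is omitted because it requires windowed statistics):

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two images: smaller means closer."""
    return np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB: larger means less noise vs. signal."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

ref = np.full((4, 4), 100.0)
noisy = ref + 5.0                  # constant error of 5 gray levels
print(rmse(ref, noisy))            # 5.0
print(round(psnr(ref, noisy), 2))  # 34.15
```

A lower RMSE against the label image and a higher PSNR are exactly the directions in which the sharpened-label results are reported to win in the comparison above.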
  • The method and device for image super-resolution reconstruction using sharpened label data disclosed in this application have the following advantages: the label data is sharpened to a certain extent, making its edges sharper and the ground objects clearer.
  • The model is obtained by training the deep neural network and is used to perform super-resolution processing on images to obtain higher-resolution images; texture resolution and sharpness are significantly improved, and the image quality evaluation metrics are better.
  • Also provided is a computing device including a memory and a processor; the memory stores executable code, and when the processor executes the executable code, the method described above is implemented.
  • the steps of the method or algorithm described in combination with the embodiments disclosed in this document can be implemented by hardware, a software module executed by a processor, or a combination of the two.
  • The software module can be placed in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disks, removable disks, CD-ROMs, or any other form of storage medium known in the technical field.


Abstract

An image super-resolution reconstruction method and device using sharpened label data. The method includes: establishing a deep neural network for image reconstruction from low resolution to high resolution; obtaining low-resolution sample data by sub-sampling and Sobel operator computation; and then sharpening the label data to a certain extent so that the edges of the label data are sharper and the ground objects clearer. A model is thus obtained by training the deep neural network and is used to perform super-resolution processing on images, yielding higher-resolution images whose texture resolution and sharpness are markedly improved, with better image quality evaluation metrics.

Description

Image super-resolution reconstruction method and device using sharpened label data
Technical Field
This application relates to computer image restoration technology, and in particular to an image super-resolution restoration technology based on deep learning.
Background
In recent years, deep learning has developed rapidly in the field of multimedia processing, and deep-learning-based image super-resolution restoration has gradually become the mainstream technique. One of its assumptions is that a mapping exists between low-resolution image samples and high-resolution image label data; the learning capability of a neural network is used to establish the mapping between these two kinds of data and thereby realize the image super-resolution function. Image super-resolution reconstruction algorithms based on convolutional neural networks verified that deep learning has superior performance in image super-resolution and proved that a convolutional neural network can map low-resolution images to high-resolution images; using deep neural networks for image super-resolution became a pioneering line of scientific research.
Because deep learning with convolutional neural networks involves a large number of convolution operations, image pixels are inevitably smoothed, which blurs textures to a certain degree. Later, some researchers turned the sample data into multi-channel data: rather than only a three-channel color image, the original sample image was processed to yield data carrying multiple kinds of information, with three interpolations and five sharpenings of the sample data producing 18 channels of input data. The channel count of the input samples was extended in order to preserve texture information, but with so many input channels the effect was still not very pronounced. Others adopted a 20-layer deep residual neural network; the residual network's characteristic connections describe the difference between adjacent layers and are simple and easy to converge, and, combined with a sub-pixel convolution layer in single-image reconstruction, they improve reconstruction efficiency, reduce reconstruction time, and improve the texture of the super-resolution result. Super-resolution methods based on adversarial networks have proven to be even better. Image super-resolution with deep neural networks is already a popular technique with very good application prospects.
These prior investigations focus mainly on two aspects: the choice of input sample data and the design of the network model. This literature has advanced image super-resolution, and although research on these two aspects has improved image resolution and sharpness to some extent, a gap to the original data remains, leaving room for further improvement in image sharpness and resolution.
Summary of the Invention
Addressing the shortcomings of the prior art, this application proposes an image super-resolution reconstruction method and device using sharpened label data. By sharpening the label data to a certain extent, the edges of the label data become sharper and the ground objects clearer; a model is then obtained by training a deep neural network and used to perform super-resolution processing on images, yielding higher-resolution images whose texture resolution and sharpness are markedly improved, with better image quality evaluation metrics. The method of this application can effectively solve the above problems of existing image super-resolution restoration techniques.
This application provides an image super-resolution reconstruction method, which includes: inputting sample data into a pre-established super-resolution image generation neural network, the sample data being obtained by performing low-resolution processing on the original image; the super-resolution image generation neural network extracting image features from the input low-resolution image and reconstructing an image from the extracted features to obtain an output super-resolution image; if the similarity between the super-resolution image and the label data does not reach a preset standard, adjusting the parameters of the super-resolution image generation neural network, wherein the label data is obtained by sharpening the original image; and inputting a low-resolution image into the trained super-resolution image generation neural network to obtain a super-resolution image.
Preferably, the super-resolution image generation neural network includes multiple layers of convolution calculation, at least some of which use narrow convolution.
Preferably, the super-resolution image generation neural network includes multiple layers of convolution calculation, each of which uses LRN local response normalization.
Preferably, the method for preparing the sample data includes: sub-sampling the original image to obtain a low-resolution image; applying Sobel operator filtering to the low-resolution image to obtain a Sobel edge image; and interpolating the four bands formed by the red, green, and blue bands of the low-resolution image plus the Sobel edge image, to obtain an image with the same resolution as the original image as sample data.
Preferably, the method for producing the label data includes: obtaining the label data by sharpening the original image, the sharpening method being USM sharpening; the parameters used for the USM sharpening are set as follows: threshold 3, radius 1-1.2, and amount 20%-25%.
In another aspect, this application provides an image super-resolution reconstruction device, which includes: a sample data generation module for performing low-resolution processing on the original image to obtain training sample data for the super-resolution image generation neural network module; a label data generation module for sharpening the original image to obtain training label data for the super-resolution image generation neural network module; and a super-resolution image generation neural network module trained on the sample data and the label data; the trained super-resolution image generation neural network module computes on the input image to obtain its super-resolution image.
Preferably, the super-resolution image generation neural network includes multiple convolution calculation sub-units, at least some of which use narrow convolution.
Preferably, the super-resolution image generation neural network includes multiple convolution calculation sub-units, each of which uses LRN local response normalization.
Preferably, the device includes: a sub-unit for sub-sampling the original image to obtain a low-resolution image; a sub-unit for applying Sobel operator filtering to the low-resolution image to obtain a Sobel edge image; and a sub-unit for interpolating the four bands formed by the red, green, and blue bands of the low-resolution image plus the Sobel edge image, to obtain an image with the same resolution as the original image as sample data.
Preferably, the label data generation module adopts USM sharpening, and the parameters used in the USM method are set as follows: threshold 3, radius 1-1.2, and amount 20%-25%.
Compared with the prior art, the technical solution adopted by this application has the following technical advantages:
The label data is sharpened to a certain extent so that its edges are sharper and the ground objects clearer; a model is obtained by training the deep neural network and is used to perform super-resolution processing on images, yielding higher-resolution images whose texture resolution and sharpness are markedly improved, with better image quality evaluation metrics.
Brief Description of the Drawings
The embodiments of this specification can be made clearer by describing them with reference to the accompanying drawings:
FIG. 1 is a neural network model diagram of an image super-resolution reconstruction method using sharpened label data provided by an embodiment of this application;
FIG. 2 is a data conversion and flow chart of the image super-resolution reconstruction method using sharpened label data;
FIG. 3 is a block diagram of the image super-resolution reconstruction device;
FIG. 4 is a comparison of the original data and the sample data obtained by sub-sampling and interpolation;
FIG. 5 is a comparison of super-resolution results with RGB channels as input data, Sobel edge + RGB channels as input data, and sharpened label data;
FIG. 6 is a comparison of the original data and the super-resolution results;
FIG. 7 is an enlarged comparison of the original data and the super-resolution results.
Detailed Description
Embodiments of the technical solution of this application are described in detail below with reference to the accompanying drawings.
Note that, unless otherwise specified, technical or scientific terms used in this application shall have the ordinary meaning understood by those skilled in the art to which this invention belongs.
The purpose of image super-resolution is to improve image resolution and clarity, making the ground objects in the image clearer and the texture crisper. By sharpening the label data to a certain extent, this application makes the edges of the label data sharper and the ground objects clearer; a model obtained by training a deep neural network is then used to perform super-resolution processing on images, yielding higher-resolution images whose texture resolution and sharpness are markedly improved, with better image quality evaluation metrics.
FIG. 1 is a neural network model diagram of image super-resolution reconstruction using sharpened label data provided by an embodiment of this application. The embodiment first designs a deep learning network model for image super-resolution calculation, then prepares sample data and label data, uses them to train the network to obtain a mature super-resolution network, and finally feeds a low-resolution image into the trained network to obtain the image super-resolution result.
As shown in FIG. 1, low-resolution processing is applied to the original image, and the resulting image is used as the sample data.
At the same time, or before or after, the original image is sharpened, and the resulting image is used as the label data.
The sample data and label data are used to train the pre-established super-resolution image generation neural network; this network can extract image features from an input low-resolution image, reconstruct an image from the extracted features, and output a super-resolution image.
A low-resolution image is input into the trained super-resolution image generation neural network to obtain the output super-resolution image.
In one embodiment, the image super-resolution reconstruction process with sharpened label data is as follows:
1. Designing the super-resolution image generation neural network model
In one example, the designed super-resolution image generation neural network model is a convolutional neural network, specifically a 6-layer convolutional neural network model. The input data can be 4-channel data composed of the color image plus a Sobel channel. Convolution with a 5X5 template yields a first layer with 64 features; a 3X3 convolution template yields a second layer with 128 features; a 7X7 template yields a third layer with 32 features; a 3X3 template yields a fourth layer with 16 features; a 3X3 template yields a fifth layer with 8 features; and a final 3X3 template yields a sixth layer with 3 features, which is the final super-resolution result. See FIG. 2 for the convolutional neural network model.
In one example, every layer undergoes LRN local response normalization, without which the network model is prone to overfitting. In this embodiment, the convolution calculation in each layer does not use same convolution but a narrow convolution strategy, so the image becomes smaller after each convolution layer: with a 21X21 input image, the output of the network model becomes 3X3.
Regarding the normalization in this embodiment, i.e. network regularization: the inventor holds that a deep neural network without regularization tends to overfit, and overfitting arises from two aspects, the data and the model. On the data side, the amount of data is often small, or the data is not comprehensive or typical; on the model side, the model is often too complex. Network regularization can effectively eliminate overfitting. LRN performs local normalization on one layer of the network model to obtain a locally normalized layer; it was first defined by the AlexNet network, and the method can very effectively reduce overfitting. Its calculation formula is as follows:
b^i_{x,y} = a^i_{x,y} / ( k + α Σ_{j=max(0, i−n/2)}^{min(N−1, i+n/2)} (a^j_{x,y})² )^β
where b^i_{x,y} is the normalized value, i is the input channel, and x and y are the current pixel position; a^i_{x,y} is the input value, i.e. the output value of the neuron activation function; k, α, β, and n/2 are all user-defined coefficients; and N is the total number of channels. Finally, the sum of squares accumulated over the different channels is computed to obtain the local normalization result.
2. Preparing the sample data
In one example, the original color image is sub-sampled to obtain a low-resolution image, which is filtered with the Sobel operator to obtain the Sobel operator edge image; the red, green, and blue bands plus the Sobel edge band form 4 bands of data. These 4 bands of data are then interpolated to the same resolution as the original image, and this up-sampled 4-band data is used as the sample data.
Sobel is a classic edge detection method. The Sobel operator consists of two 3x3 templates, a horizontal one and a vertical one, which are convolved with the image to obtain the horizontal and vertical brightness difference values. Let A be an image slice and Gx and Gy the horizontal and vertical convolution templates respectively; the calculation formulas are as follows:

        [ -1  0  +1 ]              [ -1  -2  -1 ]
   Gx = [ -2  0  +2 ] * A,    Gy = [  0   0   0 ] * A
        [ -1  0  +1 ]              [ +1  +2  +1 ]

Adding the squares of the horizontal and vertical results and taking the square root gives the edge gradient at the current pixel:

   G = sqrt(Gx² + Gy²)
3. Preparing the label data
In one example, the label data is produced with the standard unsharp mask (USM) sharpening method. USM sharpening is a good image sharpening method that sharpens image edges and quickly adjusts the contrast of edge details. It has three parameters: amount, the first parameter, controls the intensity of the sharpening effect; radius, the second parameter, specifies the sharpening radius and determines how many pixels around an edge pixel are affected by the sharpening (the higher the image's resolution, the larger the radius setting should be); and threshold, the third parameter, is the comparison value between adjacent pixels. These parameters determine how much a pixel's tone must differ from the pixels in the surrounding area for it to be regarded as an edge pixel and then sharpened by the USM filter.
In one example, the label data is obtained by sharpening the original data with the USM method; the parameters are set to threshold 3, radius 1, and amount 20%, giving the sharpened image used as the training label data. The settings of these three parameters in the USM algorithm are empirical values obtained through several experiments, and this combination of values is optimal; unsuitable settings lead to overfitting or underfitting.
4. Training the neural network with the sample data and label data to obtain the image super-resolution model
The training process is as follows: input the sample data into the super-resolution image generation neural network to obtain an output super-resolution image; compare the label data with the output super-resolution image, and if their similarity does not reach the preset standard, adjust the parameters of the super-resolution image generation neural network; then repeat the process of feeding sample data to the network, comparing the label data with the network's output image, and deciding from the comparison result whether to adjust the network's parameters.
If the comparison result shows that the similarity between the label data and the output super-resolution image reaches the preset standard, the parameters of the image super-resolution neural network are no longer adjusted, and the training is complete.
5. Using the trained super-resolution model to perform super-resolution computation on an input image to obtain a super-resolution image
In the trained super-resolution model, the parameters of the neural network have already been adjusted and perfected; it can then be used to satisfactorily complete the task of converting low-resolution images into super-resolution images.
FIG. 3 is a schematic diagram of the image super-resolution reconstruction device. As shown in FIG. 3, the device includes: a super-resolution image generation neural network module 101 for obtaining a super-resolution image from the input low-resolution image; a sample data generation module 103 for performing low-resolution processing on the original image to obtain training sample data for the super-resolution image generation neural network module 101; and a label data generation module 105 for sharpening the original image to obtain training label data for the super-resolution image generation neural network module 101. The super-resolution image generation neural network module 101 trains on the sample data and the label data to obtain a mature super-resolution image generation neural network, which is used to compute on the input image and obtain its super-resolution image.
Preferably, the super-resolution image generation neural network module 101 includes multiple convolution calculation sub-units, at least some of which use narrow convolution.
In one example, a 6-layer super-resolution image generation neural network is established; its input data is 4-channel data composed of the color image plus a Sobel channel. Convolution with a 5X5 template yields a first convolutional layer with 64 features; convolution with a 3X3 template yields a second layer with 128 features; convolution with a 7X7 template yields a third layer with 32 features; convolution with a 3X3 template yields a fourth layer with 16 features; convolution with a 3X3 template yields a fifth layer with 8 features; and a final convolution with a 3X3 template yields a sixth layer with 3 features, from which the super-resolution result is obtained.
Specifically, the convolution calculation subunit of each layer of the super-resolution image generation neural network undergoes LRN local response normalization, computed as follows:
b^i_{x,y} = a^i_{x,y} / ( k + α Σ_{j=max(0, i−n/2)}^{min(N−1, i+n/2)} (a^j_{x,y})² )^β
where b^i_{x,y} is the normalized value, i is the input channel, and x and y are the current pixel position; a^i_{x,y} is the input value, i.e. the output value of the neuron activation function; k, α, β, and n/2 are all user-defined coefficients; and N is the total number of channels.
In the convolution calculation of each layer of the super-resolution image generation neural network, a narrow convolution strategy is used, so the image becomes smaller after each convolution layer.
Preferably, the sample data generation module 103 further includes: a sub-unit for sub-sampling the original image to obtain a low-resolution image; a sub-unit for applying Sobel operator filtering to the low-resolution image to obtain the Sobel operator edge image; and a sub-unit for interpolating the 4 bands formed by the red, green, and blue bands of the low-resolution image plus the Sobel operator edge image, obtaining an image with the same resolution as the original image as sample data.
Preferably, in the label data generation module 105, USM sharpening is adopted; the parameters used in the USM sharpening are set as follows: threshold 3, radius 1-1.2, and amount 20%-25%.
下面从实际图片效果调度,展示本申请实施例方法步骤中和结果中的一些图片变化和效果比对。
使用本申请实施例提供的方法,在一个例子中得到的实际效果和数据 对比,如下:
图4是为原始数据和亚采样后再插值得到的样本数据对比图,图左部分为原始数据图片,图右部分为亚采样后再插值得到的样本数据,通过对比可以发现原始数据比样本实验数据的分辨率高,图像更加清晰,纹理更加细腻,而经过采样再差值的样本数据明显模糊,边缘不清晰,这样的数据可以用来做训练模型的样本数据。
图5是为RGB通道作为输入数据、Sobel边缘+RGB通道作为输入数据、标签数据锐化后的超分辨率结果对比图。
在改变输入数据和标签数据,而不改变图2的网络模型前提下,在本实施例中,设计了三种具体的模型,一种是输入数据为红、绿和蓝三个波段的样本数据,标签数据为原始数据构建的网络模型;第二种是输入数据为红、绿和蓝三个波段加Sobel边缘算子共4个波段作为输入数据,标签数据为原始数据构建的网络模型;第三种是输入数据为红、绿和蓝三个波段加Sobel边缘算子共4个波段作为输入数据,标签数据为USM锐化数据构建的网络模型。
For evaluating image super-resolution results, the original image must be compared with the results, and the results of different methods must be compared with one another. The evaluation uses subjective visual comparison together with several common quantitative metrics. A super-resolved image may contain regional differences such as false edges, noise and blur, which can make the quantitative metrics somewhat non-objective; subjective visual evaluation is therefore important and must not be neglected, and qualitative evaluation takes precedence over quantitative metric evaluation.
Three models were obtained through training, and each was used to perform super-resolution computation on the data; the results are shown in the upper-left, upper-right and lower parts of the figure. Comparing the subsampled sample data with the super-resolution results, all three results show a clear improvement in resolution and sharpness over the sample data, which shows that the models of the embodiments of the present application are effective. Comparing the original data with the super-resolution results without magnification, the images differ very little, with hardly any visible difference, which shows that the super-resolution model of the embodiments of the present application is a good convolutional neural network model that improves image resolution and sharpness, and that the model achieves the goal of increasing image resolution.
To further analyze qualitatively whether the model's super-resolved images exhibit overfitting, the images need to be magnified for a more detailed comparison, as shown in FIG. 6. After magnification, the comparison shows that the model trained only on RGB color images produces results with clear false edges; thus, even though the LRN technique for reducing overfitting is used in the convolutional neural network model, the result obtained with only three-band color images as training sample data is still unsatisfactory. With the Sobel edge operator added to the training data, experimental verification of the trained super-resolution model shows no obvious false edges; likewise, the model trained with Sobel-augmented input images and USM-sharpened label images produces super-resolution results with no overfitting. This demonstrates that adding an edge-detection operator to the training sample data effectively improves the quality of the super-resolved images.
The different super-resolution results are then compared with the original data under magnification to analyze image sharpness, as shown in FIG. 7. None of the super-resolution results is as sharp as the original data, whose edges are more distinct; in other words, by principle and by common sense, super-resolution results cannot reach the visual quality of the original data, and their edges are not as sharp. Comparing the RGB-only result, the Sobel + RGB result and the sharpened-label-data result, the sharpened-label-data result has better edge features than the other two methods, which shows that super-resolution with sharpened label data is the better approach, and one that can to a large extent eliminate blur in super-resolved images. From the comparison among the three results and against the original data, the model trained with sharpened label data yields the super-resolution result with the sharpest edges, closest to the original data, making it the best of the three.
The effect of the method of the embodiments of the present application is compared below in terms of quantitative metrics.
Generally speaking, image quality is evaluated with statistical metrics, including sharpness, peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and root-mean-square error (RMSE). The super-resolved images obtained with the different methods were evaluated quantitatively; the results are shown in Table 1.
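Two of the metrics named above, RMSE and PSNR, can be sketched directly for 8-bit images; SSIM is more involved and is omitted here (library implementations such as skimage.metrics.structural_similarity are typically used instead).

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two images; lower is closer."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less noise relative
    to the peak signal value."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)
```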
Table 1: Quantitative metrics of the super-resolved images of the different methods
(Table 1 is provided as an image in the original publication and is not reproduced in this text extraction.)
Analysis of this table shows that the results obtained by super-resolution reconstruction with the network model trained on sharpened label data are better than those of the other two methods on all four evaluation metrics. Because the label data are the sharpened samples, it is easy to understand that the sharpness of this result exceeds that of the other two methods. The root-mean-square error measures the difference between the super-resolution result and the original label data; the smaller the value, the closer the result is to the original label image. Comparing the RMSE values of the methods, the result of the sharpened-label-data method is closer to the original data and sharper, while the other two methods differ more from the original data and are blurrier, which demonstrates the superiority of the scheme of the embodiments of the present application. PSNR is a statistic comparing signal strength with background-noise strength; the comparison shows that the sharpened-label super-resolution result of the present scheme has the largest PSNR, meaning the scheme suppresses the generation of noise while increasing the amount of information. SSIM evaluates the quality of a super-resolved image in terms of similarity of image brightness, image contrast and image structure; comparing the SSIM of the three methods' result images shows that the sharpened-label super-resolution method of the present scheme outperforms the other two methods.
Comparing, on these four metrics, the super-resolution result with the red, green and blue bands as input data against the result with the four bands of red, green and blue plus the Sobel operator as input data, the four-band sample-data model is better on all metrics except sharpness. The higher sharpness of the RGB-only result is due to the false edges caused by overfitting, not to genuinely better image quality. This comparison also confirms that the super-resolution result with the edge operator involved is better.
The above embodiments show that the image super-resolution reconstruction method and device with sharpened label data disclosed in the present application have the following advantages: sharpening the label data makes the label data's edges sharper and the ground objects clearer; a model is then obtained by training the deep neural network, and super-resolution processing of images with the model yields higher-resolution images with clear textures, markedly improved resolution and sharpness, and better image-quality evaluation metrics.
The embodiments in this specification are described in a progressive manner; for identical or similar parts of the embodiments, reference may be made between them, and each embodiment focuses on its differences from the others. In particular, since the system embodiment is substantially similar to the method embodiment, its description is relatively brief, and for the relevant parts reference may be made to the description of the method embodiment.
According to an embodiment of yet another aspect, a computing device is further provided, including a memory and a processor, the memory storing executable code, and the processor, when executing the executable code, implementing the method described above.
Specific embodiments of this specification have been described above. Other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.
Those of ordinary skill in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of function. Whether these functions are executed in hardware or software depends on the particular application and design constraints of the technical solution. Those of ordinary skill in the art may use different approaches to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The specific embodiments described above further explain in detail the objectives, technical solutions and beneficial effects of the present application. It should be understood that the above are merely specific embodiments of the present application and are not intended to limit its scope of protection; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within its scope of protection.

Claims (10)

  1. An image super-resolution reconstruction method, comprising:
    feeding sample data into a pre-established super-resolution image generation neural network, the sample data being obtained by performing low-resolution processing on an original image;
    the super-resolution image generation neural network extracting image features from the low-resolution image input as sample data, and then reconstructing an image from the extracted image features to obtain a super-resolution image;
    if the similarity between the super-resolution image and label data does not reach a preset standard, adjusting the parameters of the super-resolution image generation neural network, wherein the label data are obtained by sharpening the original image;
    feeding a low-resolution image into the trained super-resolution image generation neural network to obtain a super-resolution image.
  2. The image super-resolution reconstruction method according to claim 1, wherein the super-resolution image generation neural network includes multiple layers of convolution computation, at least some of which use narrow convolution.
  3. The image super-resolution reconstruction method according to claim 1, wherein the super-resolution image generation neural network includes multiple layers of convolution computation, and every layer of convolution computation uses LRN local response normalization.
  4. The image super-resolution reconstruction method according to claim 1, wherein the method comprises:
    subsampling the original image to obtain a low-resolution image;
    filtering the low-resolution image with the Sobel operator to obtain a Sobel edge image;
    interpolating the four bands of data, namely the red, green and blue bands of the low-resolution image plus the Sobel edge image, to obtain an image with the same resolution as the original image as the sample data.
  5. The image super-resolution reconstruction method according to claim 1, wherein the method comprises:
    obtaining the label data by performing a sharpening computation on the original image, the sharpening method being USM sharpening; the USM parameters being set as follows: threshold 3, radius 1 to 1.2, and amount 20% to 25%.
  6. An image super-resolution reconstruction device, comprising:
    a sample data generation module, configured to perform low-resolution processing on an original image to obtain training sample data for a super-resolution image generation neural network module;
    a label data generation module, configured to sharpen the original image to obtain training label data for the super-resolution image generation neural network module;
    the super-resolution image generation neural network module, trained on the sample data and the label data; the trained super-resolution image generation neural network module being configured to compute, from an input image, a super-resolution image of the input image.
  7. The image super-resolution reconstruction device according to claim 6, wherein the super-resolution image generation neural network includes multiple convolution computation subunits, at least some of which use narrow convolution.
  8. The image super-resolution reconstruction device according to claim 6, wherein the super-resolution image generation neural network includes multiple convolution computation subunits, and every layer's convolution computation subunit uses LRN local response normalization.
  9. The image super-resolution reconstruction device according to claim 6, wherein the device comprises:
    a subunit that subsamples the original image to obtain a low-resolution image;
    a subunit that filters the resulting low-resolution image with the Sobel operator to obtain a Sobel edge image;
    a subunit that interpolates the four bands of data, namely the red, green and blue bands of the low-resolution image plus the Sobel edge image, to obtain an image with the same resolution as the original image as the sample data.
  10. The image super-resolution reconstruction device according to claim 6, wherein the label data generation module uses USM sharpening, the USM parameters being set as follows: threshold 3, radius 1 to 1.2, and amount 20% to 25%.
PCT/CN2020/114881 2019-09-17 2020-09-11 Image super-resolution reconstruction method and device featuring labeled data sharpening WO2021052261A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/636,044 US20220335573A1 (en) 2019-09-17 2020-09-11 Image super-resolution reconstruction method and device featuring labeled data sharpening

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910879915.7A CN110706166B (zh) 2019-09-17 Image super-resolution reconstruction method and device featuring labeled data sharpening
CN201910879915.7 2019-09-17

Publications (1)

Publication Number Publication Date
WO2021052261A1 true WO2021052261A1 (zh) 2021-03-25

Family

ID=69194545

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/114881 WO2021052261A1 (zh) 2019-09-17 2020-09-11 一种锐化标签数据的图像超分辨率重建方法及装置

Country Status (3)

Country Link
US (1) US20220335573A1 (zh)
CN (1) CN110706166B (zh)
WO (1) WO2021052261A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116796818A (zh) * 2022-03-15 2023-09-22 生物岛实验室 Model training method, apparatus, device, storage medium and program product

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110706166B (zh) * 2019-09-17 2022-03-18 Aerospace Information Research Institute, Chinese Academy of Sciences Image super-resolution reconstruction method and device featuring labeled data sharpening
CN111476285B (zh) * 2020-04-01 2023-07-28 深圳力维智联技术有限公司 Training method for an image classification model, image classification method, and storage medium
CN111929723B (zh) * 2020-07-15 2023-03-14 Tsinghua University Velocity model super-resolution method under seismic data constraints based on multi-task learning
CN113269676B (zh) * 2021-05-19 2023-01-10 Beihang University Panoramic image processing method and device
CN113362384A (zh) * 2021-06-18 2021-09-07 安徽理工大学环境友好材料与职业健康研究院(芜湖) High-precision industrial part measurement algorithm using a multi-channel sub-pixel convolutional neural network
CN113327219B (zh) * 2021-06-21 2022-01-28 易成功(厦门)信息科技有限公司 Image processing method and system based on multi-source data fusion
CN115564644B (zh) * 2022-01-10 2023-07-25 Honor Device Co., Ltd. Image data processing method, related devices, and computer storage medium
CN115065867B (zh) * 2022-08-17 2022-11-11 Aerospace Information Research Institute, Chinese Academy of Sciences Dynamic processing method and device based on an unmanned aerial vehicle video pyramid model
CN115578263B (zh) * 2022-11-16 2023-03-10 Zhejiang Lab CT super-resolution reconstruction method, system and device based on a generative network
CN117496225A (zh) * 2023-10-17 2024-02-02 Nanchang University Image data forensics method and system
CN117541460A (zh) * 2023-12-06 2024-02-09 Changchun University of Science and Technology Blind super-resolution method and device for infrared images

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100086227A1 (en) * 2008-10-04 2010-04-08 Microsoft Corporation Image super-resolution using gradient profile prior
CN106600553A (zh) * 2016-12-15 2017-04-26 Huazhong University of Science and Technology DEM super-resolution method based on a convolutional neural network
CN108550115A (zh) * 2018-04-25 2018-09-18 China University of Mining and Technology Image super-resolution reconstruction method
CN109035142A (zh) * 2018-07-16 2018-12-18 Xi'an Jiaotong University Satellite image super-resolution method combining an adversarial network with aerial image priors
CN109727207A (zh) * 2018-12-06 2019-05-07 South China University of Technology Hyperspectral image sharpening method based on a spectral-prediction residual convolutional neural network
CN110706166A (zh) * 2019-09-17 2020-01-17 中国科学院遥感与数字地球研究所 Image super-resolution reconstruction method and device featuring labeled data sharpening

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109102469B (zh) * 2018-07-04 2021-12-21 South China University of Technology Remote sensing image pan-sharpening method based on a convolutional neural network

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116796818A (zh) * 2022-03-15 2023-09-22 生物岛实验室 Model training method, apparatus, device, storage medium and program product
CN116796818B (zh) * 2022-03-15 2024-05-10 生物岛实验室 Model training method, apparatus, device, storage medium and program product

Also Published As

Publication number Publication date
US20220335573A1 (en) 2022-10-20
CN110706166A (zh) 2020-01-17
CN110706166B (zh) 2022-03-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20866806

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20866806

Country of ref document: EP

Kind code of ref document: A1