CN109191376B - High-resolution terahertz image reconstruction method based on improved SRCNN model - Google Patents

High-resolution terahertz image reconstruction method based on improved SRCNN model

Info

Publication number
CN109191376B
CN109191376B · CN201810806760.XA · CN201810806760A
Authority
CN
China
Prior art keywords
layer
image
srcnn
model
improved
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810806760.XA
Other languages
Chinese (zh)
Other versions
CN109191376A (en)
Inventor
羊恺
袁一丹
顾岩
任向阳
陈鑫
李镇
陈新蕊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201810806760.XA priority Critical patent/CN109191376B/en
Publication of CN109191376A publication Critical patent/CN109191376A/en
Application granted granted Critical
Publication of CN109191376B publication Critical patent/CN109191376B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the present invention discloses a high-resolution terahertz image reconstruction method based on an improved SRCNN model, comprising: constructing, on the basis of the SRCNN model structure, an improved SRCNN model structure with two feature-extraction layers, performing interpolation up-scaling after the two-layer feature extraction, and then adding pooling; passing each layer of the improved SRCNN model structure in turn through a third convolutional layer, the third convolution being regarded as the nonlinear mapping of a fully connected layer; and reconstructing the final high-resolution image through a fourth layer. The invention addresses the long training time of the existing SRCNN model and the low resolution of its reconstructed images: the weights and other parameters are modified iteratively inside the model to obtain the final model and optimal parameters, until the loss value becomes very small. Training the improved four-layer convolutional SRCNN model designed in the invention raises the PSNR by 2.4 dB compared with the result of the prior-art SRCNN model.

Description

High-resolution terahertz image reconstruction method based on an improved SRCNN model

Technical Field

The present invention relates to the technical field of image data processing, and in particular to a high-resolution terahertz image reconstruction method based on an improved SRCNN model.

Background Art

In the prior art, there are two main approaches to the quality problems of terahertz images: (1) start at the source of terahertz imaging, improving the imaging resolution of the terahertz imaging equipment and reducing the noise introduced during imaging; (2) start from the acquired terahertz image, optimizing and improving the algorithms used to process it.

The advantage of the first approach is that it addresses the root of the problem, but the equipment cost is high and the cost-performance ratio is poor. The second approach uses digital image processing algorithms to realize the processing and application of terahertz images, but suffers from the low clarity of terahertz imaging.

The SRCNN model, designed by Tang Xiaoou's team, performs image reconstruction with a convolutional neural network. It uses only a three-layer network structure, whose layers correspond to the steps of traditional sparse-representation image reconstruction. However, the clarity of images processed by the existing SRCNN model is insufficient.

Therefore, the prior art needs to be improved.

Summary of the Invention

A technical problem to be solved by the embodiments of the present invention is to provide a high-resolution terahertz image reconstruction method based on an improved SRCNN model, so as to solve the problems of the prior art. The method comprises:

constructing, on the basis of the SRCNN model structure, an improved SRCNN model structure with two feature-extraction layers, performing interpolation up-scaling after the two-layer feature extraction, and then performing pooling;

passing each layer of the improved SRCNN model structure in turn through a third convolutional layer, which is regarded as the nonlinear mapping of a fully connected layer;

reconstructing the final high-resolution image through a fourth layer.

In another embodiment of the above high-resolution terahertz image reconstruction method based on an improved SRCNN model, constructing the improved SRCNN model structure with two feature-extraction layers on the basis of the SRCNN model structure, performing interpolation up-scaling after the two-layer feature extraction, and then adding pooling comprises:

setting the input image of the up-scaled target as Y and the original image as X; the first convolutional layer computes the output image F1 as shown in formula (1):

F1(Y) = max(0, W1 * Y + B1)    (1)

where B1 denotes the weight bias, * is the convolution operation, and W1 denotes n1 filters of size c × f1 × f2, c being the number of image channels (3 for a color image, 1 for a grayscale image); B1 denotes n1 bias terms;

the second convolutional layer maps the n1 feature maps generated by the first layer into n2 feature maps, as given by formula (2):

F2(Y) = max(0, W2 * F1(Y) + B2)    (2)

where B2 denotes the weight bias, * denotes the convolution operation, W2 denotes n2 filters of size n1 × f2 × f2, and B2 denotes n2 bias terms;

the third layer performs image reconstruction; the reconstruction layer is established with formula (3):

F(Y) = W3 * F2(Y) + B3    (3)

where W3 denotes c filters of size n2 × f3 × f3 and B3 denotes c bias terms.

In another embodiment of the above high-resolution terahertz image reconstruction method based on an improved SRCNN model, in formula (1), where the input image of the up-scaled target is Y, X is the original image, and the first convolutional layer computes the output image F1, the parameters are f1 = 9 and n1 = 64.

In another embodiment, in formula (2), where the second convolutional layer maps the n1 feature maps generated by the first layer into n2 feature maps, the parameters are f2 = 1 and n2 = 32.

In another embodiment, in formula (3), where the third layer performs image reconstruction and the reconstruction layer is established, the parameters are c = 3 and f3 = 5.

In another embodiment of the above high-resolution terahertz image reconstruction method based on an improved SRCNN model, reconstructing the final high-resolution image through the fourth layer comprises:

converting the original color image into a grayscale image, and producing a low-resolution image by shrinking and then enlarging the original image;

segmenting the images in the training set with overlap according to a set stride to obtain multiple sub-images;

converting the training set and the test set into the H5py format for storage;

importing the necessary toolkits and the data sets in TensorFlow;

defining an initialization function that adds random noise to the weights to eliminate complete symmetry, and setting a standard deviation;

defining a loss function to evaluate the training result, selecting Adam as the optimizer and giving it a very small learning rate;

starting the training process: initializing all parameters, setting the batch_size, the number of iterations and the training loss output every set number of cycles, and evaluating the performance of the model in real time;

reconstructing the sub-images into a whole image, which serves as the reconstructed high-resolution output image.

In another embodiment of the above high-resolution terahertz image reconstruction method based on an improved SRCNN model, the set standard deviation is 0.01.

In another embodiment, the very small learning rate is 0.000001.

In another embodiment, the batch_size is 128, the number of iterations is 100000, and the set number of cycles is 1000.

Compared with the prior art, the present invention has the following advantages:

The present invention proposes a high-resolution terahertz image reconstruction method based on an improved SRCNN model, which addresses the large amount of time the existing SRCNN model needs for training. The weights and other parameters are modified iteratively inside the model to obtain the final model and optimal parameters, until the loss value becomes very small. After training, the improved four-layer convolutional SRCNN model designed in the present invention raises the PSNR by 2.4 dB compared with the prior-art SRCNN model.
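As a point of reference for the 2.4 dB figure, the PSNR between a reconstructed image and its ground truth is commonly computed as below. This Python sketch is illustrative only and is not part of the patented method; the helper name psnr and the 8-bit peak value of 255 are assumptions.

import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    # Peak signal-to-noise ratio in dB between two images of the same shape.
    reference = np.asarray(reference, dtype=np.float64)
    reconstructed = np.asarray(reconstructed, dtype=np.float64)
    mse = np.mean((reference - reconstructed) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)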

The technical solutions of the present invention are described in further detail below with reference to the accompanying drawings and embodiments.

Brief Description of the Drawings

The accompanying drawings, which constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.

With reference to the accompanying drawings, the present invention can be understood more clearly from the following detailed description, in which:

Fig. 1 is a flowchart of an embodiment of the high-resolution terahertz image reconstruction method based on an improved SRCNN model of the present invention;

Fig. 2 is a flowchart of another embodiment of the high-resolution terahertz image reconstruction method based on an improved SRCNN model of the present invention.

Detailed Description of the Embodiments

Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that, unless specifically stated otherwise, the relative arrangement of components and steps, the numerical expressions and the numerical values set forth in these embodiments do not limit the scope of the present invention.

It should also be understood that, for convenience of description, the dimensions of the parts shown in the drawings are not drawn to actual scale.

The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the invention, its application or its uses.

Techniques, methods and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but, where appropriate, such techniques, methods and devices should be regarded as part of the specification.

As shown in Fig. 1, the high-resolution terahertz image reconstruction method based on an improved SRCNN model comprises:

10. Based on the SRCNN model structure, an improved SRCNN model structure with two feature-extraction layers is constructed; interpolation up-scaling is performed after the two-layer feature extraction, followed by pooling. The number of layers is increased because the single feature-extraction layer of the prior art introduces noise and extracts information incompletely, so the present invention uses two feature-extraction layers; and, unlike the SRCNN model before improvement, which performs no interpolation up-scaling, performing the up-scaling before the pooling improves the quality and resolution of the image.

20. Each layer of the improved SRCNN model structure then passes in turn through a third convolutional layer, which is regarded as the nonlinear mapping of a fully connected layer.

30. The final high-resolution image is reconstructed through a fourth layer.

The improved SRCNN model shows the end-to-end mapping more intuitively, and no conventional pooling layer is added to it. A pooling layer normally reduces the size of the image, which reduces the number of parameters and the amount of computation and speeds up training; but since the present invention mainly aims at high-resolution reconstruction, shrinking the image through pooling could cause the loss of its main information. The improved SRCNN model of the present invention therefore performs interpolation up-scaling after the two-layer feature extraction and only then adds pooling; the pooling added here is not the image-shrinking pooling of the prior art, but on the contrary serves to enlarge the image, so that the size of the final output image is as close as possible to that of the input image and the loss of the image's main information is avoided.
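A minimal sketch of one possible reading of this structure is given below, using the TensorFlow 1.x style that appears later in this document. The kernel sizes and channel widths follow the weight shapes W1 to W4 defined in the training code (9×9×3 to 64, 3×3×64 to 32, 1×1×32 to 16, 5×5×16 to 3), bicubic interpolation stands in for the interpolation up-scaling step, and the enlarging "pooling" described above is folded into that resize; the function name improved_srcnn and the scale argument are illustrative assumptions, not the authoritative implementation.

import tensorflow as tf

def improved_srcnn(low_res, scale=3):
    # Two-layer feature extraction with ReLU convolutions.
    f1 = tf.layers.conv2d(low_res, 64, 9, padding='same', activation=tf.nn.relu)
    f2 = tf.layers.conv2d(f1, 32, 3, padding='same', activation=tf.nn.relu)
    # Interpolation up-scaling after feature extraction (stands in for the
    # interpolation-plus-enlarging-pooling step described above).
    new_size = [tf.shape(f2)[1] * scale, tf.shape(f2)[2] * scale]
    up = tf.image.resize_bicubic(f2, new_size)
    # Third layer: nonlinear mapping, regarded as a fully connected mapping.
    f3 = tf.layers.conv2d(up, 16, 1, padding='same', activation=tf.nn.relu)
    # Fourth layer: reconstruction of the high-resolution image (3 channels).
    return tf.layers.conv2d(f3, 3, 5, padding='same')

In the embodiment detailed below, by contrast, the bicubic up-scaling is applied to the whole image before it enters the network (step 101), so the four convolutions operate directly on 33×33 patches of the pre-enlarged input.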

Constructing, on the basis of the SRCNN model structure, the improved SRCNN model structure with two feature-extraction layers, performing interpolation up-scaling after the two-layer feature extraction, and adding pooling comprises:

setting the input image of the up-scaled target as Y and the original image as X; the first convolutional layer computes the output image F1 as shown in formula (1):

F1(Y) = max(0, W1 * Y + B1)    (1)

where W1 and B1 denote the filters and the weight bias, * is the convolution operation, W1 denotes n1 filters of size c × f1 × f2, c being the number of image channels (3 for a color image, 1 for a grayscale image), and B1 denotes n1 bias terms;

the second convolutional layer maps the n1 feature maps generated by the first layer into n2 feature maps, as given by formula (2):

F2(Y) = max(0, W2 * F1(Y) + B2)    (2)

where W2 and B2 denote the filters and the weight bias, * denotes the convolution operation, W2 denotes n2 filters of size n1 × f2 × f2, and B2 denotes n2 bias terms;

the third layer performs image reconstruction; the reconstruction layer is established with formula (3):

F(Y) = W3 * F2(Y) + B3    (3)

where W3 denotes c filters of size n2 × f3 × f3 and B3 denotes c bias terms.

For the first convolutional layer in formula (1), with the up-scaled input image Y and the original image X, the parameters are f1 = 9 and n1 = 64.

For the second convolutional layer in formula (2), which maps the n1 feature maps generated by the first layer into n2 feature maps, the parameters are f2 = 1 and n2 = 32.

For the reconstruction layer established with formula (3) in the third layer, the parameters are c = 3 and f3 = 5.

The relevant initial parameters used in the following embodiment include: the data set contains 91 training images and 19 test images, of which 5 are from the Set5 data set and 14 from the Set14 data set.

As shown in Fig. 2, reconstructing the final high-resolution image through the fourth layer comprises:

101. The original color image is converted into a grayscale image, and a low-resolution image is produced by shrinking and then enlarging the original image. The implementation is as follows:

image = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)
image = image[:, :, 0:3]
im_label = modcrop(image, scale)  # modcrop: helper that crops the image so its size is divisible by scale
(hei, wid, _) = im_label.shape
# Bicubic down-scaling followed by bicubic up-scaling produces the low-resolution input.
im_input = cv2.resize(im_label, (0, 0), fx=1.0 / scale, fy=1.0 / scale,
                      interpolation=cv2.INTER_CUBIC)
im_input = cv2.resize(im_input, (0, 0), fx=scale, fy=scale,
                      interpolation=cv2.INTER_CUBIC)

102. The images in the training set are segmented with overlap according to the set stride, yielding multiple sub-images. For example:

The 91 images in the training set are segmented with overlap using a stride of 14, and each sub-image is 33×33; this yields 21884 sub-images. The implementation is as follows:

for x in range(0, hei - size_input + 1, stride):
    for y in range(0, wid - size_input + 1, stride):
        subim_input = im_input[x:x + size_input, y:y + size_input, 0:3]
        subim_label = im_label[x + padding:x + padding + size_label,
                               y + padding:y + padding + size_label, 0:3]
        subim_input = subim_input.reshape([size_input, size_input, 3])
        subim_label = subim_label.reshape([size_label, size_label, 3])

103. The training set and the test set are converted into the H5py format for storage, which solves the problems of large memory footprint and slow reading. The implementation is as follows:

# Test set file.
with h5py.File(savepath, 'w') as hf:
    hf.create_dataset('test_input', data=im_input)
    hf.create_dataset('test_label', data=im_label)

# Training set file (savepath differs from the test file, as in step 104 below).
with h5py.File(savepath, 'w') as hf:
    hf.create_dataset('input', data=data)
    hf.create_dataset('label', data=label)

104. The necessary toolkits and the data sets are imported in TensorFlow. The implementation is as follows:

import tensorflow as tf
import h5py
import numpy as np
import matplotlib.pyplot as plt
import string
import cv2

with h5py.File('train_py.h5', 'r') as hf:
    hf_data = hf.get('input')
    data = np.array(hf_data)
    hf_label = hf.get('label')
    label = np.array(hf_label)

with h5py.File('test_py.h5', 'r') as hf:
    hf_test_data = hf.get('test_input')
    test_data = np.array(hf_test_data)
    hf_test_label = hf.get('test_label')
    test_label = np.array(hf_test_label)

105. An initialization function is defined that adds random noise to the weights to eliminate complete symmetry, and the standard deviation is set. Once the data and toolkits have been imported, the convolutional neural network can be implemented; it requires a large number of weights and biases. In a specific embodiment, the set standard deviation is 0.01. The weights and biases are set as follows:

def init_weights(shape):
    return tf.Variable(tf.random_normal(shape, stddev=0.01))

# Four convolutional layers: 9x9x3 -> 64, 3x3x64 -> 32, 1x1x32 -> 16, 5x5x16 -> 3.
W1 = init_weights([9, 9, 3, 64])
W2 = init_weights([3, 3, 64, 32])
W3 = init_weights([1, 1, 32, 16])
W4 = init_weights([5, 5, 16, 3])
B1 = tf.Variable(tf.zeros([64]), name="Bias1")
B2 = tf.Variable(tf.zeros([32]), name="Bias2")
B3 = tf.Variable(tf.zeros([16]), name="Bias3")
B4 = tf.Variable(tf.zeros([3]), name="Bias4")

tf.nn.conv2d is the convolution function in TensorFlow. Among its parameters, X is the input, W is the convolution kernel weights, strides is the stride with which the convolution template moves, and padding is the boundary handling mode: SAME pads the boundary with zeros so that the convolution output has the same size as the input, while VALID does not handle the boundary, so the image becomes smaller after convolution. Variables defined before the data are fed in can use placeholders in place of the input; the input is X and the attached label is Y. The implementation is as follows:

X = tf.placeholder("float32", [None, 33, 33, 3])
Y = tf.placeholder("float32", [None, 19, 19, 3])
L1 = tf.nn.relu(tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='VALID') + B1)
L2 = tf.nn.relu(tf.nn.conv2d(L1, W2, strides=[1, 1, 1, 1], padding='VALID') + B2)
L3 = tf.nn.relu(tf.nn.conv2d(L2, W3, strides=[1, 1, 1, 1], padding='VALID') + B3)
hypothesis = tf.nn.conv2d(L3, W4, strides=[1, 1, 1, 1], padding='VALID') + B4
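As a quick check of the placeholder shapes above, each VALID convolution removes k - 1 pixels per spatial dimension, so the kernel sizes 9, 3, 1 and 5 shrink a 33×33 input patch to the 19×19 label size; the small helper below (the name valid_out is ours, for illustration only) confirms the arithmetic.

def valid_out(size, kernel_sizes):
    # Each VALID convolution reduces the spatial size by (k - 1).
    for k in kernel_sizes:
        size = size - k + 1
    return size

print(valid_out(33, [9, 3, 1, 5]))  # prints 19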

106. A loss function is defined to evaluate the training result; Adam is selected as the optimizer and given a very small learning rate. In a specific embodiment, the very small learning rate is 0.000001. The implementation is as follows:

learning_rate = 0.000001  # the very small learning rate specified above
# Residual target: the label minus the centre 19x19 crop of the 33x33 input patch
# (the 7-pixel border is what the VALID convolutions trim away); in the original
# listing this crop appeared as subim_input.
X_center = X[:, 7:26, 7:26, :]
cost = tf.reduce_mean(tf.reduce_sum(tf.square((Y - X_center) - hypothesis),
                                    reduction_indices=0))
var_list = [W1, W2, W3, W4, B1, B2, B3, B4]
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost, var_list=var_list)

107. The training process starts: all parameters are initialized, and the batch_size, the number of iterations and the training loss output every set number of cycles are set; the performance of the model is evaluated in real time. In a specific embodiment, the batch_size is 128, the number of iterations is 100000, and the set number of cycles is 1000. The implementation is as follows:

# train_num (100000 iterations), epoch_number, step and the mini-batch generator
# `batch` (yielding (batch_data, batch_label) pairs of 128 samples) are assumed
# to have been defined beforehand.
with tf.Session() as sess:
    tf.initialize_all_variables().run()
    for i in range(train_num):
        batch_data, batch_label = batch.__next__()
        sess.run(optimizer, feed_dict={X: batch_data, Y: batch_label})
        step += 1
        if (epoch_number + step) % 1000 == 0:
            print_step = (epoch_number + step)
            epoch_cost_string = "[epoch]:" + str(print_step) + "[cost]:"
            current_cost_sum = 0.0
            mean_batch_size = int(data.shape[0] / 128)
            for j in range(0, mean_batch_size):
                current_cost_sum += sess.run(cost, feed_dict={
                    X: data[j].reshape(1, 33, 33, 3),
                    Y: label[j].reshape(1, 19, 19, 3)})
            epoch_cost_string += str(float(current_cost_sum / mean_batch_size))
            epoch_cost_string += "\n"
            print(epoch_cost_string)

108. The sub-images are reconstructed into a whole image, which serves as the reconstructed high-resolution output image. The implementation is as follows:

with tf.Session() as sess:
    if (epoch_number + step) % 1000 == 0:
        # Forward pass over the whole test image; SAME padding keeps the output
        # the same size as the input.
        test_L1 = tf.nn.relu(tf.nn.conv2d(test_data, W1, strides=[1, 1, 1, 1], padding='SAME') + B1)
        test_L2 = tf.nn.relu(tf.nn.conv2d(test_L1, W2, strides=[1, 1, 1, 1], padding='SAME') + B2)
        test_L3 = tf.nn.relu(tf.nn.conv2d(test_L2, W3, strides=[1, 1, 1, 1], padding='SAME') + B3)
        test_hypothesis = tf.nn.conv2d(test_L3, W4, strides=[1, 1, 1, 1], padding='SAME') + B4
        output_image = sess.run(test_hypothesis)[0, :, :, 0:3]
        # Add back the interpolated input (residual learning), then clip to [0, 1].
        output_image += test_data[0, :, :, 0:3]
        for k in range(0, test_data.shape[1]):
            for j in range(0, test_data.shape[2]):
                for c in range(0, 3):
                    if output_image[k, j, c] > 1.0:
                        output_image[k, j, c] = 1
                    elif output_image[k, j, c] < 0:
                        output_image[k, j, c] = 0
        temp_image = (output_image * 255).astype('uint8')
        temp_image = cv2.cvtColor(temp_image, cv2.COLOR_YCrCb2RGB)
        plt.imshow(temp_image)
        subname = "shot/" + str(epoch_number + step) + ".jpg"
        plt.savefig(subname)

The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts that are the same or similar, the embodiments may be referred to one another. Since the system embodiment substantially corresponds to the method embodiment, its description is relatively brief, and reference may be made to the relevant parts of the description of the method embodiment.

The description of the present invention has been presented for the purposes of illustration and description; it is not exhaustive, nor is the invention limited to the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described in order to better explain the principles and practical application of the invention, and to enable others of ordinary skill in the art to understand the invention and to design various embodiments, with various modifications, suited to particular uses.

Claims (8)

1. A high-resolution terahertz image reconstruction method based on an SRCNN improved model is characterized by comprising the following steps:
constructing an improved SRCNN model structure of double-layer feature extraction based on the model structure of the SRCNN, carrying out interpolation amplification after double-layer feature extraction, and then carrying out pooling;
each layer of the model structure of the improved SRCNN is subjected to the convolution of the third layer in sequence and is regarded as the nonlinear mapping of the full connection layer;
reconstructing a final high-resolution image through a fourth layer of convolution;
the reconstructing the final high resolution image through the fourth layer comprises:
converting an original color image into a gray image, and making a low-resolution image by reducing and amplifying an original image;
carrying out overlapping segmentation on a plurality of images in the training set according to a set step length to obtain a plurality of sub-pictures;
converting the training set and the test set into an H5py format for storage;
importing necessary tool packs and data sets in Tensorflow;
defining an initialization function, adding random noise to the weights to eliminate complete symmetry, and setting a standard deviation;
defining a loss function to evaluate the training result, selecting Adam as the optimizer and giving it a very small learning rate;
starting a training process: initializing all parameters, setting the batch_size, the number of iterations and the training output loss per set number of cycles, and evaluating the performance of the model in real time;
and reconstructing the sub-picture into a whole picture as a reconstructed high-resolution output picture.
2. The high-resolution terahertz image reconstruction method based on an improved SRCNN model as claimed in claim 1, wherein constructing the improved SRCNN model structure of double-layer feature extraction based on the model structure of the SRCNN, and performing interpolation amplification and then pooling after the double-layer feature extraction, comprises:
setting the input image of the amplified target as Y and X as the original image, the first convolutional layer calculating the output image F1 as shown in formula (1):
F1(Y) = max(0, W1 * Y + B1)    (1)
where B1 represents the weight bias, * is the convolution operation, W1 represents n1 filters of size c × f1 × f2, c being the number of channels of the image (3 for a color image, 1 for a grayscale image), and B1 represents n1 offsets;
the second convolutional layer mapping the n1 feature maps generated by the first layer into n2 feature maps, as given by formula (2):
F2(Y) = max(0, W2 * F1(Y) + B2)    (2)
where B2 represents the weight bias, * represents the convolution operation, W2 represents n2 filters of size n1 × f2 × f2, and B2 represents n2 offsets;
the third layer carrying out image reconstruction, a reconstruction layer being established with formula (3):
F(Y) = W3 * F2(Y) + B3    (3)
where W3 represents c filters of size n2 × f3 × f3 and B3 represents c offsets;
f1 = 9, f2 = 1, f3 = 5.
3. The high-resolution terahertz image reconstruction method based on an improved SRCNN model as claimed in claim 2, wherein the input image of the set target after amplification is Y, X is the original image, and the first convolutional layer calculates the output image F1 as in formula (1), with the parameter n1 = 64.
4. The high-resolution terahertz image reconstruction method based on an improved SRCNN model as claimed in claim 2, wherein the second convolutional layer maps the n1 feature maps generated by the first layer into n2 feature maps as given by formula (2), with the parameter n2 = 32.
5. The high-resolution terahertz image reconstruction method based on an improved SRCNN model as claimed in claim 2, wherein the third layer reconstructs the image, the reconstruction layer being established with formula (3), with the parameter c = 3.
6. The high-resolution terahertz image reconstruction method based on an improved SRCNN model as claimed in claim 1, wherein the set standard deviation is 0.01.
7. The high-resolution terahertz image reconstruction method based on an improved SRCNN model, wherein the very small learning rate given is 0.000001.
8. The high-resolution terahertz image reconstruction method based on an improved SRCNN model as claimed in claim 1, wherein the batch_size, the number of iterations and the training output loss per set number of cycles are respectively: the batch_size is 128, the number of iterations is 100000, and the set number of cycles is 1000.
CN201810806760.XA 2018-07-18 2018-07-18 High-resolution terahertz image reconstruction method based on improved SRCNN model Active CN109191376B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810806760.XA CN109191376B (en) 2018-07-18 2018-07-18 High-resolution terahertz image reconstruction method based on improved SRCNN model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810806760.XA CN109191376B (en) 2018-07-18 2018-07-18 High-resolution terahertz image reconstruction method based on improved SRCNN model

Publications (2)

Publication Number Publication Date
CN109191376A CN109191376A (en) 2019-01-11
CN109191376B true CN109191376B (en) 2022-11-25

Family

ID=64936961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810806760.XA Active CN109191376B (en) 2018-07-18 2018-07-18 High-resolution terahertz image reconstruction method based on improved SRCNN model

Country Status (1)

Country Link
CN (1) CN109191376B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109785237B (en) * 2019-01-25 2022-10-18 广东工业大学 Terahertz image super-resolution reconstruction method, system and related device
CN110033469B (en) * 2019-04-01 2021-08-27 北京科技大学 Sub-pixel edge detection method and system
CN110378891A (en) * 2019-07-24 2019-10-25 广东工业大学 A kind of hazardous material detection method, device and equipment based on terahertz image
CN110660020B (en) * 2019-08-15 2024-02-09 天津中科智能识别产业技术研究院有限公司 Image super-resolution method of antagonism generation network based on fusion mutual information
CN111784573A (en) * 2020-05-21 2020-10-16 昆明理工大学 A passive terahertz image super-resolution reconstruction method based on transfer learning
CN113935928B (en) * 2020-07-13 2023-04-11 四川大学 Rock core image super-resolution reconstruction based on Raw format
CN112308212A (en) * 2020-11-02 2021-02-02 佛山科学技术学院 A method and system for high-definition restoration of security images based on neural network
CN113706383A (en) * 2021-08-30 2021-11-26 上海亨临光电科技有限公司 Super-resolution method, system and device for terahertz image

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106600538A (en) * 2016-12-15 2017-04-26 武汉工程大学 Human face super-resolution algorithm based on regional depth convolution neural network
CN106910161A (en) * 2017-01-24 2017-06-30 华南理工大学 A kind of single image super resolution ratio reconstruction method based on depth convolutional neural networks
CN106952229A (en) * 2017-03-15 2017-07-14 桂林电子科技大学 Image super-resolution reconstruction method based on improved convolutional network with data augmentation
CN107133919A (en) * 2017-05-16 2017-09-05 西安电子科技大学 Time dimension video super-resolution method based on deep learning
CN107274358A (en) * 2017-05-23 2017-10-20 广东工业大学 Image Super-resolution recovery technology based on cGAN algorithms
CN107464216A (en) * 2017-08-03 2017-12-12 济南大学 A kind of medical image ultra-resolution ratio reconstructing method based on multilayer convolutional neural networks
CN107784628A (en) * 2017-10-18 2018-03-09 南京大学 A kind of super-resolution implementation method based on reconstruction optimization and deep neural network
CN108010049A (en) * 2017-11-09 2018-05-08 华南理工大学 Split the method in human hand region in stop-motion animation using full convolutional neural networks
WO2018086354A1 (en) * 2016-11-09 2018-05-17 京东方科技集团股份有限公司 Image upscaling system, training method therefor, and image upscaling method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10648924B2 (en) * 2016-01-04 2020-05-12 Kla-Tencor Corp. Generating high resolution images from low resolution images for semiconductor applications

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018086354A1 (en) * 2016-11-09 2018-05-17 京东方科技集团股份有限公司 Image upscaling system, training method therefor, and image upscaling method
CN106600538A (en) * 2016-12-15 2017-04-26 武汉工程大学 Human face super-resolution algorithm based on regional depth convolution neural network
CN106910161A (en) * 2017-01-24 2017-06-30 华南理工大学 A kind of single image super resolution ratio reconstruction method based on depth convolutional neural networks
CN106952229A (en) * 2017-03-15 2017-07-14 桂林电子科技大学 Image super-resolution reconstruction method based on improved convolutional network with data augmentation
CN107133919A (en) * 2017-05-16 2017-09-05 西安电子科技大学 Time dimension video super-resolution method based on deep learning
CN107274358A (en) * 2017-05-23 2017-10-20 广东工业大学 Image Super-resolution recovery technology based on cGAN algorithms
CN107464216A (en) * 2017-08-03 2017-12-12 济南大学 A kind of medical image ultra-resolution ratio reconstructing method based on multilayer convolutional neural networks
CN107784628A (en) * 2017-10-18 2018-03-09 南京大学 A kind of super-resolution implementation method based on reconstruction optimization and deep neural network
CN108010049A (en) * 2017-11-09 2018-05-08 华南理工大学 Split the method in human hand region in stop-motion animation using full convolutional neural networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
On Bayesian Adaptive Video Super Resolution; Ce Liu et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; February 2014; Vol. 36, No. 2; 346-360 *
Denoising and Enhancement of Terahertz Images; Xu Limin et al.; Infrared and Laser Engineering; October 2013; Vol. 42, No. 10; 2865-2870 *

Also Published As

Publication number Publication date
CN109191376A (en) 2019-01-11

Similar Documents

Publication Publication Date Title
CN109191376B (en) High-resolution terahertz image reconstruction method based on improved SRCNN model
CN109903228B (en) Image super-resolution reconstruction method based on convolutional neural network
CN109509152B (en) Image super-resolution reconstruction method for generating countermeasure network based on feature fusion
CN113014927B (en) Image compression method and image compression device
CN110163801B (en) A kind of image super-resolution and coloring method, system and electronic device
CN111047515A (en) Cavity convolution neural network image super-resolution reconstruction method based on attention mechanism
CN112215755B (en) A method for image super-resolution reconstruction based on back-projection attention network
CN110136062B (en) A Super-Resolution Reconstruction Method for Joint Semantic Segmentation
CN112270644A (en) Face super-resolution method based on spatial feature transformation and cross-scale feature integration
CN110111251B (en) Image super-resolution reconstruction method combining depth supervision self-coding and perception iterative back projection
Wei et al. Deep unfolding with normalizing flow priors for inverse problems
CN102915527A (en) Face image super-resolution reconstruction method based on morphological component analysis
CN107392852A (en) Super resolution ratio reconstruction method, device, equipment and the storage medium of depth image
CN107784628A (en) A kind of super-resolution implementation method based on reconstruction optimization and deep neural network
CN109003229A (en) Magnetic resonance super resolution ratio reconstruction method based on three-dimensional enhancing depth residual error network
CN115393186A (en) Face image super-resolution reconstruction method, system, device and medium
Guo et al. Deep learning based image super-resolution with coupled backpropagation
CN114092834A (en) Unsupervised hyperspectral image blind fusion method and system based on space-spectrum combined residual correction network
CN110473151B (en) Two-stage image completion method and system based on partitioned convolution and association loss
WO2024221696A1 (en) Method for generating image super-resolution dataset, image super-resolution model, and training method
CN112819716B (en) Unsupervised Learning X-ray Image Enhancement Method Based on Gaussian-Laplacian Pyramid
CN111861886A (en) An image super-resolution reconstruction method based on multi-scale feedback network
CN106447609A (en) Image super-resolution method based on depth convolutional neural network
CN114862679A (en) Single image super-resolution reconstruction method based on residual generative adversarial network
US20240362747A1 (en) Methods for generating image super-resolution data set, image super-resolution model and training method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant