CN109191376B - High-resolution terahertz image reconstruction method based on SRCNN improved model
- Publication number: CN109191376B
- Application number: CN201810806760.XA
- Authority: CN (China)
- Legal status: Active
Classifications
- G06T3/4053: Geometric image transformations; scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06N3/045: Computing arrangements based on neural networks; combinations of networks
- G06N3/08: Computing arrangements based on neural networks; learning methods
Abstract
The embodiment of the invention discloses a high-resolution terahertz image reconstruction method based on an improved SRCNN model, which comprises the following steps: constructing an improved SRCNN model structure with double-layer feature extraction based on the model structure of the SRCNN, performing interpolation amplification after the double-layer feature extraction, and then performing pooling; passing the output of the preceding layers of the improved SRCNN model structure in turn through a third convolution layer, whose output is regarded as the nonlinear mapping of a fully connected layer; and reconstructing the final high-resolution image through a fourth layer. The invention addresses the problems that the existing SRCNN model requires a large amount of training time and that the resolution of the reconstructed image is low; the final model and the optimal parameters are obtained by continuously adjusting the weights and other parameters in the model until the loss value becomes very small. Compared with the result produced by the prior-art SRCNN model, the improved four-layer convolutional neural network SRCNN model designed by the invention improves the PSNR (peak signal-to-noise ratio) by 2.4 dB.
Description
Technical Field
The invention relates to the technical field of image data processing, in particular to a high-resolution terahertz image reconstruction method based on an SRCNN improved model.
Background
In the prior art, there are two main approaches to the quality problem of terahertz images: (1) addressing the problem at the source of terahertz imaging, by improving the imaging resolution of the terahertz imaging equipment and reducing noise in the imaging process; (2) starting from the acquired terahertz image, by optimizing and improving the algorithms that process the terahertz image.
The first approach has the advantage of tackling the root of the problem, but the equipment is expensive and the cost performance is low; the second approach uses digital image processing algorithms to realize the relevant processing and application of terahertz images, but still suffers from low definition of the resulting terahertz images.
The SRCNN model, proposed by Dong et al., realizes image reconstruction based on a convolutional neural network; it uses only a three-layer neural network structure and represents the steps of traditional sparse-representation-based image reconstruction. However, images processed by the existing SRCNN model still lack sufficient definition.
Accordingly, there is a need in the art for improvements.
Disclosure of Invention
The technical problem to be solved by the embodiments of the invention is as follows: a high-resolution terahertz image reconstruction method based on an improved SRCNN model is provided to solve the above problems in the prior art, and the method comprises the following steps:
constructing an improved SRCNN model structure with double-layer feature extraction based on the model structure of the SRCNN, performing interpolation amplification after the double-layer feature extraction, and then performing pooling;
passing the output of the preceding layers of the improved SRCNN model structure in turn through the convolution of a third layer, and regarding the result as the nonlinear mapping of a fully connected layer;
and reconstructing the final high-resolution image through a fourth layer.
In another embodiment of the high-resolution terahertz image reconstruction method based on the above improved SRCNN model, constructing the improved SRCNN model structure with double-layer feature extraction based on the SRCNN model structure, performing interpolation amplification after the double-layer feature extraction, and then performing pooling includes:
setting the amplified input image of the target as Y and the original image as X, and calculating the output F1 by the first convolution layer as shown in formula (1):
F1(Y) = max(0, W1*Y + B1)   (1)
where * denotes the convolution operation, W1 represents n1 filters of size c×f1×f1, c is the number of channels of the image (3 for a color image, 1 for a grayscale image), and B1 represents n1 bias terms;
the second convolution layer maps the n1 feature maps generated by the first layer into n2 feature maps, and the calculation is given by formula (2):
F2(Y) = max(0, W2*F1(Y) + B2)   (2)
where W2 represents n2 filters of size n1×f2×f2 and B2 represents n2 bias terms;
the third layer performs image reconstruction, and the reconstruction layer is established by formula (3):
F(Y) = W3*F2(Y) + B3   (3)
where W3 represents c filters of size n2×f3×f3 and B3 represents c bias terms.
In another embodiment of the high-resolution terahertz image reconstruction method based on the above improved SRCNN model, the amplified input image of the target is set as Y and the original image as X, the first convolution layer calculates the output F1 as in formula (1), and the parameters are f1 = 9 and n1 = 64.
In another embodiment of the high-resolution terahertz image reconstruction method based on the above improved SRCNN model, the second convolution layer maps the n1 feature maps generated by the first layer into n2 feature maps, the calculation is given by formula (2), and the parameters are f2 = 1 and n2 = 32.
In another embodiment of the high-resolution terahertz image reconstruction method based on the above improved SRCNN model, the third layer performs image reconstruction, the reconstruction layer is established by formula (3), and the parameters are c = 3 and f3 = 5.
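As an illustration (not stated in the original text), these parameters fix the number of weights in the baseline three-layer SRCNN; a quick check, counting weights only and ignoring biases, is the following sketch:
# Weight count of the baseline three-layer SRCNN (biases ignored), with the parameters given above
c, f1, n1, f2, n2, f3 = 3, 9, 64, 1, 32, 5
layer1 = c * f1 * f1 * n1    # 3*9*9*64  = 15552
layer2 = n1 * f2 * f2 * n2   # 64*1*1*32 = 2048
layer3 = n2 * f3 * f3 * c    # 32*5*5*3  = 2400
print(layer1 + layer2 + layer3)  # 20000 weights in total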
In another embodiment of the method for reconstructing a high-resolution terahertz image based on the above improved SRCNN model according to the present invention, reconstructing a final high-resolution image through the fourth layer includes:
converting an original color image into a gray image, and making a low-resolution image by reducing and amplifying an original image;
carrying out overlapping segmentation on a plurality of images in the training set according to a set step length to obtain a plurality of sub-pictures;
converting the training set and the test set into an H5py format for storage;
importing necessary tool packs and data sets in Tensorflow;
defining an initialization function that adds random noise to the weights to break complete symmetry, and setting a standard deviation;
defining a loss function to evaluate the training result, selecting Adam as the optimizer, and giving a very small learning rate;
starting the training process, initializing all parameters, setting the batch_size, the number of iterations, and the interval at which the training loss is output, and evaluating the performance of the model in real time;
and reconstructing the sub-picture into a whole picture as a reconstructed high-resolution output picture.
In another embodiment of the method for reconstructing a high-resolution terahertz image based on the above improved SRCNN model, the value of the set standard deviation is 0.01.
In another embodiment of the method for reconstructing a high-resolution terahertz image based on the above improved SRCNN model, the value given to the very small learning rate is 0.000001.
In another embodiment of the method for reconstructing a high-resolution terahertz image based on the above improved SRCNN model, the batch_size, the number of iterations, and the interval at which the training loss is output are respectively set as follows: batch_size is 128, the number of iterations is 100000, and the output interval is set to 1000 iterations.
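For reference, the training settings listed above can be gathered in one place; this is only an illustrative sketch, and the variable names are not taken from the original code:
# Illustrative summary of the training hyperparameters described above
stddev = 0.01              # standard deviation for the random weight initialization
learning_rate = 0.000001   # very small learning rate given to the Adam optimizer
batch_size = 128           # mini-batch size
train_num = 100000         # total number of training iterations
log_interval = 1000        # the training loss is output every 1000 iterations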
Compared with the prior art, the invention has the following advantages:
the invention provides a high-resolution terahertz image reconstruction method based on an SRCNN improved model, which solves the problem that the existing SRCNN model needs a large amount of time when the model is trained, and obtains a final model and optimal parameters by continuously modifying weights and other parameters in the model until an insertion loss value reaches a small value. The improved SRCNN model of the four-layer convolutional neural network designed by the invention is trained, and compared with the SRCNN model in the prior art, the PSNR amount of the SRCNN model is improved by 2.4dB.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
The invention will be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
fig. 1 is a flowchart of an embodiment of a high-resolution terahertz image reconstruction method based on an improved SRCNN model according to the present invention;
fig. 2 is a flowchart of another embodiment of the high-resolution terahertz image reconstruction method based on the improved SRCNN model according to the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
As shown in fig. 1, the high-resolution terahertz image reconstruction method based on the SRCNN improved model includes:
10, constructing an improved SRCNN model structure with double-layer feature extraction based on the SRCNN model structure, performing interpolation amplification after the double-layer feature extraction, and then performing pooling. The number of layers of the model is increased: in the prior art only one feature extraction layer is used, which introduces noise and extracts information incompletely, so the method uses double-layer feature extraction; compared with the SRCNN model before improvement, interpolation amplification is not carried out first and pooling is carried out directly, so that the quality and the resolution of the image are improved;
20, passing the output of the preceding layers of the improved SRCNN model structure in turn through the convolution of a third layer, and regarding the result as the nonlinear mapping of a fully connected layer;
and 30, reconstructing the final high-resolution image through a fourth layer.
In the improved SRCNN model, the end-to-end mapping can be seen more intuitively; a separate pooling layer is not added in the improved model. A pooling layer reduces the size of the picture, thereby reducing the number of parameters, reducing the amount of calculation, and improving the training speed.
Constructing the improved SRCNN model structure with double-layer feature extraction based on the SRCNN model structure, performing interpolation amplification after the double-layer feature extraction, and then performing pooling comprises the following steps:
setting the amplified input image of the target as Y and the original image as X, and calculating the output F1 by the first convolution layer as shown in formula (1):
F1(Y) = max(0, W1*Y + B1)   (1)
where W1 and B1 denote the filters and the weight bias, * is the convolution operation, W1 represents n1 filters of size c×f1×f1, c is the number of channels of the image (3 for a color image, 1 for a grayscale image), and B1 represents n1 bias terms;
the second convolution layer maps the n1 feature maps generated by the first layer into n2 feature maps, and the calculation is given by formula (2):
F2(Y) = max(0, W2*F1(Y) + B2)   (2)
where W2 and B2 denote the filters and the weight bias, W2 represents n2 filters of size n1×f2×f2 and B2 represents n2 bias terms;
the third layer performs image reconstruction, and the reconstruction layer is established by formula (3):
F(Y) = W3*F2(Y) + B3   (3)
where W3 represents c filters of size n2×f3×f3 and B3 represents c bias terms.
The amplified input image of the target is set as Y and the original image as X; the first convolution layer calculates the output F1 as in formula (1), with parameters f1 = 9 and n1 = 64.
The second convolution layer maps the n1 feature maps generated by the first layer into n2 feature maps, with the calculation given by formula (2) and parameters f2 = 1 and n2 = 32.
The third layer performs image reconstruction; the reconstruction layer is established by formula (3), with parameters c = 3 and f3 = 5.
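Putting the above together with the filter sizes defined in step 105 below (9×9, 3×3, 1×1 and 5×5 kernels with 64, 32, 16 and c output channels respectively), a plausible way to write the forward pass of the improved four-layer model is the following; this summary is inferred from the weight definitions in the embodiment rather than quoted from the original formulas:
F1(Y) = max(0, W1*Y + B1), with W1 being 64 filters of size c×9×9 (first feature extraction layer)
F2(Y) = max(0, W2*F1(Y) + B2), with W2 being 32 filters of size 64×3×3 (second feature extraction layer)
F3(Y) = max(0, W3*F2(Y) + B3), with W3 being 16 filters of size 32×1×1 (nonlinear mapping layer)
F(Y) = W4*F3(Y) + B4, with W4 being c filters of size 16×5×5 (reconstruction layer)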
The following example uses these initial parameters: the data set contains 91 training images in total and 19 test images, of which 5 come from the Set5 data set and 14 from the Set14 data set.
As shown in fig. 2, reconstructing the final high-resolution image through the fourth layer includes:
101, converting an original color image into a grayscale image, and making a low-resolution image by reducing and enlarging an original image, wherein the implementation procedure comprises the following steps:
image = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)   # convert the BGR image to the YCrCb color space
image = image[:, :, 0:3]
im_label = modcrop(image, scale)                   # crop so the size is divisible by the scale factor
(hei, wid, _) = im_label.shape
# make the low-resolution input: shrink by the scale factor, then enlarge back with bicubic interpolation
im_input = cv2.resize(im_label, (0, 0), fx=1.0/scale, fy=1.0/scale, interpolation=cv2.INTER_CUBIC)
im_input = cv2.resize(im_input, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
102, dividing the images in the training set into overlapping patches according to a set step length to obtain a plurality of sub-pictures, for example:
the 91 images in the training set are divided into overlapping patches with a step size of 14; the size of each divided sub-picture is 33 × 33, so that 21884 sub-pictures are obtained. The specific implementation is as follows:
for x in range(0, hei-size_input+1, stride):
    for y in range(0, wid-size_input+1, stride):
        # cut out one input patch and the corresponding (smaller) label patch
        subim_input = im_input[x:x+size_input, y:y+size_input, 0:3]
        subim_label = im_label[x+padding:x+padding+size_label, y+padding:y+padding+size_label, 0:3]
        subim_input = subim_input.reshape([size_input, size_input, 3])
        subim_label = subim_label.reshape([size_label, size_label, 3])
        # each pair is collected into the arrays (data, label) that are stored in step 103
103, converting the training set and the test set into HDF5 (h5py) format for storage, which solves the problems of large memory occupation and slow reading. The implementation is as follows:
# write the test set to its HDF5 file
with h5py.File(savepath, 'w') as hf:
    hf.create_dataset('test_input', data=im_input)
    hf.create_dataset('test_label', data=im_label)
# write the training set (the collected sub-pictures) to its HDF5 file
with h5py.File(savepath, 'w') as hf:
    hf.create_dataset('input', data=data)
    hf.create_dataset('label', data=label)
104, importing the necessary tool packages and the data sets in TensorFlow. The implementation is as follows:
import tensorflow as tf
import h5py
import numpy as np
import matplotlib.pyplot as plt
import string
import cv2

# load the training data and labels
with h5py.File('train_py.h5', 'r') as hf:
    hf_data = hf.get('input')
    data = np.array(hf_data)
    hf_label = hf.get('label')
    label = np.array(hf_label)
# load the test data and labels
with h5py.File('test_py.h5', 'r') as hf:
    hf_test_data = hf.get('test_input')
    test_data = np.array(hf_test_data)
    hf_test_label = hf.get('test_label')
    test_label = np.array(hf_test_label)
105, defining an initialization function that adds random noise to the weights to break complete symmetry, and setting a standard deviation. After the toolkit is imported and the data are loaded, a large number of weights and biases are required to implement the convolutional neural network; in a specific embodiment, the standard deviation is set to 0.01. The procedure for setting the weights and biases is as follows:
def init_weights(shape):
    return tf.Variable(tf.random_normal(shape, stddev=0.01))

# four convolution layers: 9x9, 3x3, 1x1 and 5x5 kernels
W1 = init_weights([9, 9, 3, 64])
W2 = init_weights([3, 3, 64, 32])
W3 = init_weights([1, 1, 32, 16])
W4 = init_weights([5, 5, 16, 3])
B1 = tf.Variable(tf.zeros([64]), name="Bias1")
B2 = tf.Variable(tf.zeros([32]), name="Bias2")
B3 = tf.Variable(tf.zeros([16]), name="Bias3")
B4 = tf.Variable(tf.zeros([3]), name="Bias4")
tf.nn.conv2d is the convolution function in TensorFlow. Among its parameters, X is the input, W is the convolution kernel weight, strides is the step size by which the convolution template moves, and padding is the boundary handling mode: 'SAME' pads the boundary with zeros so that the output of the convolution has the same size as the input, while 'VALID' leaves the boundary unprocessed, so the image shrinks after the convolution. Before the data are fed in, the variables can be defined with placeholders; the input is X and the added label is Y. The implementation is as follows:
X = tf.placeholder("float32", [None, 33, 33, 3])   # input sub-pictures
Y = tf.placeholder("float32", [None, 19, 19, 3])   # corresponding labels
L1 = tf.nn.relu(tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='VALID') + B1)
L2 = tf.nn.relu(tf.nn.conv2d(L1, W2, strides=[1, 1, 1, 1], padding='VALID') + B2)
L3 = tf.nn.relu(tf.nn.conv2d(L2, W3, strides=[1, 1, 1, 1], padding='VALID') + B3)
hypothesis = tf.nn.conv2d(L3, W4, strides=[1, 1, 1, 1], padding='VALID') + B4
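As a sanity check on the placeholder shapes (an illustration, not part of the original procedure): with padding='VALID', each convolution shrinks the spatial size by (kernel size - 1), so a 33×33 input patch passed through the 9×9, 3×3, 1×1 and 5×5 kernels comes out as 19×19, which is why the label placeholder Y is 19×19:
size = 33                      # side length of an input sub-picture
for k in (9, 3, 1, 5):         # kernel sizes of W1..W4
    size = size - k + 1        # output size of a 'VALID' convolution
print(size)                    # 19, matching the 19x19 label placeholder Y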
106, defining a loss function to evaluate the training result; Adam is selected as the optimizer and given a very small learning rate. In a specific embodiment, the very small learning rate is 0.000001. The implementation is as follows:
# the target is the residual between the label Y and the corresponding input region,
# which is added back to the interpolated input during reconstruction in step 108
cost = tf.reduce_mean(tf.reduce_sum(tf.square((Y - subim_input) - hypothesis),
                                    reduction_indices=0))
var_list = [W1, W2, W3, W4, B1, B2, B3, B4]
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost, var_list=var_list)
107, starting the training process: initializing all parameters, setting the batch_size, the number of iterations, and the interval at which the training loss is output, and evaluating the performance of the model in real time. In a specific embodiment, batch_size is 128, the number of iterations is 100000, and the training loss is output every 1000 iterations. The implementation is as follows:
with tf.Session() as sess:
    tf.initialize_all_variables().run()
    for i in range(train_num):
        batch_data, batch_label = batch.__next__()   # next mini-batch from the batch generator
        sess.run(optimizer, feed_dict={X: batch_data, Y: batch_label})
        step += 1
        if (epoch_number + step) % 1000 == 0:
            # every 1000 iterations, report the mean training cost over the training set
            print_step = (epoch_number + step)
            epoch_cost_string = "[epoch]:" + str(print_step) + "[cost]:"
            current_cost_sum = 0.0
            mean_batch_size = int(data.shape[0] / 128)
            for j in range(0, mean_batch_size):
                current_cost_sum += sess.run(cost, feed_dict={X: data[j].reshape(1, 33, 33, 3),
                                                              Y: label[j].reshape(1, 19, 19, 3)})
            epoch_cost_string += str(float(current_cost_sum / mean_batch_size))
            epoch_cost_string += "\n"
            print(epoch_cost_string)
108, reconstructing the sub-pictures into a whole picture as the reconstructed high-resolution output picture. The implementation is as follows:
with tf.Session() as sess:
    if (epoch_number + step) % 1000 == 0:
        # run the four-layer network on the test images ('SAME' padding keeps the original size)
        test_L1 = tf.nn.relu(tf.nn.conv2d(test_data, W1, strides=[1, 1, 1, 1], padding='SAME') + B1)
        test_L2 = tf.nn.relu(tf.nn.conv2d(test_L1, W2, strides=[1, 1, 1, 1], padding='SAME') + B2)
        test_L3 = tf.nn.relu(tf.nn.conv2d(test_L2, W3, strides=[1, 1, 1, 1], padding='SAME') + B3)
        test_hypothesis = tf.nn.conv2d(test_L3, W4, strides=[1, 1, 1, 1], padding='SAME') + B4
        # the network output is a residual that is added back to the interpolated input
        output_image = sess.run(test_hypothesis)[0, :, :, 0:3]
        output_image += test_data[0, :, :, 0:3]
        # clip the result to the valid [0, 1] range
        for k in range(0, test_data.shape[1]):
            for j in range(0, test_data.shape[2]):
                for c in range(0, 3):
                    if output_image[k, j, c] > 1.0:
                        output_image[k, j, c] = 1
                    elif output_image[k, j, c] < 0:
                        output_image[k, j, c] = 0
        # convert back to an 8-bit RGB image and save it
        temp_image = (output_image * 255).astype('uint8')
        temp_image = cv2.cvtColor(temp_image, cv2.COLOR_YCrCb2RGB)
        plt.imshow(temp_image)
        subname = "shot/" + str(epoch_number + step) + ".jpg"
        plt.savefig(subname)
in the present specification, the embodiments are described in a progressive manner, and each embodiment focuses on differences from other embodiments, and the same or similar parts in each embodiment are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to practitioners skilled in this art. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
Claims (8)
1. A high-resolution terahertz image reconstruction method based on an SRCNN improved model is characterized by comprising the following steps:
constructing an improved SRCNN model structure of double-layer feature extraction based on the model structure of the SRCNN, carrying out interpolation amplification after double-layer feature extraction, and then carrying out pooling;
passing the output of each preceding layer of the improved SRCNN model structure in turn through the convolution of the third layer, and regarding the result as the nonlinear mapping of a fully connected layer;
reconstructing a final high-resolution image through a fourth layer of convolution;
wherein reconstructing the final high-resolution image through the fourth layer comprises:
converting an original color image into a gray image, and making a low-resolution image by reducing and amplifying an original image;
carrying out overlapping segmentation on a plurality of images in the training set according to a set step length to obtain a plurality of sub-pictures;
converting the training set and the test set into an H5py format for storage;
importing necessary tool packs and data sets in Tensorflow;
defining an initialization function that adds random noise to the weights to break complete symmetry, and setting a standard deviation;
defining a loss function to evaluate the training result, selecting Adam as the optimizer, and giving a very small learning rate;
starting the training process, initializing all parameters, setting the batch_size, the number of iterations, and the interval at which the training loss is output, and evaluating the performance of the model in real time;
and reconstructing the sub-picture into a whole picture as a reconstructed high-resolution output picture.
2. The high-resolution terahertz image reconstruction method based on the improved SRCNN model as claimed in claim 1, wherein constructing the improved SRCNN model structure with double-layer feature extraction based on the SRCNN model structure, performing interpolation amplification after the double-layer feature extraction, and then performing pooling comprises:
setting the amplified input image of the target as Y and the original image as X, and calculating the output F1 by the first convolution layer as shown in formula (1):
F1(Y) = max(0, W1*Y + B1)   (1)
where * denotes the convolution operation, W1 represents n1 filters of size c×f1×f1, c is the number of channels of the image (3 for a color image, 1 for a grayscale image), and B1 represents n1 bias terms;
the second convolution layer maps the n1 feature maps generated by the first layer into n2 feature maps, and the calculation is given by formula (2):
F2(Y) = max(0, W2*F1(Y) + B2)   (2)
where W2 represents n2 filters of size n1×f2×f2 and B2 represents n2 bias terms;
the third layer performs image reconstruction, and the reconstruction layer is established by formula (3):
F(Y) = W3*F2(Y) + B3   (3)
where W3 represents c filters of size n2×f3×f3 and B3 represents c bias terms;
f1 = 9, f2 = 1, f3 = 5.
3. The high-resolution terahertz image reconstruction method based on the improved SRCNN model as claimed in claim 2, wherein the amplified input image of the target is set as Y and the original image as X, the first convolution layer calculates the output F1 as in formula (1), and the parameter is n1 = 64.
4. The high-resolution terahertz image reconstruction method based on the improved SRCNN model as claimed in claim 2, wherein the second convolution layer maps the n1 feature maps generated by the first layer into n2 feature maps, the calculation is given by formula (2), and the parameter is n2 = 32.
5. The high-resolution terahertz image reconstruction method based on the improved SRCNN model as claimed in claim 2, wherein the third layer performs image reconstruction, the reconstruction layer is established by formula (3), and the parameter is c = 3.
6. The high-resolution terahertz image reconstruction method based on the improved SRCNN model as claimed in claim 1, wherein the value of the set standard deviation is 0.01.
7. The high-resolution terahertz image reconstruction method based on the improved SRCNN model, wherein the value given to the very small learning rate is 0.000001.
8. The high-resolution terahertz image reconstruction method based on the improved SRCNN model according to claim 1, wherein the batch_size, the number of iterations, and the interval at which the training loss is output are respectively set as follows: batch_size is 128, the number of iterations is 100000, and the output interval is set to 1000 iterations.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201810806760.XA | 2018-07-18 | 2018-07-18 | High-resolution terahertz image reconstruction method based on SRCNN improved model
Publications (2)
Publication Number | Publication Date |
---|---
CN109191376A CN109191376A (en) | 2019-01-11 |
CN109191376B true CN109191376B (en) | 2022-11-25 |
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant