CN108647775A - Super-resolution image reconstruction method based on full convolutional neural networks single image - Google Patents
- Publication number: CN108647775A (application CN201810376429.9A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption by Google, not a legal conclusion)
Classifications
- G06T3/4053 — Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4076 — Super-resolution scaling using the original low-resolution images to iteratively correct the high-resolution images
- G06N3/045 — Computing arrangements based on biological models; neural network architectures; combinations of networks
- G06T7/10 — Image analysis; segmentation; edge detection
- G06T2207/30168 — Indexing scheme for image analysis; image quality inspection
Abstract
A super-resolution image reconstruction method for a single image based on a fully convolutional neural network. The method consists of splitting the acquired images, making a network training set and pre-processing the test-set images, building the fully convolutional neural network, training the fully convolutional neural network, and reconstructing super-resolution versions of the test-set images. Because the invention builds a fully convolutional neural network model composed of an original-image feature extraction module, a feature high-dimensional mapping module, and a residual extraction module, it improves the quality of the reconstructed image and enriches its details. Given only a low-resolution image, the present invention can reconstruct a high-quality super-resolution image with the fully convolutional neural network model.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to super-resolution image reconstruction of a single image with a fully convolutional network.
Technical background
In fields such as video surveillance, satellite imagery, and medical imaging, the resolution of the collected images is often too low, so that the collected images cannot be used. Super-resolution reconstruction refers to the family of techniques that reconstructs the corresponding high-resolution image from the collected low-resolution image.
Because a low-resolution image retains only part of the high-resolution image's information and has lost a large amount of high-frequency detail, reconstructing a high-resolution image from the small amount of information in a single low-resolution image has always been a difficult research problem.
In recent years, techniques based on deep learning have made great progress in image processing and speech analysis. Among them, convolutional neural networks, thanks to weight sharing and local connectivity, greatly reduce the complexity of the network model, while the advent of residual networks has made it possible to build much deeper network models. In addition, fully convolutional networks accept inputs of different sizes, making it possible for a single network model to process images of different sizes.
The mainstream deep-learning algorithms for single-image super-resolution reconstruction currently take the result of bicubic interpolation as input and then obtain a larger receptive field by continually deepening the network, so as to better exploit the contextual information in the input image. As network depth grows, however, the contribution of the features produced by the shallow convolutional layers to the reconstruction gradually decreases, and these shallow features are under-utilized in the network model.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the shortcomings of the above prior art and to provide a single-image super-resolution reconstruction method based on a fully convolutional neural network that has a simple workflow and a good reconstruction effect.
The technical solution adopted to solve the above technical problem comprises the following steps:
(1) Split the acquired images
Acquire no fewer than 200 color images and split them into training-dataset images and test-set images at a ratio of 3:1.
(2) Make the network training set and pre-process the test-set images
Enlarge the training-dataset images with data augmentation to make the network training set, and pre-process the test-set images.
In this step, the data augmentation that enlarges the training dataset to make the network training set is:
1) Rotate every image in the training dataset clockwise by 90°, 180°, and 270° in turn, and add each rotated image to the training dataset.
2) Horizontally flip every image in the training dataset.
3) Convert every image in the training dataset from the red-green-blue color space to the luminance/blue-chroma/red-chroma color space, extract the luminance channel, and cut pixel blocks of 16~64 × 16~64 from it.
4) Down-sample each pixel block by bicubic interpolation at factors of 2, 3, and 4, then up-sample each result by the same factor back to the original size. The up-sampled result is the input of the network model and the original pixel block is its output, which yields the training set.
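Steps 1)-4) above can be sketched in a few lines of numpy. This is a hedged illustration, not the patent's code: `augment`, `extract_patches`, and `degrade` are names chosen here, and `degrade` substitutes a simple average-pool-and-repeat pair for the patent's bicubic down- and up-sampling, which would normally come from an image library.

```python
import numpy as np

def augment(luma):
    """Return the 8 variants of a luminance image: the original, its
    three clockwise rotations, and the horizontal flips of all four."""
    rotations = [np.rot90(luma, -k) for k in range(4)]  # -k = clockwise
    return rotations + [np.fliplr(r) for r in rotations]

def extract_patches(luma, size=32, stride=32):
    """Cut non-overlapping size x size pixel blocks from a luminance channel."""
    h, w = luma.shape
    return [luma[i:i + size, j:j + size]
            for i in range(0, h - size + 1, stride)
            for j in range(0, w - size + 1, stride)]

def degrade(patch, scale):
    """Stand-in for the bicubic down-/up-sampling pair of step 4):
    average-pool by `scale`, then repeat each pixel `scale` times, so the
    network input keeps the patch's full size but has lost detail.
    Assumes the patch side is divisible by `scale`."""
    h, w = patch.shape
    low = patch.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    return np.repeat(np.repeat(low, scale, axis=0), scale, axis=1)
```

Each (degraded input, original patch) pair is one training sample; running `degrade` at scales 2, 3, and 4 triples the sample count, as in step 4).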
In this step, pre-processing the test-set images is:
Convert all test-set images from the red-green-blue color space to the luminance/blue-chroma/red-chroma color space and extract the luminance channel as the test set.
(3) Build the fully convolutional neural network
The fully convolutional neural network comprises an original-image feature extraction module, a feature high-dimensional mapping module, and a residual extraction module. The output of the feature extraction module is connected to the input of the feature high-dimensional mapping module, and the output of the feature high-dimensional mapping module is connected to the input of the residual extraction module, which builds the fully convolutional neural network.
(4) Train the fully convolutional neural network
Feed the training set obtained in step (2) into the fully convolutional neural network built in step (3) and train it while dynamically adjusting the learning rate of the network model, obtaining a trained fully convolutional neural network.
In this step, training with a dynamically adjusted learning rate means: use the mean squared error as the loss function, treat every 10,000 traversed samples as one generation, reduce the learning rate to 0.1 of its current value every 10 generations, and iterate for 100 generations.
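The stepped schedule can be written out directly. The starting value `base_lr` is an assumption made here for illustration; the patent does not state one.

```python
def learning_rate(generation, base_lr=0.01):
    """Learning rate for a given generation (one generation = 10,000
    traversed samples), reduced to 0.1 of its current value every
    10 generations, as the training schedule above states."""
    return base_lr * (0.1 ** (generation // 10))

# Over the 100-generation run the rate steps down after every 10th generation.
schedule = [learning_rate(g) for g in range(100)]
```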
(5) Reconstruct the super-resolution images of the test set
Feed the test set obtained in step (2) into the fully convolutional neural network trained in step (4) to obtain the network output, and reconstruct the super-resolution image of each test-set image from that output.
In this step, reconstructing the super-resolution image of a test-set image from the network output means: convert the color space of the test-set image from red-green-blue to luminance/blue-chroma/red-chroma, replace the luminance layer of the original test-set image with the corresponding output of the fully convolutional neural network, and convert back from luminance/blue-chroma/red-chroma to the red-green-blue color space, obtaining the single-image super-resolution result based on the fully convolutional network.
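A numpy sketch of this luminance-replacement step. The BT.601 full-range conversion constants are an assumption: the patent names the color spaces but not the exact matrix.

```python
import numpy as np

# Assumed BT.601 full-range RGB -> YCbCr (luminance, blue-chroma,
# red-chroma) matrix; the inverse is computed from it.
M = np.array([[ 0.299,  0.587,  0.114],
              [-0.169, -0.331,  0.500],
              [ 0.500, -0.419, -0.081]])

def rgb_to_ycbcr(rgb):
    return rgb @ M.T

def ycbcr_to_rgb(ycc):
    return ycc @ np.linalg.inv(M).T

def rebuild(test_rgb, network_luma):
    """Replace the luminance plane of a test image with the network's
    output and convert back to RGB, as in step (5)."""
    ycc = rgb_to_ycbcr(test_rgb)
    ycc[..., 0] = network_luma
    return ycbcr_to_rgb(ycc)
```

Only the luminance channel is reconstructed by the network; the two chroma channels of the low-resolution image are reused unchanged.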
In step (3) of building the fully convolutional neural network, the original-image feature extraction module of the present invention is: the output of a first concatenated rectified linear unit (CReLU) is connected to the input of a first squeeze-and-excitation unit, the output of the first squeeze-and-excitation unit is connected to the input of a second CReLU, and the output of the second CReLU is connected to the input of a second squeeze-and-excitation unit, which builds the original-image feature extraction module.
The feature high-dimensional mapping module of the present invention is given by:
x_n = [x_{n-1}, C(x_{n-1})]
where C is a capsule module, x_{n-1} is the input of the n-th capsule module, C(x_{n-1}) is its output, [·,·] denotes concatenation, x_0 is the output of the original-image feature extraction module, x_n is the output of the feature high-dimensional mapping module, and n is the number of capsule modules, a finite positive integer.
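The mapping above can be sketched at the level of tensor shapes. The randomly weighted 1 × 1 channel-mixing map used here is a hypothetical stand-in for the capsule module C (the real module is three convolutions plus a squeeze-and-excitation unit); what matters is that each step concatenates 128 new feature maps, so after n steps x_n has 128(n + 1) channels.

```python
import numpy as np

def capsule_stub(x):
    """Stand-in for one capsule module C: maps however many channels
    come in down to 128 with a random 1x1 channel-mixing matrix."""
    w = np.random.randn(128, x.shape[0]) * 0.01
    return np.tensordot(w, x, axes=([1], [0]))  # shape (128, H, W)

def high_dimensional_mapping(x0, n=3):
    """x_n = [x_{n-1}, C(x_{n-1})]: each step concatenates the capsule
    output onto its input along the channel axis (n = 25 in the patent;
    a small n is used here for illustration)."""
    x = x0
    for _ in range(n):
        x = np.concatenate([x, capsule_stub(x)], axis=0)
    return x
```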
The residual extraction module of the present invention is built as follows:
1) Extract the primitive-feature residual
Reduce the output of the feature high-dimensional mapping module with one convolutional layer of kernel size 128 × 1 × 1 down to the same dimensionality as the output of the original-image feature extraction module, obtaining the primitive-feature residual.
2) Extract the global residual
Add the primitive-feature residual to the output of the original-image feature extraction module, and reduce the sum with one convolutional layer of kernel size 1 × 3 × 3 down to the same dimensionality as the input of the fully convolutional neural network, obtaining the global residual.
3) Build the residual extraction module
Add the global residual to the input of the fully convolutional neural network, which builds the residual extraction module.
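A shape-level numpy sketch of the three sub-steps, with randomly weighted 1 × 1 channel-mixing maps standing in for the patent's convolutional layers (the second convolution is 1 × 3 × 3 in the patent; a 1 × 1 map is used here so the sketch stays short):

```python
import numpy as np

def conv1x1(x, c_out):
    """A 1x1 convolution is a per-pixel linear map over channels."""
    w = np.random.randn(c_out, x.shape[0]) * 0.01
    return np.tensordot(w, x, axes=([1], [0]))

def residual_extraction(mapping_out, feature_out, network_in):
    """Steps 1)-3): reduce the mapping output to the feature module's
    channel count and add it (primitive-feature residual), reduce the sum
    to the network input's channel count (global residual), then add the
    network input so only a residual must be learned."""
    primitive_residual = conv1x1(mapping_out, feature_out.shape[0])
    global_residual = conv1x1(primitive_residual + feature_out,
                              network_in.shape[0])
    return global_residual + network_in
```

The final addition of the network input is the global skip connection: the network only has to predict the missing high-frequency detail.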
In step (3) of building the fully convolutional neural network, the capsule module of the present invention is: the output of a first convolution unit is connected to the input of a second convolution unit, the output of the second convolution unit is connected to the input of a third convolution unit, and the output of the third convolution unit is connected to the input of a third squeeze-and-excitation unit.
In step (3) of building the fully convolutional neural network, the first convolution unit of the present invention consists of a batch normalization layer, a rectified linear unit layer, and a convolutional layer; the output of the batch normalization layer is connected to the input of the rectified linear unit layer, and the output of the rectified linear unit layer is connected to the input of the convolutional layer. The second and third convolution units have the same structure as the first.
In step (3) of building the fully convolutional neural network, the convolution kernel size in the first CReLU of the present invention is 64 × 3 × 3 with an offset of 1, and the second CReLU is identical to the first.
In step (3) of building the fully convolutional neural network, the first squeeze-and-excitation unit of the present invention consists of a global average pooling layer, a first fully connected layer, a second fully connected layer, and a sigmoid activation layer, with output dimensions 128 × 1 × 1, 8 × 1 × 1, 128 × 1 × 1, and 128 × 1 × 1 respectively. The output of the global average pooling layer is connected to the input of the first fully connected layer, the output of the first fully connected layer to the input of the second fully connected layer, and the output of the second fully connected layer to the input of the sigmoid activation layer; the second squeeze-and-excitation unit is identical to the first.
In step (3) of building the fully convolutional neural network, the third squeeze-and-excitation unit of the present invention has the same composition, dimensions, and connections as the first.
In step (3) of building the fully convolutional neural network, the convolution kernel sizes of the first, second, and third convolution units of the capsule module of the present invention are 128 × 1 × 1, 128 × 3 × 3, and 128 × 3 × 3 respectively.
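A minimal numpy sketch of one squeeze-and-excitation unit with the stated dimensions (128 → 8 → 128). The final channel-wise rescaling of the input is an assumption, as is usual for squeeze-and-excitation; the patent only lists the layer dimensions and connections.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def squeeze_excite(x, w1, w2):
    """One squeeze-and-excitation unit over a (128, H, W) feature map:
    global average pooling to 128 values, a 128->8 and an 8->128 fully
    connected layer, a sigmoid, then channel-wise rescaling of the input.
    (The patent lists no activation between the two FC layers, so none
    is used here.)"""
    squeezed = x.mean(axis=(1, 2))          # (128,) global average pool
    gates = sigmoid(w2 @ (w1 @ squeezed))   # (128,) per-channel weights
    return x * gates[:, None, None]
```

The gates let the network emphasize informative feature channels and suppress the rest at almost no parameter cost (128·8 + 8·128 weights).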
Compared with the prior art, the present invention has the following advantages:
Because the invention builds a fully convolutional neural network model composed of an original-image feature extraction module, a feature high-dimensional mapping module, and a residual extraction module, it improves the quality of the reconstructed image and enriches its details. Given only a low-resolution image, the invention can reconstruct a high-quality super-resolution image with the fully convolutional neural network model.
Description of the drawings
Fig. 1 is the flow chart of Embodiment 1 of the present invention.
Fig. 2 is the flow chart for building the capsule module of the fully convolutional neural network of Fig. 1.
Fig. 3 is the result of processing a test-set image with the bicubic interpolation method.
Fig. 4 is the result of processing a test-set image with the single-image super-resolution reconstruction method based on a fully convolutional neural network.
Specific implementation modes
The present invention is described in more detail below with reference to the accompanying drawings and embodiments, but the present invention is not limited to the following embodiments.
Embodiment 1
Taking 292 color images chosen from the VOC2012 image dataset as an example, the single-image super-resolution reconstruction method based on a fully convolutional neural network of this embodiment, shown in Fig. 1, comprises the following steps:
(1) Split the acquired images
Choose 292 color images from the VOC2012 image dataset and split them into a training dataset and a test set at a ratio of 3:1, i.e. a training dataset of 219 images and a test set of 73 images.
(2) Make the network training set and pre-process the test-set images
Enlarge the training-dataset images with data augmentation to make the network training set, and pre-process the test-set images.
The data augmentation that enlarges the training dataset to make the network training set is:
1) Rotate all 219 images in the training dataset clockwise by 90°, 180°, and 270° in turn, and add each rotated image to the training dataset.
2) Horizontally flip every image in the training dataset.
3) Convert every image in the training dataset from the red-green-blue color space to the luminance/blue-chroma/red-chroma color space, extract the luminance channel, and cut 32 × 32 pixel blocks from it.
4) Down-sample each pixel block by bicubic interpolation at factors of 2, 3, and 4, then up-sample each result by the same factor back to the original size. The up-sampled result is the input of the network model and the original pixel block is its output, which yields the training set.
Pre-processing the test-set images is: convert all test-set images from the red-green-blue color space to the luminance/blue-chroma/red-chroma color space; the extracted luminance channel serves as the test set.
(3) Build the fully convolutional neural network
The fully convolutional neural network comprises an original-image feature extraction module, a feature high-dimensional mapping module, and a residual extraction module. The output of the feature extraction module is connected to the input of the feature high-dimensional mapping module, and the output of the feature high-dimensional mapping module is connected to the input of the residual extraction module, which builds the fully convolutional neural network.
The original-image feature extraction module is: the output of a first concatenated rectified linear unit (CReLU) is connected to the input of a first squeeze-and-excitation unit, the output of the first squeeze-and-excitation unit is connected to the input of a second CReLU, and the output of the second CReLU is connected to the input of a second squeeze-and-excitation unit, which builds the original-image feature extraction module. The CReLU was disclosed at the ICML 2016 conference; the convolution kernel size in the first CReLU is 64 × 3 × 3 with an offset of 1, and the second CReLU is identical to the first. The first squeeze-and-excitation unit consists of a global average pooling layer, a first fully connected layer, a second fully connected layer, and a sigmoid activation layer, with output dimensions 128 × 1 × 1, 8 × 1 × 1, 128 × 1 × 1, and 128 × 1 × 1 respectively; the output of the global average pooling layer is connected to the input of the first fully connected layer, the output of the first fully connected layer to the input of the second fully connected layer, and the output of the second fully connected layer to the input of the sigmoid activation layer. The second squeeze-and-excitation unit is identical to the first.
The feature high-dimensional mapping module is given by:
x_n = [x_{n-1}, C(x_{n-1})]
where C is a capsule module, x_{n-1} is the input of the n-th capsule module, C(x_{n-1}) is its output, [·,·] denotes concatenation, x_0 is the output of the original-image feature extraction module, x_n is the output of the feature high-dimensional mapping module, and n is the number of capsule modules, here 25.
In Fig. 2, the capsule module of this embodiment is: the output of a first convolution unit is connected to the input of a second convolution unit, the output of the second convolution unit is connected to the input of a third convolution unit, and the output of the third convolution unit is connected to the input of a third squeeze-and-excitation unit. The first convolution unit consists of a batch normalization layer, a rectified linear unit layer, and a convolutional layer; the output of the batch normalization layer is connected to the input of the rectified linear unit layer, and the output of the rectified linear unit layer to the input of the convolutional layer. The second and third convolution units have the same structure as the first. The convolution kernel sizes of the first, second, and third convolution units are 128 × 1 × 1, 128 × 3 × 3, and 128 × 3 × 3 respectively. The third squeeze-and-excitation unit of this embodiment consists of a global average pooling layer, a first fully connected layer, a second fully connected layer, and a sigmoid activation layer, with output dimensions 128 × 1 × 1, 8 × 1 × 1, 128 × 1 × 1, and 128 × 1 × 1 respectively, connected in that order.
The residual extraction module is built as follows:
1) Extract the primitive-feature residual
Reduce the output of the feature high-dimensional mapping module with one convolutional layer of kernel size 128 × 1 × 1 down to the same dimensionality as the output of the original-image feature extraction module, obtaining the primitive-feature residual.
2) Extract the global residual
Add the primitive-feature residual to the output of the original-image feature extraction module, and reduce the sum with one convolutional layer of kernel size 1 × 3 × 3 down to the same dimensionality as the input of the fully convolutional neural network, obtaining the global residual.
3) Build the residual extraction module
Add the global residual to the input of the fully convolutional neural network, which builds the residual extraction module.
(4) Train the fully convolutional neural network
Feed the training set obtained in step (2) into the fully convolutional neural network built in step (3) and train it while dynamically adjusting the learning rate of the network model, obtaining a trained fully convolutional neural network.
Training with a dynamically adjusted learning rate means: use the mean squared error as the loss function, treat every 10,000 traversed samples as one generation, reduce the learning rate to 0.1 of its current value every 10 generations, and iterate for 100 generations.
(5) Reconstruct the super-resolution images of the test set
Feed the test set obtained in step (2) into the fully convolutional neural network trained in step (4) to obtain the network output, and reconstruct the super-resolution image of each test-set image from that output.
Reconstructing the super-resolution image of a test-set image from the network output means: convert the color space of the test-set image from red-green-blue to luminance/blue-chroma/red-chroma, replace the luminance layer of the original test-set image with the corresponding output of the fully convolutional neural network, and convert back to the red-green-blue color space, obtaining the single-image super-resolution result based on the fully convolutional network. One image chosen from the super-resolution results is compared with the traditional bicubic-interpolation image; the results are shown in Fig. 3 and Fig. 4. As can be seen from Fig. 3 and Fig. 4, the single-image super-resolution result based on the fully convolutional network shows clearer texture details.
Embodiment 2
Taking 292 color images chosen from the VOC2012 image dataset as an example, the single-image super-resolution reconstruction method based on a fully convolutional neural network comprises the following steps:
(1) Split the acquired images
This step is the same as in Embodiment 1.
(2) Make the network training set and pre-process the test-set images
Enlarge the training-dataset images with data augmentation to make the network training set, and pre-process the test-set images.
The data augmentation that enlarges the training dataset to make the network training set is:
1) Rotate all 219 images in the training dataset clockwise by 90°, 180°, and 270° in turn, and add each rotated image to the training dataset.
2) Horizontally flip every image in the training dataset.
3) Convert every image in the training dataset from the red-green-blue color space to the luminance/blue-chroma/red-chroma color space, extract the luminance channel, and cut 16 × 16 pixel blocks from it.
4) Down-sample each pixel block by bicubic interpolation at factors of 2, 3, and 4, then up-sample each result by the same factor back to the original size. The up-sampled result is the input of the network model and the original pixel block is its output, which yields the training set.
Pre-processing the test-set images is: convert all test-set images from the red-green-blue color space to the luminance/blue-chroma/red-chroma color space; the extracted luminance channel serves as the test set.
The other steps are the same as in Embodiment 1. The single-image super-resolution result based on the fully convolutional network is obtained.
Embodiment 3
Taking 292 color images chosen from the VOC2012 image dataset as an example, the single-image super-resolution reconstruction method based on a fully convolutional neural network comprises the following steps:
(1) Split the acquired images
This step is the same as in Embodiment 1.
(2) Make the network training set and pre-process the test-set images
Enlarge the training-dataset images with data augmentation to make the network training set, and pre-process the test-set images.
The data augmentation that enlarges the training dataset to make the network training set is:
1) Rotate all 219 images in the training dataset clockwise by 90°, 180°, and 270° in turn, and add each rotated image to the training dataset.
2) Horizontally flip every image in the training dataset.
3) Convert every image in the training dataset from the red-green-blue color space to the luminance/blue-chroma/red-chroma color space, extract the luminance channel, and cut 64 × 64 pixel blocks from it.
4) Down-sample each pixel block by bicubic interpolation at factors of 2, 3, and 4, then up-sample each result by the same factor back to the original size. The up-sampled result is the input of the network model and the original pixel block is its output, which yields the training set.
Pre-processing the test-set images is: convert all test-set images from the red-green-blue color space to the luminance/blue-chroma/red-chroma color space; the extracted luminance channel serves as the test set.
The other steps are the same as in Embodiment 1. The single-image super-resolution result based on the fully convolutional network is obtained.
Embodiment 4
Taking 200 color images chosen from the VOC2012 image dataset as an example, the single-image super-resolution reconstruction method based on a fully convolutional neural network comprises the following steps:
(1) Split the acquired images
Choose 200 color images from the VOC2012 image dataset and split them into a training dataset and a test set at a ratio of 3:1, i.e. a training dataset of 150 images and a test set of 50 images.
The other steps are the same as in Embodiment 1. The single-image super-resolution result based on the fully convolutional network is obtained.
To verify the beneficial effects of the present invention, the inventors carried out a simulation experiment with the method of Embodiment 1 under the following conditions:
1. Simulation conditions
Hardware: 4 Nvidia 1080Ti graphics cards, 128 GB of memory.
Software platform: the Caffe framework.
2. Simulation content and results
Tests were run under the above conditions with the method of the present invention. Fig. 3 is the bicubic-interpolation result of a randomly chosen test-set image after 2x down-sampling, and Fig. 4 is the network output obtained after feeding Fig. 3 into the network model as input. As shown in Fig. 4, the reconstruction effect of the method of the present invention is good and the level of detail recovered is high.
Both the quality and the details of the reconstructed images of the present invention are noticeably improved.
Claims (8)
1. A super-resolution image reconstruction method for a single image based on a fully convolutional neural network, characterized by comprising the following steps:
(1) Splitting the acquired images
Acquire no fewer than 200 color images and split them at a 3:1 ratio into a training dataset and a test set;
(2) Making the network training set and pre-processing the test set images
Expand the training dataset using data augmentation to make the network training set, and pre-process the test set images;
In this step, expanding the training dataset by data augmentation to make the network training set consists of:
1) Rotating all images in the training dataset clockwise by 90°, 180°, and 270° in turn, and adding each rotated image to the training dataset;
2) Flipping all images in the training dataset horizontally;
3) Converting all images in the training dataset from the RGB color space to the YCbCr (luminance, blue-chrominance, red-chrominance) color space, extracting the luminance channel, and cutting pixel blocks of 16~64 × 16~64 from it;
4) Down-sampling each pixel block by bicubic interpolation at factors of 2, 3, and 4, then up-sampling each down-sampled result by the same factor to restore the original size; the up-sampled result is the input of the network model and the original pixel block is the output of the network model, yielding the training set;
In this step, the pre-processing of the test set images is:
converting all test set images from the RGB color space to the YCbCr color space and extracting the luminance channel as the test set;
(3) Building the fully convolutional neural network
The fully convolutional neural network comprises a raw-image feature extraction module, a feature high-dimensional mapping module, and a residual extraction module; the output of the feature extraction module is connected to the input of the feature high-dimensional mapping module, and the output of the feature high-dimensional mapping module is connected to the input of the residual extraction module, building the fully convolutional neural network;
(4) Training the fully convolutional neural network
Feed the training set obtained in step (2) into the fully convolutional neural network built in step (3) and train it with a dynamically adjusted learning rate to obtain the trained fully convolutional neural network;
The above training with a dynamically adjusted learning rate is: use the mean squared error as the loss function; each traversal of 10000 samples counts as one epoch; every 10 epochs the learning rate is reduced to 0.1 of its current value; the number of iterations is 100 epochs;
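The dynamic learning-rate rule above, a factor-of-0.1 decay every 10 epochs over 100 epochs, can be written as a simple step schedule. The initial learning rate is not stated in the claim, so it is left as a parameter.

```python
def learning_rate(epoch, base_lr):
    """Step decay: every 10 epochs the rate drops to 0.1 of its current value."""
    return base_lr * (0.1 ** (epoch // 10))

EPOCHS = 100          # "the number of iterations is 100 epochs"
base = 0.1            # hypothetical starting value; not given in the claim
schedule = [learning_rate(e, base) for e in range(EPOCHS)]
assert schedule[0] == base
assert abs(schedule[10] - base * 0.1) < 1e-12
assert abs(schedule[99] - base * 0.1 ** 9) < 1e-20
```

In the Caffe framework named in the simulation section, the same behaviour corresponds to a step-type learning-rate policy with a decay factor of 0.1.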
(5) Reconstructing the super-resolution image of the test set images
Feed the test set obtained in step (2) into the fully convolutional neural network trained in step (4) to obtain the network output, and reconstruct the super-resolution image of the test set images from the network output;
In this step, reconstructing the super-resolution image of a test set image from the network output is: convert the color space of the test set image from RGB to YCbCr, replace the luminance layer of the original test set image with the corresponding output of the fully convolutional neural network, and convert back from YCbCr to the RGB color space, obtaining the single-image super-resolution result based on the fully convolutional network.
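The reconstruction in step (5) only touches the luminance plane; the chrominance planes come unchanged from the original test image. A per-pixel sketch, again assuming the full-range BT.601 conversion (the claim names no standard):

```python
def ycbcr_to_rgb(y, cb, cr):
    """Inverse of the full-range BT.601 conversion (assumed standard)."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return r, g, b

def reconstruct_pixel(network_y, original_cb, original_cr):
    """Replace the luminance with the network output, keep the chrominance."""
    return ycbcr_to_rgb(network_y, original_cb, original_cr)

# grey pixel: Y only, neutral Cb/Cr -> all three RGB channels equal Y
r, g, b = reconstruct_pixel(100.0, 128.0, 128.0)
assert abs(r - 100.0) < 1e-6 and abs(g - 100.0) < 1e-6 and abs(b - 100.0) < 1e-6
```

Because only Y is replaced, the network never has to learn colour reproduction, which is the usual justification for luminance-only super-resolution.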
2. The super-resolution image reconstruction method for a single image based on a fully convolutional neural network according to claim 1, characterized in that in the step (3) of building the fully convolutional neural network, the raw-image feature extraction module is: the output of a first cascaded linear rectification unit is connected to the input of a first squeeze-and-excitation (compression-excitation) unit, the output of the first squeeze-and-excitation unit is connected to the input of a second cascaded linear rectification unit, and the output of the second cascaded linear rectification unit is connected to the input of a second squeeze-and-excitation unit, forming the raw-image feature extraction module;
The feature high-dimensional mapping module is given by the following formula:
x_n = [x_(n-1), C(x_(n-1))]
where C denotes the capsule module, x_(n-1) is the input of the n-th capsule module, C(x_(n-1)) is the output of the n-th capsule module, x_0 is the output of the raw-image feature extraction module, x_n is the output of the feature high-dimensional mapping module, and n, the number of capsule modules, is a finite positive integer;
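The recurrence x_n = [x_(n-1), C(x_(n-1))] concatenates each capsule module's output onto its own input, so the channel count grows with every module, a DenseNet-style pattern. With 128-channel capsule outputs, as claim 8 specifies, the growth can be traced as:

```python
def mapping_module_channels(n, x0_channels=128, capsule_out=128):
    """Channel count of x_n under x_n = [x_(n-1), C(x_(n-1))].

    Each concatenation appends `capsule_out` channels (128 per claim 8)
    to the running feature map; x_0 has `x0_channels` channels.
    """
    channels = x0_channels
    for _ in range(n):
        channels += capsule_out  # concatenation [x, C(x)] stacks channels
    return channels

assert mapping_module_channels(0) == 128   # x_0: feature-extraction output
assert mapping_module_channels(1) == 256
assert mapping_module_channels(3) == 512
```

This growth is why the residual extraction module that follows needs a 1 × 1 convolution to reduce the mapped features back to the feature-extraction dimension.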
The residual extraction module is built as follows:
1) Extract the primitive feature residual
Reduce the output of the feature high-dimensional mapping module, using a convolutional layer with a kernel size of 128 × 1 × 1, to the same dimension as the output of the raw-image feature extraction module, obtaining the primitive feature residual;
2) Extract the global residual
Add the primitive feature residual to the output of the raw-image feature extraction module, and reduce the sum, using a convolutional layer with a kernel size of 1 × 3 × 3, to the same dimension as the input of the fully convolutional neural network, obtaining the global residual;
3) Build the residual extraction module
Add the global residual to the input of the fully convolutional neural network, forming the residual extraction module.
3. The super-resolution image reconstruction method for a single image based on a fully convolutional neural network according to claim 2, characterized in that in the step (3) of building the fully convolutional neural network, the capsule module is: the output of a first convolution unit is connected to the input of a second convolution unit, the output of the second convolution unit is connected to the input of a third convolution unit, and the output of the third convolution unit is connected to the input of a third squeeze-and-excitation unit.
4. The super-resolution image reconstruction method for a single image based on a fully convolutional neural network according to claim 3, characterized in that: in the step (3) of building the fully convolutional neural network, the first convolution unit consists of a batch normalization layer, a linear rectification unit layer, and a convolutional layer; the output of the batch normalization layer is connected to the input of the linear rectification unit layer, and the output of the linear rectification unit layer is connected to the input of the convolutional layer; the second and third convolution units have the same structure as the first convolution unit.
5. The super-resolution image reconstruction method for a single image based on a fully convolutional neural network according to claim 2, characterized in that: in the step (3) of building the fully convolutional neural network, the convolution kernel size in the first cascaded linear rectification unit is 64 × 3 × 3 with an offset of 1, and the second cascaded linear rectification unit is identical to the first cascaded linear rectification unit.
6. The super-resolution image reconstruction method for a single image based on a fully convolutional neural network according to claim 2, characterized in that: in the step (3) of building the fully convolutional neural network, the first squeeze-and-excitation unit consists of a global average pooling layer, a first fully connected layer, a second fully connected layer, and a Sigmoid activation layer; the output dimension of the global average pooling layer is 128 × 1 × 1, the output dimension of the first fully connected layer is 8 × 1 × 1, the output dimension of the second fully connected layer is 128 × 1 × 1, and the output dimension of the Sigmoid activation layer is 128 × 1 × 1; the output of the global average pooling layer is connected to the input of the first fully connected layer, the output of the first fully connected layer is connected to the input of the second fully connected layer, and the output of the second fully connected layer is connected to the input of the Sigmoid activation layer; the second squeeze-and-excitation unit is identical to the first squeeze-and-excitation unit.
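The squeeze-and-excitation unit of claim 6 can be sketched in a few lines of numpy: global average pooling squeezes each of the 128 channels to a scalar, two fully connected layers (128 → 8 → 128) followed by a sigmoid produce per-channel weights, and the weights rescale the feature map. The random weights here are placeholders, and the ReLU between the two fully connected layers is an assumption (the claim specifies only the layers and their dimensions); only the shapes follow the claim.

```python
import numpy as np

def squeeze_excitation(features, w1, w2):
    """features: (128, H, W); w1: (8, 128); w2: (128, 8). Returns rescaled map."""
    squeezed = features.mean(axis=(1, 2))            # global average pool -> (128,)
    hidden = np.maximum(w1 @ squeezed, 0.0)          # first FC, 128 -> 8 (ReLU assumed)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # second FC + sigmoid, 8 -> 128
    return features * weights[:, None, None]         # channel-wise reweighting

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 16, 16))
w1 = rng.standard_normal((8, 128)) * 0.1
w2 = rng.standard_normal((128, 8)) * 0.1
out = squeeze_excitation(x, w1, w2)
assert out.shape == (128, 16, 16)
```

The 128 → 8 bottleneck (a 16× reduction) keeps the unit cheap: it adds only two small matrix products per feature map while letting the network learn which channels to emphasize.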
7. The super-resolution image reconstruction method for a single image based on a fully convolutional neural network according to claim 3, characterized in that: in the step (3) of building the fully convolutional neural network, the third squeeze-and-excitation unit consists of a global average pooling layer, a first fully connected layer, a second fully connected layer, and a Sigmoid activation layer; the output dimension of the global average pooling layer is 128 × 1 × 1, the output dimension of the first fully connected layer is 8 × 1 × 1, the output dimension of the second fully connected layer is 128 × 1 × 1, and the output dimension of the Sigmoid activation layer is 128 × 1 × 1; the output of the global average pooling layer is connected to the input of the first fully connected layer, the output of the first fully connected layer is connected to the input of the second fully connected layer, and the output of the second fully connected layer is connected to the input of the Sigmoid activation layer.
8. The super-resolution image reconstruction method for a single image based on a fully convolutional neural network according to claim 3 or 4, characterized in that: in the step (3) of building the fully convolutional neural network, the convolution kernel sizes of the first, second, and third convolution units of the capsule module are 128 × 1 × 1, 128 × 3 × 3, and 128 × 3 × 3, respectively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810376429.9A CN108647775B (en) | 2018-04-25 | 2018-04-25 | Super-resolution image reconstruction method based on full convolution neural network single image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108647775A true CN108647775A (en) | 2018-10-12 |
CN108647775B CN108647775B (en) | 2022-03-29 |
Family
ID=63747621
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810376429.9A Expired - Fee Related CN108647775B (en) | 2018-04-25 | 2018-04-25 | Super-resolution image reconstruction method based on full convolution neural network single image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108647775B (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109362066A (en) * | 2018-11-01 | 2019-02-19 | 山东大学 | A kind of real-time Activity recognition system and its working method based on low-power consumption wide area network and capsule network |
CN109727197A (en) * | 2019-01-03 | 2019-05-07 | 云南大学 | A kind of medical image super resolution ratio reconstruction method |
CN109784242A (en) * | 2018-12-31 | 2019-05-21 | 陕西师范大学 | EEG Noise Cancellation based on one-dimensional residual error convolutional neural networks |
CN109886875A (en) * | 2019-01-31 | 2019-06-14 | 深圳市商汤科技有限公司 | Image super-resolution rebuilding method and device, storage medium |
CN109903226A (en) * | 2019-01-30 | 2019-06-18 | 天津城建大学 | Image super-resolution rebuilding method based on symmetrical residual error convolutional neural networks |
CN109903219A (en) * | 2019-02-28 | 2019-06-18 | 深圳市商汤科技有限公司 | Image processing method and device, electronic equipment, computer readable storage medium |
CN109922346A (en) * | 2019-02-28 | 2019-06-21 | 兰州交通大学 | A kind of convolutional neural networks for the reconstruct of compressed sensing picture signal |
CN110111257A (en) * | 2019-05-08 | 2019-08-09 | 哈尔滨工程大学 | A kind of super resolution image reconstruction method based on the weighting of feature channel adaptive |
CN110136135A (en) * | 2019-05-17 | 2019-08-16 | 深圳大学 | Dividing method, device, equipment and storage medium |
CN110211057A (en) * | 2019-05-15 | 2019-09-06 | 武汉Tcl集团工业研究院有限公司 | A kind of image processing method based on full convolutional network, device and computer equipment |
CN110766063A (en) * | 2019-10-17 | 2020-02-07 | 南京信息工程大学 | Image classification method based on compressed excitation and tightly-connected convolutional neural network |
CN111354442A (en) * | 2018-12-20 | 2020-06-30 | 中国医药大学附设医院 | Tumor image deep learning assisted cervical cancer patient prognosis prediction system and method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105069825A (en) * | 2015-08-14 | 2015-11-18 | 厦门大学 | Image super resolution reconstruction method based on deep belief network |
CN106910161A (en) * | 2017-01-24 | 2017-06-30 | 华南理工大学 | A kind of single image super resolution ratio reconstruction method based on depth convolutional neural networks |
CN106952229A (en) * | 2017-03-15 | 2017-07-14 | 桂林电子科技大学 | Image super-resolution rebuilding method based on the enhanced modified convolutional network of data |
CN107481188A (en) * | 2017-06-23 | 2017-12-15 | 珠海经济特区远宏科技有限公司 | A kind of image super-resolution reconstructing method |
CN107507134A (en) * | 2017-09-21 | 2017-12-22 | 大连理工大学 | Super-resolution method based on convolutional neural networks |
CN107578377A (en) * | 2017-08-31 | 2018-01-12 | 北京飞搜科技有限公司 | A kind of super-resolution image reconstruction method and system based on deep learning |
Non-Patent Citations (5)
Title |
---|
CHAO DONG et al.: "Image Super-Resolution Using Deep Convolutional Networks", IEEE Transactions on Pattern Analysis and Machine Intelligence * |
SAMUEL SCHULTER et al.: "Fast and Accurate Image Upscaling with Super-Resolution Forests", CVPR 2015 * |
XI CHENG et al.: "SESR: Single Image Super Resolution with Recursive Squeeze and Excitation Networks", arXiv * |
PENG Yali et al.: "Image super-resolution algorithm based on deep deconvolutional neural networks", Journal of Software * |
LI Wei et al.: "Depth image super-resolution reconstruction method based on convolutional neural networks", Journal of Electronic Measurement and Instrumentation * |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109362066B (en) * | 2018-11-01 | 2021-06-25 | 山东大学 | Real-time behavior recognition system based on low-power-consumption wide-area Internet of things and capsule network and working method thereof |
CN109362066A (en) * | 2018-11-01 | 2019-02-19 | 山东大学 | A kind of real-time Activity recognition system and its working method based on low-power consumption wide area network and capsule network |
CN111354442A (en) * | 2018-12-20 | 2020-06-30 | 中国医药大学附设医院 | Tumor image deep learning assisted cervical cancer patient prognosis prediction system and method |
CN109784242A (en) * | 2018-12-31 | 2019-05-21 | 陕西师范大学 | EEG Noise Cancellation based on one-dimensional residual error convolutional neural networks |
CN109784242B (en) * | 2018-12-31 | 2022-10-25 | 陕西师范大学 | Electroencephalogram signal denoising method based on one-dimensional residual convolution neural network |
CN109727197B (en) * | 2019-01-03 | 2023-03-14 | 云南大学 | Medical image super-resolution reconstruction method |
CN109727197A (en) * | 2019-01-03 | 2019-05-07 | 云南大学 | A kind of medical image super resolution ratio reconstruction method |
CN109903226A (en) * | 2019-01-30 | 2019-06-18 | 天津城建大学 | Image super-resolution rebuilding method based on symmetrical residual error convolutional neural networks |
CN109903226B (en) * | 2019-01-30 | 2023-08-15 | 天津城建大学 | Image super-resolution reconstruction method based on symmetric residual convolution neural network |
CN109886875A (en) * | 2019-01-31 | 2019-06-14 | 深圳市商汤科技有限公司 | Image super-resolution rebuilding method and device, storage medium |
CN109903219B (en) * | 2019-02-28 | 2023-06-30 | 深圳市商汤科技有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
CN109922346A (en) * | 2019-02-28 | 2019-06-21 | 兰州交通大学 | A kind of convolutional neural networks for the reconstruct of compressed sensing picture signal |
CN109903219A (en) * | 2019-02-28 | 2019-06-18 | 深圳市商汤科技有限公司 | Image processing method and device, electronic equipment, computer readable storage medium |
CN110111257B (en) * | 2019-05-08 | 2023-01-03 | 哈尔滨工程大学 | Super-resolution image reconstruction method based on characteristic channel adaptive weighting |
CN110111257A (en) * | 2019-05-08 | 2019-08-09 | 哈尔滨工程大学 | A kind of super resolution image reconstruction method based on the weighting of feature channel adaptive |
CN110211057A (en) * | 2019-05-15 | 2019-09-06 | 武汉Tcl集团工业研究院有限公司 | A kind of image processing method based on full convolutional network, device and computer equipment |
CN110211057B (en) * | 2019-05-15 | 2023-08-29 | 武汉Tcl集团工业研究院有限公司 | Image processing method and device based on full convolution network and computer equipment |
CN110136135B (en) * | 2019-05-17 | 2021-07-06 | 深圳大学 | Segmentation method, device, equipment and storage medium |
CN110136135A (en) * | 2019-05-17 | 2019-08-16 | 深圳大学 | Dividing method, device, equipment and storage medium |
CN110766063A (en) * | 2019-10-17 | 2020-02-07 | 南京信息工程大学 | Image classification method based on compressed excitation and tightly-connected convolutional neural network |
CN110766063B (en) * | 2019-10-17 | 2023-04-28 | 南京信息工程大学 | Image classification method based on compressed excitation and tightly connected convolutional neural network |
Also Published As
Publication number | Publication date |
---|---|
CN108647775B (en) | 2022-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108647775A (en) | Super-resolution image reconstruction method based on full convolutional neural networks single image | |
CN109712203B (en) | Image coloring method for generating antagonistic network based on self-attention | |
CN110197468A (en) | A kind of single image Super-resolution Reconstruction algorithm based on multiple dimensioned residual error learning network | |
CN112734646B (en) | Image super-resolution reconstruction method based on feature channel division | |
CN108537733B (en) | Super-resolution reconstruction method based on multi-path deep convolutional neural network | |
CN110634108B (en) | Composite degraded network live broadcast video enhancement method based on element-cycle consistency confrontation network | |
CN107563965A (en) | Jpeg compressed image super resolution ratio reconstruction method based on convolutional neural networks | |
CN108765296A (en) | A kind of image super-resolution rebuilding method based on recurrence residual error attention network | |
CN107155110A (en) | A kind of picture compression method based on super-resolution technique | |
CN110120011A (en) | A kind of video super resolution based on convolutional neural networks and mixed-resolution | |
CN110232653A (en) | The quick light-duty intensive residual error network of super-resolution rebuilding | |
CN112819737B (en) | Remote sensing image fusion method of multi-scale attention depth convolution network based on 3D convolution | |
CN108805808A (en) | A method of improving video resolution using convolutional neural networks | |
CN110136060B (en) | Image super-resolution reconstruction method based on shallow dense connection network | |
CN109509160A (en) | A kind of remote sensing image fusion method by different level using layer-by-layer iteration super-resolution | |
Xiao et al. | A dual-UNet with multistage details injection for hyperspectral image fusion | |
CN109949224A (en) | A kind of method and device of the connection grade super-resolution rebuilding based on deep learning | |
CN112767252B (en) | Image super-resolution reconstruction method based on convolutional neural network | |
CN109191392A (en) | A kind of image super-resolution reconstructing method of semantic segmentation driving | |
CN112288630A (en) | Super-resolution image reconstruction method and system based on improved wide-depth neural network | |
CN114841856A (en) | Image super-pixel reconstruction method of dense connection network based on depth residual channel space attention | |
CN110163803A (en) | A kind of image super-resolution rebuilding method and system based on convolutional neural networks | |
CN111768340A (en) | Super-resolution image reconstruction method and system based on dense multi-path network | |
CN112001843A (en) | Infrared image super-resolution reconstruction method based on deep learning | |
CN110363704A (en) | Merge the image super-resolution rebuilding model construction and method for reconstructing of form and color |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20220329 |