CN109118428B - Image super-resolution reconstruction method based on feature enhancement - Google Patents

Image super-resolution reconstruction method based on feature enhancement

Info

Publication number
CN109118428B
CN109118428B (application CN201810581039.5A)
Authority
CN
China
Prior art keywords
layer
output
resolution reconstruction
network
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810581039.5A
Other languages
Chinese (zh)
Other versions
CN109118428A (en
Inventor
赖睿
官俊涛
徐昆然
李奕诗
王东
杨银堂
王炳健
周慧鑫
秦翰林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201810581039.5A priority Critical patent/CN109118428B/en
Publication of CN109118428A publication Critical patent/CN109118428A/en
Application granted granted Critical
Publication of CN109118428B publication Critical patent/CN109118428B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image super-resolution reconstruction method based on feature enhancement, which comprises the following steps: constructing a feature calibration network; constructing a feature enhancement convolution module according to the feature calibration network; constructing an image super-resolution reconstruction model according to the feature enhancement convolution module; training the image super-resolution reconstruction model; and acquiring a reconstructed image according to the trained super-resolution reconstruction model and the original image. The network adopts residual learning in the super-resolution reconstruction of the image and, in particular, selectively enhances and suppresses the features extracted into the feature maps, so that the reconstructed image has a higher peak signal-to-noise ratio and structural similarity, false information in the reconstructed image is avoided, and a sharper visual effect and more faithful detail restoration are obtained.

Description

Image super-resolution reconstruction method based on feature enhancement
Technical Field
The invention belongs to the field of digital image processing, and particularly relates to an image super-resolution reconstruction method based on feature enhancement.
Background
Images are an important form of information through which humans perceive the world; the richness and detail of an image's content directly determine how finely that content can be perceived. The higher the pixel density per unit area of an image, the clearer the image, the more detail it expresses and the more information it conveys; such an image is called a high-resolution image. Image super-resolution reconstruction has been studied in many fields, such as remote sensing, satellite imaging and medical imaging.
At present, existing image super-resolution reconstruction methods comprise model-based methods and learning-based methods. Model-based super-resolution reconstruction methods mainly include the total variation method, the iterative back-projection method, the Tikhonov regularization method, and the like. Learning-based super-resolution reconstruction methods include the SRCNN method, the VDSR method, and the like; the SRCNN method acquires a stronger sparse coding capability through learning and therefore restores detail better than traditional super-resolution methods, while the VDSR method yields a higher signal-to-noise ratio and a sharper visual effect.
However, the SRCNN method has a relatively poor visual effect, and the features extracted by the VDSR method are mixed, which easily introduces false information into the reconstructed image. Model-based super-resolution reconstruction methods, for their part, rely on hand-designed models that cannot fully describe the mapping between low-resolution and high-resolution images, so texture details may be lost or false information may be generated in the high-resolution image during reconstruction.
Disclosure of Invention
In order to solve the above problems in the prior art, the invention provides an image super-resolution reconstruction method based on feature enhancement. The technical problem addressed by the invention is solved by the following technical solution.
Embodiments of the present invention provide:
An image super-resolution reconstruction method based on feature enhancement comprises the following steps:
constructing a characteristic calibration network;
constructing a characteristic enhancement convolution module according to the characteristic calibration network;
constructing an image super-resolution reconstruction model according to the characteristic enhancement convolution module;
training the image super-resolution reconstruction model;
and acquiring a reconstructed image according to the trained super-resolution reconstruction model and the original image.
In one embodiment of the present invention, the feature calibration network includes: a pooling layer, a first full-connection layer, a first activation layer, a second full-connection layer and a second activation layer; wherein:
the output end of the pooling layer is connected with the input end of the first full-connection layer and is used for compressing the input characteristic diagram;
the output end of the first full-connection layer is connected with the input end of the first activation layer and is used for carrying out weighted fusion on the feature images output by the pooling layer;
the output end of the first activation layer is connected with the input end of the second full-connection layer and is used for increasing the sparsity of the output characteristic diagram parameters of the first full-connection layer;
the output end of the second full-connection layer is connected with the input end of the second activation layer and is used for expanding the feature map output by the first activation layer;
and the second activation layer is used for normalizing the feature map output by the second full-connection layer.
In one embodiment of the present invention, the pooling layer is an average pooling layer; the size of the input feature map of the average pooling layer is c×h×w, and the output dimension is c×1×1; wherein c is the number of channels, h the height, and w the width of the input feature map of the average pooling layer.
In one embodiment of the present invention, the dimension of the output feature map of the first full-connection layer is

(c/N)×1×1,

wherein c is the number of channels of the input feature map of the first full-connection layer, and N is the compression ratio used when the output feature maps of the first full-connection layer are fused.
In one embodiment of the present invention, the first activation layer is a rectified linear unit (ReLU) activation layer, and the dimension of the output feature map of the second full-connection layer is c×1×1; wherein c is the number of channels of the second full-connection layer output feature map.
In one embodiment of the present invention, the second activation layer is an S-type (Sigmoid) activation layer, and the dimension of its output feature map is c×1×1; wherein c is the number of channels of the second activation layer output feature map.
In one embodiment of the present invention, constructing a feature enhancement convolution module according to the feature calibration sub-network includes:
constructing a convolution sub-network;
and constructing the feature enhancement convolution module according to the convolution sub-network and the feature calibration sub-network.
In one embodiment of the invention, constructing a convolutional sub-network includes:
constructing a convolution layer and a third activation layer;
and connecting the output end of the convolution layer with the input end of the third activation layer to construct the convolution sub-network.
In one embodiment of the present invention, constructing an image super-resolution reconstruction model according to the feature enhanced convolution module includes:
constructing a direct-connection convolutional neural network according to a plurality of characteristic enhancement convolutional modules;
and constructing the image super-resolution reconstruction model according to the direct-connection convolutional neural network and a residual error learning bypass, wherein the residual error learning bypass is used for adding the input characteristic diagram and the output characteristic diagram of the direct-connection convolutional neural network point to point.
In one embodiment of the present invention, training the image super-resolution reconstruction model includes:
selecting a training sample set and a test sample set;
and training the image super-resolution reconstruction model according to the training sample set.
Compared with the prior art, the invention has the beneficial effects that:
the network architecture adopts a residual error learning method in the super-resolution reconstruction process of the image, and particularly, the method selectively enhances and suppresses the extracted features of the feature map, so that the reconstructed image has higher peak signal-to-noise ratio and structural similarity, false information in the reconstructed image is avoided, and sharper visual effect and vivid detail reduction capability are obtained.
Drawings
FIG. 1 is a flowchart of a feature enhancement-based image super-resolution reconstruction method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a feature enhancement convolution module of the feature enhancement-based image super-resolution reconstruction method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an image super-resolution reconstruction model of an image super-resolution reconstruction method based on feature enhancement according to an embodiment of the present invention;
FIG. 4 is an original image of an embodiment of the present invention;
fig. 5a-5d are the images output after reconstruction by bicubic interpolation, SRCNN, VDSR and the method of the present invention, respectively.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but embodiments of the present invention are not limited thereto.
Example 1
Referring to fig. 1, fig. 2 and fig. 3, fig. 1 is a flowchart of a feature enhancement-based image super-resolution reconstruction method according to an embodiment of the present invention; fig. 2 is a schematic structural diagram of a feature enhancement convolution module of the feature enhancement-based image super-resolution reconstruction method according to an embodiment of the present invention; fig. 3 is a schematic structural diagram of an image super-resolution reconstruction model of an image super-resolution reconstruction method based on feature enhancement according to an embodiment of the present invention. As shown in fig. 1, a feature enhancement-based image super-resolution reconstruction method includes: constructing a characteristic calibration network; constructing a characteristic enhancement convolution module according to the characteristic calibration network; constructing an image super-resolution reconstruction model according to the characteristic enhancement convolution module; training the image super-resolution reconstruction model; and acquiring a reconstructed image according to the trained super-resolution reconstruction model and the original image.
Preferably, as shown in fig. 2, the feature calibration network includes: the device comprises a pooling layer, a first full-connection layer, a first activation layer, a second full-connection layer and a second activation layer; the output end of the pooling layer is connected with the input end of the first full-connection layer and is used for compressing the input characteristic diagram; the output end of the first full-connection layer is connected with the input end of the first activation layer and is used for carrying out weighted fusion on the compressed feature images; the output end of the first activation layer is connected with the input end of the second full-connection layer and is used for increasing the sparsity of the weighted and fused characteristic diagram parameters and accelerating the convergence process; the output end of the second full-connection layer is connected with the input end of the second activation layer and is used for expanding the feature map output by the first activation layer; and the second activation layer is used for normalizing the feature map output by the second full-connection layer.
Preferably, the pooling layer is used to compress the input feature map: on the one hand this reduces the feature map and thus simplifies the computational complexity; on the other hand it compresses the features and extracts the main ones. The pooling layer adopted in this embodiment is an average pooling layer, which represents each region of the feature map by the average value of the feature over that region; that is, the average value over a region of the feature map is computed and used as that region's pooled value.
Preferably, the input feature map of the average pooling layer has a size of c×h×w, and the output dimension is c×1×1; where c is the number of channels of the feature map, h the height of the feature map, and w the width of the feature map.
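As an illustration (a sketch added here, not part of the patent), this squeeze step can be reproduced with global average pooling in PyTorch; the tensor sizes follow the c×h×w to c×1×1 description above:

```python
import torch
import torch.nn.functional as F

# A batch of c x h x w feature maps; global average pooling represents each
# channel's h x w plane by its mean, producing a c x 1 x 1 descriptor.
x = torch.randn(1, 64, 32, 32)          # (batch, c=64, h=32, w=32)
squeezed = F.adaptive_avg_pool2d(x, 1)  # (batch, 64, 1, 1)
print(squeezed.shape)
```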
Preferably, the first full-connection layer is a fully connected layer with a compression ratio of N. It performs a weighted fusion of the input feature values, compresses them in dimension, discards unnecessary information during the compression, and outputs the fused feature values; that is, it performs weighted fusion on the feature map output by the pooling layer. The dimension of its output feature map is

(c/N)×1×1,

wherein c is the number of channels of the input feature map of the first full-connection layer and N is the compression ratio used when the features are fused.
Preferably, the first activation layer is a rectified linear unit (ReLU) activation layer, i.e. it applies the ReLU function, which may be expressed as:

f(x)=max(0,x),

wherein x is a calibration value of the output feature map of the first full-connection layer. The first activation layer increases the sparsity of the parameters of the first full-connection layer's output feature map; this removes redundant data from the feature map while retaining its features to the greatest extent, thereby accelerating the convergence process.
Preferably, the dimension of the output feature map of the second full-connection layer is c×1×1, wherein c is the number of channels of that output feature map. The second full-connection layer expands the dimension of the input feature map and outputs an unactivated, smooth feature map.
Preferably, the second activation layer is an S-type (Sigmoid) activation layer, which applies the Sigmoid function:

f(y) = 1 / (1 + e^(-y)),

wherein y is a calibration value of the output feature map of the second full-connection layer and e is the base of the natural logarithm. The input of this layer is the unactivated, smooth feature map and its output is the final feature map, of dimension c×1×1, where c is the number of channels of the feature map.
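Putting the five layers together, the feature calibration sub-network described above can be sketched in PyTorch as follows. The class and argument names are ours, and the compression ratio N = 16 is an assumed example value; the patent does not fix N:

```python
import torch
import torch.nn as nn

class FeatureCalibration(nn.Module):
    """Sketch of the feature calibration sub-network described above:
    average pooling -> full connection (compress by N) -> ReLU ->
    full connection (expand back to c) -> Sigmoid."""

    def __init__(self, channels: int, compression: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                      # c x h x w -> c x 1 x 1
        self.fc1 = nn.Linear(channels, channels // compression)  # compress to c/N
        self.relu = nn.ReLU(inplace=True)                        # sparsify parameters
        self.fc2 = nn.Linear(channels // compression, channels)  # expand back to c
        self.sigmoid = nn.Sigmoid()                              # normalize to (0, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = self.pool(x).view(b, c)
        w = self.sigmoid(self.fc2(self.relu(self.fc1(s))))
        return w.view(b, c, 1, 1)   # one calibration weight per channel

x = torch.randn(2, 64, 32, 32)
weights = FeatureCalibration(64)(x)
print(weights.shape)
```

The Sigmoid at the end guarantees every per-channel weight lies in (0, 1), which is what allows the later multiplication to suppress as well as enhance feature maps.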
Preferably, constructing a convolution sub-network includes: constructing a convolution layer and a third activation layer; and connecting the output end of the convolution layer with the input end of the third activation layer. The convolution layer uses kernels of size w×h = 3×3; the number of kernels is 64, the stride is 1, and the edge padding is 1. The convolution kernel is the template used to perform the convolution operation, the stride is the distance the kernel moves at each convolution step, and the edge padding prevents the convolved feature map from differing in size from the input image. The third activation layer is a ReLU activation layer, and its output is the output of the convolution sub-network.
Preferably, after the convolution sub-network is constructed, the feature enhancement convolution module is constructed from the convolution sub-network and the feature calibration sub-network: the output end of the convolution sub-network is connected to the input end of the feature calibration sub-network, and the output of the convolution sub-network is multiplied by the output of the feature calibration sub-network, yielding the feature enhancement convolution module. The feature calibration sub-network in the module can selectively enhance or suppress each feature map output by the convolution sub-network.
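A sketch of the feature enhancement convolution module under the same assumptions as before: the convolution sub-network (64 kernels of 3×3, stride 1, padding 1, then ReLU) followed by channel-wise multiplication with the calibration sub-network's output. Class names and the compression ratio are illustrative:

```python
import torch
import torch.nn as nn

class FeatureEnhancedConv(nn.Module):
    """Sketch of the feature enhancement convolution module: a convolution
    sub-network whose output is multiplied channel-wise by the feature
    calibration sub-network's weights."""

    def __init__(self, channels: int = 64, compression: int = 16):
        super().__init__()
        self.conv = nn.Sequential(        # convolution sub-network
            nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
        )
        self.calibrate = nn.Sequential(   # feature calibration sub-network
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, channels // compression),
            nn.ReLU(inplace=True),
            nn.Linear(channels // compression, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.conv(x)
        w = self.calibrate(features)[:, :, None, None]
        return features * w   # selectively enhance / suppress each feature map

x = torch.randn(1, 64, 48, 48)
print(FeatureEnhancedConv()(x).shape)
```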
Preferably, as shown in fig. 3, the feature enhancement convolution modules are connected in sequence: the output end of the first module is connected to the input end of the second module, and so on, until the output end of the (M-1)-th module is connected to the input end of the M-th module, forming a direct-connection convolutional neural network, where M is a natural number greater than 1. The input end of the first module is the input end of the direct-connection convolutional neural network, and the output end of the M-th module is its output end. Preferably, M has a value of 10.
Preferably, as shown in fig. 3, a bypass for residual error learning is introduced on the basis of the direct-connection convolutional neural network to form an image super-resolution reconstruction model based on feature-enhanced deep learning, wherein the bypass for residual error learning is used for performing point-to-point addition on a feature map input into the direct-connection convolutional neural network and a feature map output by the direct-connection convolutional neural network.
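The chained modules plus the residual bypass can be sketched as below. The head and tail 3×3 convolutions that map between the image channels and the 64-channel feature space are our assumption; the patent text only specifies the M = 10 module chain and the point-to-point addition of input and output:

```python
import torch
import torch.nn as nn

def enhanced_block(c: int = 64, n: int = 16) -> nn.Module:
    """One feature enhancement convolution module (conv sub-network whose
    output is scaled channel-wise by the calibration sub-network)."""
    class Block(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(c, c, 3, 1, 1), nn.ReLU(inplace=True))
            self.cal = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(c, c // n), nn.ReLU(inplace=True),
                nn.Linear(c // n, c), nn.Sigmoid())
        def forward(self, x):
            f = self.conv(x)
            return f * self.cal(f)[:, :, None, None]
    return Block()

class SuperResolutionNet(nn.Module):
    def __init__(self, img_channels: int = 1, c: int = 64, m: int = 10):
        super().__init__()
        self.head = nn.Conv2d(img_channels, c, 3, 1, 1)  # assumed lifting conv
        self.body = nn.Sequential(*[enhanced_block(c) for _ in range(m)])
        self.tail = nn.Conv2d(c, img_channels, 3, 1, 1)  # assumed projection conv
    def forward(self, x):
        # residual bypass: point-to-point addition of the input feature map
        # and the direct-connection network's output
        return x + self.tail(self.body(self.head(x)))

lr_img = torch.randn(1, 1, 40, 40)  # e.g. a bicubic-upscaled low-resolution image
print(SuperResolutionNet()(lr_img).shape)
```

Because all convolutions use padding 1, the output has the same spatial size as the input, so the residual addition is well defined.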
Preferably, training the image super-resolution reconstruction model includes: selecting a training sample set and a test sample set; training the image super-resolution reconstruction model according to the training sample set; and detecting the image super-resolution reconstruction model according to the test sample set.
Preferably, the training sample set is used to train the model parameters of the image super-resolution reconstruction model so that the model can accurately reconstruct the high-resolution image, and the test sample set is used to evaluate the performance of the trained reconstruction model. The training sample set is selected from the BSD400 data set and the test sample set from the Set12 data set, and the training sample set is used to train the image super-resolution reconstruction model according to the following method: using the existing Adam (A Method for Stochastic Optimization) optimizer with a batch size of 128, train 25 epochs at a learning rate of 0.001 and then 25 epochs at a learning rate of 0.0001, for 50 epochs in total. The test sample set is then input into the image super-resolution reconstruction model to check the performance of the trained model.
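A minimal sketch of this training schedule, assuming a PyTorch model and a loader of (low-resolution, high-resolution) pairs; the MSE loss is our assumption, since the text does not name the loss function, and the batch size of 128 is taken to be handled by the loader:

```python
import torch
import torch.nn as nn

def train(model: nn.Module, loader, device: str = "cpu") -> None:
    """Adam optimizer, 25 epochs at lr 0.001 then 25 at 0.0001 (50 total),
    minimizing a pixel-wise loss between reconstruction and ground truth."""
    criterion = nn.MSELoss()
    for lr_rate, epochs in [(1e-3, 25), (1e-4, 25)]:
        optimizer = torch.optim.Adam(model.parameters(), lr=lr_rate)
        for _ in range(epochs):
            for low_res, high_res in loader:
                optimizer.zero_grad()
                loss = criterion(model(low_res.to(device)),
                                 high_res.to(device))
                loss.backward()
                optimizer.step()

# Tiny smoke run with a placeholder model and a one-batch synthetic loader.
model = nn.Conv2d(1, 1, 3, padding=1)
loader = [(torch.randn(4, 1, 8, 8), torch.randn(4, 1, 8, 8))]
train(model, loader)
```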
Preferably, a reconstructed image is obtained according to the trained super-resolution reconstruction model and the original image, namely the original image is input into the trained super-resolution reconstruction model, and the super-resolution reconstructed image is output after the processing of the reconstruction model.
The image super-resolution reconstruction method based on feature enhancement provided by the invention constructs a super-resolution data model, trains the super-resolution model, and finally acquires a reconstructed image through the super-resolution data model.
Example two
Referring to fig. 4 and figs. 5a-5d, fig. 4 is an original image according to an embodiment of the present invention; figs. 5a-5d are the images output after reconstruction by bicubic interpolation, SRCNN, VDSR and the method of the present invention, respectively. On the basis of the above embodiment, this embodiment compares image reconstruction by the method provided by the invention with the existing reconstruction methods. Specifically, bicubic interpolation, SRCNN, VDSR and the method of the present invention are each used to super-resolve the low-resolution image of fig. 4 with a scaling factor of 4. Fig. 5a is the image output after reconstruction by bicubic interpolation; fig. 5b is the image output by the SRCNN method; fig. 5c is the image output after reconstruction by the VDSR method; fig. 5d is the image output after reconstruction by the method of the present invention.
Preferably, as can be seen from the comparison of fig. 5a to 5d, the image reconstructed by the method of the present invention has more detail and clearer edges than the other reconstruction results.
Preferably, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used to quantitatively compare and evaluate the performance of the feature-enhancement-based image super-resolution reconstruction method against the existing bicubic interpolation, SRCNN and VDSR methods; the comparison and evaluation results are shown in the following table:
[Table: PSNR and SSIM of bicubic interpolation, SRCNN, VDSR and the proposed method; the numerical values appear only as an image in the source and are not recoverable here.]
as can be seen from the table above:
(1) The peak signal-to-noise ratio (PSNR) of the image reconstructed by the proposed reconstruction method is clearly higher than that of the bicubic interpolation, SRCNN and VDSR methods, which shows that the image reconstructed by the proposed method retains more image detail information.
(2) The structural similarity (SSIM) of the image after super-resolution reconstruction by the proposed method is clearly higher than the results of the bicubic interpolation, SRCNN and VDSR methods, which shows that the image reconstructed by the proposed method retains more of the structural characteristics of the original image.
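For reference, the two metrics can be computed as below. The PSNR follows its standard definition; the SSIM shown is the simplified single-window (global) form with the usual constants K1 = 0.01 and K2 = 0.03, whereas published results typically use a windowed variant, so this is an approximation:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB (standard definition)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else float(10 * np.log10(peak ** 2 / mse))

def ssim_global(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Simplified single-window SSIM with constants K1 = 0.01, K2 = 0.03."""
    x, y = ref.astype(np.float64), test.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

img = np.random.randint(0, 256, (32, 32))
print(psnr(img, img), ssim_global(img, img))  # identical images: inf, 1.0
```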
The reconstruction method provided by the invention has better reconstruction effect, clearer edges and clearer detail information in the image, and can reserve the structural information such as edges, details and the like of the original image to a greater extent.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (8)

1. The image super-resolution reconstruction method based on feature enhancement is characterized by comprising the following steps of:
constructing a characteristic calibration network;
constructing a convolution sub-network;
constructing the feature enhancement convolution module according to the convolution sub-network and the feature calibration sub-network;
the constructing the feature enhancement convolution module according to the convolution sub-network and the feature calibration sub-network includes: connecting the output end of the convolution sub-network with the input end of the feature calibration sub-network, and multiplying the output of the convolution sub-network with the output of the feature calibration sub-network to construct the feature enhancement convolution module; the feature calibration sub-network is used for selectively enhancing and suppressing each feature map output by the convolution sub-network;
constructing a direct-connection convolutional neural network according to a plurality of characteristic enhancement convolutional modules;
constructing the image super-resolution reconstruction model according to the direct-connection convolutional neural network and a residual error learning bypass, wherein the residual error learning bypass is used for adding an input characteristic image and an output characteristic image of the direct-connection convolutional neural network point to point;
training the image super-resolution reconstruction model;
and acquiring a reconstructed image according to the trained super-resolution reconstruction model and the original image.
2. The image super-resolution reconstruction method according to claim 1, wherein the feature calibration sub-network comprises: a pooling layer, a first full-connection layer, a first activation layer, a second full-connection layer and a second activation layer; wherein:
the output end of the pooling layer is connected with the input end of the first full-connection layer and is used for compressing the input characteristic diagram;
the output end of the first full-connection layer is connected with the input end of the first activation layer and is used for carrying out weighted fusion on the feature images output by the pooling layer;
the output end of the first activation layer is connected with the input end of the second full-connection layer and is used for increasing the sparsity of the output characteristic diagram parameters of the first full-connection layer;
the output end of the second full-connection layer is connected with the input end of the second activation layer and is used for expanding the feature map output by the first activation layer;
and the second activation layer is used for normalizing the feature map output by the second full-connection layer.
3. The image super-resolution reconstruction method according to claim 2, wherein the pooling layer is an average pooling layer, the size of the input feature map of the average pooling layer is c×h×w, and the output dimension is c×1×1; wherein c is the number of channels, h the height, and w the width of the input feature map of the average pooling layer.
4. The image super-resolution reconstruction method according to claim 2, wherein the dimension of the output feature map of the first full-connection layer is
(c/N)×1×1,
Wherein c is the channel number of the output feature map of the first full-connection layer, and N is the compression ratio when the output feature map of the first full-connection layer is fused.
5. The image super-resolution reconstruction method according to claim 2, wherein the first activation layer is a rectified linear unit activation layer, and the dimension of the second full-connection layer output feature map is c×1×1; wherein c is the number of channels of the second full-connection layer output feature map.
6. The image super-resolution reconstruction method according to claim 2, wherein the second activation layer is an S-type activation layer, and the dimension of the output feature map is c×1×1; and c is the channel number of the second active layer output feature map.
7. The image super-resolution reconstruction method according to claim 1, wherein constructing a convolution sub-network comprises:
constructing a convolution layer and a third activation layer;
and connecting the output end of the convolution layer with the input end of the third activation layer to construct the convolution sub-network.
8. The image super-resolution reconstruction method according to claim 1, wherein training the image super-resolution reconstruction model comprises:
selecting a training sample set and a test sample set;
and training the image super-resolution reconstruction model according to the training sample set.
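Read together, claims 2–6 describe a squeeze-and-excitation-style channel attention path: average pooling to c×1×1, a fully connected layer compressing to c/N, ReLU, a fully connected layer expanding back to c, and a sigmoid normalization. A minimal numpy sketch of that path, under that reading (the weight names `w1`/`w2` and the final per-channel rescaling of the input are illustrative assumptions, not stated in the claims):

```python
import numpy as np

def feature_enhancement(x, w1, b1, w2, b2):
    """Channel-attention sketch per claims 2-6.

    x  : (c, h, w) input feature map
    w1 : (c//N, c) weights of the first (compressing) FC layer
    w2 : (c, c//N) weights of the second (expanding) FC layer
    """
    s = x.mean(axis=(1, 2))                     # average pooling: (c, h, w) -> (c,)
    z = np.maximum(w1 @ s + b1, 0.0)            # first FC + ReLU, dimension c -> c/N
    a = 1.0 / (1.0 + np.exp(-(w2 @ z + b2)))    # second FC + sigmoid, dimension c/N -> c
    return x * a[:, None, None]                 # assumed per-channel rescaling of the input

# Toy usage with c = 4, N = 2 and zero weights: sigmoid(0) = 0.5 per channel.
x = np.ones((4, 2, 2))
w1, b1 = np.zeros((2, 4)), np.zeros(2)
w2, b2 = np.zeros((4, 2)), np.zeros(4)
y = feature_enhancement(x, w1, b1, w2, b2)      # every element becomes 0.5
```

With zero weights both fully connected layers output zero, so the sigmoid gate is 0.5 for every channel and the input map is uniformly halved; trained weights would instead weight channels unequally, which is the feature-enhancement effect the claims describe.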
CN201810581039.5A 2018-06-07 2018-06-07 Image super-resolution reconstruction method based on feature enhancement Active CN109118428B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810581039.5A CN109118428B (en) 2018-06-07 2018-06-07 Image super-resolution reconstruction method based on feature enhancement

Publications (2)

Publication Number Publication Date
CN109118428A CN109118428A (en) 2019-01-01
CN109118428B true CN109118428B (en) 2023-05-19

Family

ID=64822962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810581039.5A Active CN109118428B (en) 2018-06-07 2018-06-07 Image super-resolution reconstruction method based on feature enhancement

Country Status (1)

Country Link
CN (1) CN109118428B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120019B (en) * 2019-04-26 2023-03-28 电子科技大学 Residual error neural network based on feature enhancement and image deblocking method
CN112508780A (en) * 2019-09-16 2021-03-16 中移(苏州)软件技术有限公司 Training method and device of image processing model and storage medium
CN114245126B (en) * 2021-11-26 2022-10-14 电子科技大学 Depth feature map compression method based on texture cooperation
CN115082317B (en) * 2022-07-11 2023-04-07 四川轻化工大学 Image super-resolution reconstruction method for attention mechanism enhancement

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077511A (en) * 2013-01-25 2013-05-01 西安电子科技大学 Image super-resolution reconstruction method based on dictionary learning and structure similarity
CN104991241A (en) * 2015-06-30 2015-10-21 西安电子科技大学 Target signal extraction and super-resolution enhancement processing method in strong clutter condition
CN106709875A (en) * 2016-12-30 2017-05-24 北京工业大学 Compressed low-resolution image restoration method based on combined deep network
CN106952229A (en) * 2017-03-15 2017-07-14 桂林电子科技大学 Image super-resolution reconstruction method based on a data-enhanced improved convolutional network
CN107464216A (en) * 2017-08-03 2017-12-12 济南大学 Medical image super-resolution reconstruction method based on multilayer convolutional neural networks
CN107633520A (en) * 2017-09-28 2018-01-26 福建帝视信息科技有限公司 Super-resolution image quality evaluation method based on a deep residual network
CN107977932A (en) * 2017-12-28 2018-05-01 北京工业大学 Face image super-resolution reconstruction method based on a discriminable-attribute-constrained generative adversarial network

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014145452A1 (en) * 2013-03-15 2014-09-18 Real Time Tomography, Llc Enhancements for displaying and viewing tomosynthesis images
US10648924B2 (en) * 2016-01-04 2020-05-12 Kla-Tencor Corp. Generating high resolution images from low resolution images for semiconductor applications
CN106228124B (en) * 2016-07-17 2019-03-08 西安电子科技大学 SAR image object detection method based on convolutional neural networks
CN107123089B (en) * 2017-04-24 2023-12-12 中国科学院遥感与数字地球研究所 Remote sensing image super-resolution reconstruction method and system based on depth convolution network
CN107240066A (en) * 2017-04-28 2017-10-10 天津大学 Image super-resolution reconstruction algorithm based on shallow and deep convolutional neural networks
CN107358576A (en) * 2017-06-24 2017-11-17 天津大学 Depth map super-resolution reconstruction method based on convolutional neural networks
CN107633482B (en) * 2017-07-24 2020-12-29 西安电子科技大学 Super-resolution reconstruction method based on sequence image
CN107563965A (en) * 2017-09-04 2018-01-09 四川大学 JPEG compressed image super-resolution reconstruction method based on convolutional neural networks
CN108122197B (en) * 2017-10-27 2021-05-04 江西高创保安服务技术有限公司 Image super-resolution reconstruction method based on deep learning

Similar Documents

Publication Publication Date Title
CN109886871B (en) Image super-resolution method based on channel attention mechanism and multi-layer feature fusion
CN109118428B (en) Image super-resolution reconstruction method based on feature enhancement
CN111047516B (en) Image processing method, image processing device, computer equipment and storage medium
CN111127374B Pan-sharpening method based on multi-scale dense network
CN114092330A (en) Lightweight multi-scale infrared image super-resolution reconstruction method
CN104123705B Contourlet-domain quality evaluation method for super-resolution reconstructed images
CN110288524B (en) Deep learning super-resolution method based on enhanced upsampling and discrimination fusion mechanism
WO2016127271A1 (en) An apparatus and a method for reducing compression artifacts of a lossy-compressed image
CN112801904B (en) Hybrid degraded image enhancement method based on convolutional neural network
US11887218B2 (en) Image optimization method, apparatus, device and storage medium
CN112767243B (en) Method and system for realizing super-resolution of hyperspectral image
CN115170915A (en) Infrared and visible light image fusion method based on end-to-end attention network
CN111951164A (en) Image super-resolution reconstruction network structure and image reconstruction effect analysis method
Liu et al. Learning cascaded convolutional networks for blind single image super-resolution
CN116486074A (en) Medical image segmentation method based on local and global context information coding
CN103020940B (en) Local feature transformation based face super-resolution reconstruction method
CN115984110A Swin Transformer-based second-order spectral attention hyperspectral image super-resolution method
CN115984117A Variational autoencoding image super-resolution method and system based on channel attention
CN112508786B (en) Satellite image-oriented arbitrary-scale super-resolution reconstruction method and system
CN112184552B (en) Sub-pixel convolution image super-resolution method based on high-frequency feature learning
CN116188272B (en) Two-stage depth network image super-resolution reconstruction method suitable for multiple fuzzy cores
CN117455770A (en) Lightweight image super-resolution method based on layer-by-layer context information aggregation network
CN116309221A (en) Method for constructing multispectral image fusion model
CN113205005B (en) Low-illumination low-resolution face image reconstruction method
CN115294222A (en) Image encoding method, image processing method, terminal, and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant