CN110232653A - Fast lightweight dense residual network for super-resolution reconstruction - Google Patents

Fast lightweight dense residual network for super-resolution reconstruction

Info

Publication number
CN110232653A
CN110232653A
Authority
CN
China
Prior art keywords
network
image
layer
residual
layers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811515913.1A
Other languages
Chinese (zh)
Inventor
李素梅 (Li Sumei)
石永莲 (Shi Yonglian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University Marine Technology Research Institute
Original Assignee
Tianjin University Marine Technology Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University Marine Technology Research Institute filed Critical Tianjin University Marine Technology Research Institute
Priority to CN201811515913.1A priority Critical patent/CN110232653A/en
Publication of CN110232653A publication Critical patent/CN110232653A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

A fast lightweight dense residual network for super-resolution reconstruction: a two-channel deep residual network (FLSR) based on convolutional networks, in which the deep channel learns the high-frequency texture of the image and the shallow channel learns its low-frequency information. To accelerate the network's convergence, residual connections are added to the structure; these also pass the image detail captured by early convolutional layers directly to later layers, which benefits reconstruction quality. Dense connections are additionally used, which help to counteract vanishing gradients and improve model performance. The network improves image-quality metrics such as peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and the information fidelity criterion (IFC) while reducing model parameters and computational complexity, raising reconstruction speed and making the method practical for real-world use.

Description

Fast lightweight dense residual network for super-resolution reconstruction
Technical field
The invention belongs to the field of image super-resolution reconstruction and relates to improvements and applications of image super-resolution methods, in particular to a fast lightweight dense residual network for super-resolution reconstruction.
Background art
With rapid social and technological development, the ways in which people obtain information have diversified and continue to grow. More than 70% of that information is acquired visually, and images and video are currently the main carriers of visual information; as a result, image-processing research and technology have become ever more widespread.
Image resolution refers to the number of pixels an image contains per inch and indicates how finely scene detail can be resolved. Resolution can be divided into spatial, temporal, spectral, and radiometric resolution; in the content below, "resolution" always means spatial resolution. The higher the spatial resolution, the denser the image's pixels, the richer its texture detail, and the clearer it appears to the human eye. Many fields now require high-resolution images for tasks such as target recognition and tracking, analysis, and detection. In video surveillance, limitations of the hardware itself keep captured images from being sharp, limiting the recovery of detail and degrading monitoring quality; in remote sensing, only high-resolution images allow reliable identification of ground-object information; in medicine, high-resolution images help doctors analyze, diagnose, and track conditions, improving the accuracy of treatment.
There are two common ways to raise image resolution. The first is hardware: improving the performance of the imaging device, which is costly. The second is software: converting the low-resolution images captured by hardware into corresponding high-resolution images through improved algorithms, i.e., image super-resolution (Super Resolution, SR) reconstruction. This approach effectively avoids the limitations of hardware technology, reduces equipment cost, and has strong practical value, and it has become the usual way to improve image resolution.
A large number of SR methods have been proposed, including interpolation-based methods, reconstruction-based methods, and example-based (learning) methods. Interpolation-based methods, including bilinear, bicubic, and nearest-neighbor interpolation, are simple and fast, but their results lack detail and the reconstructed images are blurry. Reconstruction-based methods, such as iterative back-projection and maximum a posteriori estimation, can quickly recover high-resolution pictures when enough prior information is available. Learning-based methods, including neighbor embedding and sparse representation, mainly learn the mapping between HR and LR images and use the learned mapping to reconstruct a high-resolution picture from a low-resolution one. Because the first two families often perform poorly at large scale factors, essentially all recent super-resolution (SR) methods are of the third kind, learning prior knowledge from LR/HR image pairs.
In recent years, inspired by many computer-vision tasks, deep networks have achieved major breakthroughs in the SR field. The basic principle of deep learning is to build many hidden layers on top of the traditional artificial neural network; through successive transformations, features are mapped from the original space into new spaces that yield better data representations, allowing more complex functional relations to be fitted and ultimately improving classification or regression accuracy. In recent years deep learning has surpassed traditional algorithms in image processing, object detection, speech recognition, and face recognition, and it has also proved highly effective for image super-resolution. Thanks to its powerful feature-extraction ability, a deep-learning algorithm can produce super-resolved images of higher fidelity, so this research has great scientific and practical value.
Although deep networks perform well, most of them still have many drawbacks. The current design trend of making networks deeper or wider demands large amounts of computation and memory, making such models hard to apply directly in practice. Moreover, most methods based on convolutional neural networks (CNNs) use a single shallow or deep channel for super-resolution and require pre-processing, which may introduce new noise; a shallow channel easily loses the picture's high-frequency information, while a deep network converges slowly during reconstruction and is prone to exploding or vanishing gradients.
Summary of the invention
To solve the above problems, the fast lightweight dense residual super-resolution network of the present invention is a two-channel deep residual network (FLSR) based on convolutional networks: the deep channel mainly learns the high-frequency texture of the image, while the shallow channel learns its low-frequency information. To accelerate the network's convergence, residual connections are added to the structure; they also pass the image detail of early convolutional layers directly to later convolutional layers, benefiting reconstruction quality. Dense connections are additionally used, which help to counteract vanishing gradients and improve model performance. The network improves image-quality metrics such as peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and the information fidelity criterion (IFC) while reducing model parameters and computational complexity, raising reconstruction speed and making the method practical for real-world use.
To perform single-image super-resolution reconstruction, we design a two-channel convolutional network structure built on dense residual blocks. The network has lightweight parameter counts and computational complexity, as shown in Figure 1; the network architecture is shown in Figure 2. The whole FLSR structure contains no pooling layers or fully connected layers, only convolutional and deconvolutional layers. The structure consists of a 3-layer shallow channel and a 29-layer deep channel; at the end of the whole network, one convolutional layer fuses the reconstruction information of the two channels to obtain the HR output. The shallow channel mainly restores the overall contour of the image and retains its raw information; the deep channel mainly learns the high-frequency texture and comprises four parts: a feature-extraction layer, a nonlinear-mapping layer, an up-sampling layer, and a multi-scale reconstruction layer. First, in the feature-extraction stage, alternating dense and residual connections in the deep channel accelerate the network's convergence. Second, the whole network up-samples the image directly with deconvolution, avoiding image pre-processing; this lowers the complexity of super-resolution reconstruction so that the network can be trained more efficiently. Finally, in the reconstruction stage, the deep channel adopts multi-scale reconstruction, extracting short- and long-range texture information simultaneously for image reconstruction. Furthermore, the enhanced model FLSR-G incorporates group convolution in the feature-extraction stage, greatly reducing the parameters and computational complexity of the network while keeping results at the same level; the improved feature-extraction block (Dense Block-G) is shown in Fig. 3(b).
The key to CNN-based single-image super-resolution is finding the mapping between LR and HR images. The deep channel, with its densely connected convolutional layers, continuously extracts and iterates over latent features, so it can accurately restore details such as edge information. The shallow channel only performs a simple up-sampling, preserving the overall contour of the original image; its nonlinear mapping uses only a single convolutional layer.
Let Y denote the original high-resolution image and X the LR picture obtained by down-sampling. The training set contains N pairs of high- and low-resolution images {(X_i, Y_i)}, i = 1, …, N, and the SR image reconstructed by the convolutional network is denoted F(X_i; θ). The mean square error (Mean Square Error, MSE) is used here as the network's loss function L, expressed as follows:

    L(θ) = (1/N) Σ_{i=1}^{N} || F(X_i; θ) − Y_i ||²
MSE is used as the loss function because it helps obtain a higher peak signal-to-noise ratio, PSNR (Peak Signal to Noise Ratio); PSNR is widely used to quantitatively evaluate image quality and reflects perceived quality to a certain extent.
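For illustration only (not part of the claimed network): the MSE loss above and the PSNR it drives can be sketched as follows. This is a minimal NumPy sketch, not the patent's Caffe implementation, and the function names are ours.

```python
import numpy as np

def mse_loss(sr, hr):
    # Mean square error between a reconstructed (SR) and a ground-truth (HR) image
    sr = np.asarray(sr, dtype=np.float64)
    hr = np.asarray(hr, dtype=np.float64)
    return float(np.mean((sr - hr) ** 2))

def psnr(sr, hr, peak=255.0):
    # Peak signal-to-noise ratio in dB; higher means the SR image is closer to HR
    m = mse_loss(sr, hr)
    return float("inf") if m == 0.0 else 10.0 * np.log10(peak ** 2 / m)
```

Minimizing MSE directly maximizes PSNR, which is why the two track each other in the evaluation tables.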
1 feature extraction
Image super-resolution reconstruction improves as deep convolutional neural networks grow deeper, but with increasing depth the common vanishing-gradient problem makes deep networks hard to train. We therefore add dense connections in the deep channel, maximizing the information flow through the whole network and reducing the vanishing gradients that deep networks easily suffer during training, so that high-frequency texture is reconstructed more effectively. In addition, residual connections are used in the deep channel to accelerate its convergence.
In the feature-extraction stage, the proposed network makes extensive use of dense and residual connections. Dense connections feed-forward-connect every pair of layers in the stage, so that each layer receives the feature maps of all preceding layers as input; this plays an important role in slowing vanishing gradients and makes the network easier to train. Meanwhile, residual connections create shortcuts between later convolutional layers and earlier ones; these connections let the signal back-propagate directly, reducing the loss of high-frequency information as it passes through successive convolutional layers in a deep CNN and compensating for that information within the network. Dense connectivity in convolutional neural networks was first proposed for image recognition; the dense block (Dense block) used here is shown in Fig. 3(a).
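For illustration only: the dense-plus-residual wiring described above can be sketched structurally. This is a toy NumPy sketch of the connectivity pattern, with stand-in "layers" instead of real convolutions; all names are ours.

```python
import numpy as np

def dense_residual_block(x, layers):
    # Dense wiring: each layer receives the concatenation (along the channel
    # axis) of the block input and every earlier layer's output.
    feats = [x]
    for layer in layers:
        feats.append(layer(np.concatenate(feats, axis=0)))
    # Residual wiring: the block input is added back to the block output,
    # giving gradients (and low-level detail) a short path through the block.
    return feats[-1] + x
```

The concatenation realizes the "all layers see all earlier feature maps" property; the final addition is the shortcut that lets detail from the block input bypass every intermediate layer.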
Feature extraction extracts (overlapping) patches from the original LR image X and represents each patch as a high-dimensional vector. These vectors form a set of feature maps whose number equals the dimension of the vector. The feature-extraction step comprises 11 convolutional layers, each configured with 64 filters. A convolutional layer can be expressed as:

    F_l(X) = W_l * F_{l−1}(X) + B_l

where l indexes the l-th convolutional layer, W_l denotes the filters of layer l, F_l(X) is the output feature map, and * denotes the convolution operation. W_l contains n_l filters of kernel size f_l × f_l. Each convolutional layer is followed by an activation function. SRCNN uses the rectified linear unit ReLU(x) = max(0, x) as its activation. ReLU saturates hard for x < 0; since its derivative is 1 for x > 0, ReLU keeps gradients undiminished there and thus avoids the vanishing-gradient problem to a certain extent. Here, PReLU is used instead: a parametric modification of ReLU in which the slope a of the negative half-axis is learnable. This improves the model's fitting ability near zero while retaining fast convergence, so we adopt PReLU as the activation throughout the structure. PReLU can be defined as the general activation function:

    PReLU(x) = max(0, x) + a · min(0, x)

F_{l−1}(X) is the input to the activation function on layer l, and the output of the activation can be described as:

    F_l(X) = PReLU(W_l * F_{l−1}(X) + B_l)

where F_l(X) is the final output feature map and B_l is the bias of layer l. To address the degradation problem, identity-mapping shortcut connections are applied to each group of feature-extraction layers; this residual structure also makes the network converge faster.
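For illustration only: the PReLU activation just defined can be written directly. This is a NumPy sketch in which the learnable negative-axis slope a appears as a plain argument rather than a trained parameter.

```python
import numpy as np

def prelu(x, a=0.25):
    # PReLU(x) = max(0, x) + a * min(0, x); unlike ReLU, the negative axis
    # keeps a small learnable slope a instead of saturating at zero.
    x = np.asarray(x, dtype=np.float64)
    return np.maximum(0.0, x) + a * np.minimum(0.0, x)
```

Setting a = 0 recovers plain ReLU, which is why PReLU can be called a general activation function.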
2 Nonlinear Mapping layers
The nonlinear-mapping layer consists of 5 convolutional layers. Nonlinear mapping projects each high-dimensional vector nonlinearly onto another high-dimensional vector; each mapped vector is conceptually the representation of a high-resolution patch, and together these vectors form another set of feature maps. The network also up-samples the image with deconvolution, which effectively avoids the new noise that image pre-processing can introduce.
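For illustration only: deconvolution (transposed convolution) up-sampling can be sketched as zero-insertion followed by an ordinary convolution. This is a single-channel NumPy sketch with stride 2; the kernels and names are illustrative, not the network's learned filters.

```python
import numpy as np

def conv2d_same(x, k):
    # Plain 'same'-padded 2-D correlation of a single-channel image with kernel k
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def deconv_upsample(x, k, stride=2):
    # Transposed convolution viewed as: insert (stride - 1) zeros between
    # input pixels, then convolve; in the real network the kernel k is learned.
    h, w = x.shape
    up = np.zeros((h * stride, w * stride), dtype=np.float64)
    up[::stride, ::stride] = x
    return conv2d_same(up, k)
```

Because the up-sampling is a learned layer inside the network, no bicubic pre-processing of the LR input is needed, which is the design point the paragraph above makes.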
3 Fusion of the deep and shallow channels
In the network, the deep channel recovers the high-frequency detail of the high-resolution image, while the shallow channel recovers only the image's contour information. The features of the two channels are fused in the network's last convolutional layer, which has no activation function, and a 1×1 convolution performs the dimensionality reduction. The fusion layer can be formulated as:

    F_fused(X) = W_{1×1} * [F_deep(X), F_shallow(X)]

where F_fused(X) is the output high-resolution fusion feature map, W_{1×1} * denotes the 1×1 convolution operation, and [F_deep(X), F_shallow(X)] are the input feature maps of the two channels.
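For illustration only: a 1×1 fusion convolution amounts to a per-pixel weighted sum over the stacked channel maps. This NumPy sketch uses illustrative weights, not the network's learned 1×1 filter.

```python
import numpy as np

def fuse_1x1(features, w):
    # features: (C, H, W) stack of deep- and shallow-channel feature maps;
    # w: (C,) weights of a single 1x1 filter. A 1x1 convolution mixes channels
    # at each spatial position independently, reducing C maps to one fused map.
    return np.tensordot(w, features, axes=1)
```

This is why a 1×1 convolution can reduce dimensionality: it touches no spatial neighborhood, only the channel axis.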
4 image reconstructions
Unlike traditional reconstruction with a single-scale filter, the reconstruction stage of the network is multi-scale, so features at several scales can be obtained and used for reconstruction simultaneously. This operation aggregates the above high-resolution patch representations to generate the final high-resolution image, which is expected to be similar to the ground-truth high-resolution counterpart of the input LR image.
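For illustration only: multi-scale reconstruction — filtering the same signal with kernels of several sizes and aggregating the results — can be sketched in one dimension for clarity. The real network uses learned 2-D kernels; the averaging kernels here are illustrative.

```python
import numpy as np

def multiscale_reconstruct(x, kernels):
    # Apply filters with different receptive-field sizes to the same signal and
    # aggregate, so short- and long-range structure contribute simultaneously.
    return sum(np.convolve(x, k, mode="same") for k in kernels) / len(kernels)
```

The small kernel responds to short-range texture, the large kernel to long-range structure; aggregating them is the "short and long texture information" extraction the paragraph above describes.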
The fast lightweight dense residual network for super-resolution reconstruction uses dense residual blocks to extract large numbers of effective features and to speed convergence; it reconstructs the HR image with convolution kernels of different scales; and, further combined with group convolution, it effectively reduces network parameters and computational complexity, yielding the new network structure FLSR-G. The network achieves competitive results on the PSNR, SSIM, and IFC evaluation metrics, and its reconstruction time is far below that of current state-of-the-art methods such as DRRN and MemNet. A network of such lightweight construction can be widely used in practice.
Brief description of the drawings
Fig. 1 compares the performance, computational complexity, and parameter counts of different models;
Fig. 2 is a schematic diagram of the proposed FLSR structure;
Fig. 3 is a schematic diagram of the dense residual block and the enhanced dense residual block used in the structure.
Specific embodiment
The fast lightweight dense residual network for super-resolution reconstruction is divided into two channels. The shallow channel contains only 3 convolutional layers and one deconvolutional layer and restores the exterior contour information of the image; the main task of reconstructing the HR image is completed by the deep channel, which comprises four parts: feature extraction, nonlinear mapping, up-sampling, and multi-scale reconstruction. The concrete implementation of the network is described in detail below.
1 Datasets
1.1 Training dataset
We use the General-100 dataset, the 91-image dataset of Yang et al., and 200 images from the Berkeley Segmentation Dataset (BSD) as training data. General-100 contains 100 uncompressed bmp-format images and 91-image contains 91 pictures, so the raw dataset totals 391 pictures. In general, more data trains better models, so to make full use of the training data we apply 3 kinds of data augmentation: (1) rotating each picture by a set of fixed angles; (2) flipping each image horizontally; (3) scaling each image proportionally, with scaling factors 0.9, 0.8, 0.7, and 0.6. The final training dataset is therefore 40 times the original data, i.e., 40 × 391 = 15,640 training images.
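For illustration only: the geometric part of this augmentation (rotations and horizontal flips, giving 4 × 2 = 8 variants, which the scale factors multiply up to 40×) can be sketched as follows. Rotation angles of 0/90/180/270 degrees are our assumption, since the source elides the exact angles.

```python
import numpy as np

def geometric_augment(img):
    # 4 rotations x 2 flip states = 8 geometric variants of one training image.
    variants = []
    for k in range(4):
        r = np.rot90(img, k)           # rotate by k * 90 degrees (assumed angles)
        variants.append(r)
        variants.append(np.fliplr(r))  # horizontal flip of the rotated image
    return variants
```

Combined with the 5 scale versions of each image (original plus factors 0.9–0.6), each source picture yields 8 × 5 = 40 training images, matching the 40× figure above.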
1.2 Test dataset
For magnification factors k = 2, 3, and 4, peak signal-to-noise ratio (PSNR) and structural similarity (Structural Similarity Index, SSIM) serve as objective evaluation metrics, and model performance is assessed on 4 standard benchmark datasets: Set5, Set14, BSD100, and Urban100. Among these, Set5, Set14, and BSD100 contain natural scenes, while Urban100 contains challenging urban scenes with details across different frequency bands. LR/HR image pairs for the training and test datasets are obtained by down-sampling the original images with bicubic interpolation. Every color picture is converted to the YCbCr color space and only the Y channel is processed; the color components are up-scaled by bicubic interpolation.
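For illustration only: extracting the Y (luma) channel from an RGB picture, as done before training, can be sketched with the standard ITU-R BT.601 YCbCr conversion. This NumPy sketch assumes 8-bit RGB input in [0, 255]; the patent does not specify the exact conversion constants, so these are the conventional ones.

```python
import numpy as np

def rgb_to_y(rgb):
    # Y channel of the BT.601 YCbCr transform for 8-bit RGB in [0, 255];
    # the resulting luma lies in the nominal range [16, 235].
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0
```

Training only on Y is common in SR work because the human eye is far more sensitive to luma detail than to chroma, which is why the Cb/Cr components can simply be enlarged by bicubic interpolation.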
Compared with networks of similar depth, the fast lightweight dense residual network for super-resolution reconstruction has computational complexity and parameter counts an order of magnitude lower, as shown in Figure 1. In reconstruction quality it achieves competitive results on the image-quality metrics PSNR, SSIM, and IFC; it is especially effective at the 2× up-sampling task, reaching a PSNR/SSIM of 37.78/0.9597 on the Set5 dataset and 31.35/0.9195 on Urban100. The remaining experimental results are shown in Table 1. In addition, the reconstruction time of the network is far below that of current state-of-the-art methods such as DRRN and MemNet, as shown in Table 2; a network of such lightweight construction can be widely used in practice.
2 Implementation details
To prepare training samples, the original HR images are first down-sampled with decimation factors S (S = 2, 3, 4) using bicubic interpolation to generate the corresponding LR images. Each LR image is then cropped into a series of fixed-size sub-images to form the training set, and the corresponding HR test images are cropped to the matching size. Considering the size of the training pictures, and to improve training, LR image blocks and the corresponding HR blocks are cropped at random as the network's training input. The model is trained with Caffe; the batch size during training is 64, the initial learning rate is set to 0.01, and it is divided by 10 in the fine-tuning stage. The filters of the convolutional layers are randomly initialized from a Gaussian distribution, and the network is optimized by stochastic gradient descent. Training the entire model takes about half a day on one NVIDIA GeForce GTX 1080 GPU with 12 GB of memory. Considering the trade-off between training time, computational complexity, and reconstruction quality, group convolution is added to the first four convolutional layers of the two blocks of the original network, giving eight group convolutions in total; the modified block is shown in Fig. 3(b), and the structure is denoted FLSR-G. In the test phase, the open-source code of the algorithms we compare against is run on an i5-7500 CPU with 16 GB RAM and an NVIDIA GeForce GTX 1080Ti GPU with 12 GB of memory. Because the official MemNet and DRRN implementations overflow GPU memory, all of their test jobs are transferred to a machine with a 3.5 GHz Intel E5-2367 CPU with 64 GB RAM and an NVIDIA TITAN X (Pascal) GPU with 12 GB of memory, on which the training and test jobs complete smoothly.

Claims (2)

1. A fast lightweight dense residual network for super-resolution reconstruction, characterized in that: the entire FLSR structure contains no pooling layers or fully connected layers, only convolutional and deconvolutional layers; the structure consists of a 3-layer shallow channel and a 29-layer deep channel, and at the end of the whole network one convolutional layer fuses the reconstruction information of the two channels to obtain the HR output; the shallow channel is mainly used to restore the overall contour of the image and retain its raw information, while the deep channel mainly learns the high-frequency texture information of the image and comprises four parts: a feature-extraction layer, a nonlinear-mapping layer, an up-sampling layer, and a multi-scale reconstruction layer; first, in the feature-extraction stage, alternating dense and residual connections in the deep channel accelerate the network's convergence; second, the whole network up-samples the image directly with deconvolution, avoiding image pre-processing and reducing the complexity of super-resolution reconstruction, so that the network can be trained more efficiently; finally, in the reconstruction stage, the deep channel uses multi-scale reconstruction to extract short- and long-range texture information simultaneously for image reconstruction; furthermore, the enhanced model FLSR-G incorporates group convolution in the feature-extraction stage, greatly reducing the parameters and computational complexity of the network while keeping results at the same level.
2. The fast lightweight dense residual network for super-resolution reconstruction of claim 1, characterized in that the deep channel mainly comprises:
1 A feature-extraction layer
The feature-extraction stage makes extensive use of dense and residual connections; dense connections feed-forward-connect every pair of layers in the stage, so that each layer receives the feature maps of all preceding layers as input, playing an important role in slowing vanishing gradients and making the network easier to train; residual connections create shortcuts between later convolutional layers and earlier ones, and these connections let the signal back-propagate directly, reducing the loss of high-frequency information as it passes through successive convolutional layers in a deep convolutional neural network and compensating for that information within the network;
Feature extraction extracts a large number of patches from the original LR image X and represents each patch as a high-dimensional vector; these vectors form a set of feature maps whose number equals the dimension of the vector; the feature-extraction step comprises 11 convolutional layers, each configured with 64 filters; a convolutional layer can be expressed as:

    F_l(X) = W_l * F_{l−1}(X) + B_l

where l indexes the l-th convolutional layer, W_l denotes the filters of layer l, F_l(X) is the output feature map, and * denotes the convolution operation; W_l contains n_l filters of kernel size f_l × f_l; each convolutional layer is followed by an activation function; SRCNN uses the rectified linear unit ReLU(x) = max(0, x) as its activation, and ReLU saturates hard for x < 0; since its derivative is 1 for x > 0, ReLU keeps gradients undiminished there, thus avoiding the vanishing-gradient problem to a certain extent; here PReLU, a parametric modification of ReLU in which the slope a of the negative half-axis is learnable, is used as the activation, improving the model's fitting ability near zero while retaining fast convergence; PReLU can be defined as the general activation function:

    PReLU(x) = max(0, x) + a · min(0, x)

F_{l−1}(X) is the input to the activation function on layer l, and the output of the activation can be described as:

    F_l(X) = PReLU(W_l * F_{l−1}(X) + B_l)

where F_l(X) is the final output feature map and B_l is the bias of layer l; to address the degradation problem, identity-mapping shortcut connections are applied to each group of feature-extraction layers, and this residual structure also makes the network converge faster;
2 A nonlinear-mapping layer
The nonlinear-mapping layer consists of 5 convolutional layers; nonlinear mapping projects each high-dimensional vector nonlinearly onto another high-dimensional vector; each mapped vector is conceptually the representation of a high-resolution patch, and together these vectors form another set of feature maps; the network also up-samples the image with deconvolution, which effectively avoids the new noise that image pre-processing can introduce;
3 Fusion of the deep and shallow channels
In the network, the deep channel recovers the high-frequency detail of the high-resolution image, while the shallow channel recovers only the image's contour information; the features of the two channels are fused in the network's last convolutional layer, which has no activation function, and a 1×1 convolution performs the dimensionality reduction; the fusion layer can be formulated as:

    F_fused(X) = W_{1×1} * [F_deep(X), F_shallow(X)]

where F_fused(X) is the output high-resolution fusion feature map, W_{1×1} * denotes the 1×1 convolution operation, and [F_deep(X), F_shallow(X)] are the input feature maps;
4 image reconstructions
Multi-scale reconstruction is used in the network structure, so that features at multiple scales can be obtained and used for reconstruction simultaneously; this operation aggregates the above high-resolution patch representations to generate the final high-resolution image, which is expected to be similar to the ground-truth high-resolution counterpart of the input LR image.
CN201811515913.1A 2018-12-12 2018-12-12 Fast lightweight dense residual network for super-resolution reconstruction Pending CN110232653A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811515913.1A CN110232653A (en) 2018-12-12 2018-12-12 Fast lightweight dense residual network for super-resolution reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811515913.1A CN110232653A (en) 2018-12-12 2018-12-12 Fast lightweight dense residual network for super-resolution reconstruction

Publications (1)

Publication Number Publication Date
CN110232653A true CN110232653A (en) 2019-09-13

Family

ID=67861945

Country Status (1)

Country Link
CN (1) CN110232653A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110992265A (en) * 2019-12-02 2020-04-10 北京数码视讯科技股份有限公司 Image processing method and model, model training method and electronic equipment
CN111080533A (en) * 2019-10-21 2020-04-28 南京航空航天大学 Digital zooming method based on self-supervision residual error perception network
CN111091553A (en) * 2019-12-12 2020-05-01 哈尔滨市科佳通用机电股份有限公司 Method for detecting loss of blocking key
CN111161146A (en) * 2019-12-25 2020-05-15 大连理工大学 Coarse-to-fine single-image super-resolution reconstruction method
CN111311490A (en) * 2020-01-20 2020-06-19 陕西师范大学 Video super-resolution reconstruction method based on multi-frame fusion optical flow
CN111311488A (en) * 2020-01-15 2020-06-19 广西师范大学 Efficient super-resolution reconstruction method based on deep learning
CN111652170A (en) * 2020-06-09 2020-09-11 电子科技大学 Secondary radar signal processing method based on two-channel residual error deep neural network
CN111696035A (en) * 2020-05-21 2020-09-22 电子科技大学 Multi-frame image super-resolution reconstruction method based on optical flow motion estimation algorithm
CN111932460A (en) * 2020-08-10 2020-11-13 北京大学深圳医院 MR image super-resolution reconstruction method and device, computer equipment and storage medium
CN113763244A (en) * 2021-08-18 2021-12-07 济宁安泰矿山设备制造有限公司 Endoscope image super-resolution reconstruction method for intelligent pump cavity fault diagnosis
CN113793263A (en) * 2021-08-23 2021-12-14 电子科技大学 Parallel residual error network high-resolution image reconstruction method for multi-scale cavity convolution
CN114049251A (en) * 2021-09-01 2022-02-15 中国矿业大学 Fuzzy image super-resolution reconstruction method and device for AI video analysis
CN114463189A (en) * 2020-11-09 2022-05-10 中国科学院沈阳自动化研究所 Image information analysis modeling method based on dense residual UNet
CN115601792A (en) * 2022-12-14 2023-01-13 长春大学(Cn) Cow face image enhancement method
EP4089640A4 (en) * 2020-01-07 2024-02-14 Raycan Technology Co., Ltd. (Suzhou) Image reconstruction method, apparatus, device, system, and computer readable storage medium
US11948279B2 (en) 2020-11-23 2024-04-02 Samsung Electronics Co., Ltd. Method and device for joint denoising and demosaicing using neural network
WO2024082796A1 (en) * 2023-06-21 2024-04-25 西北工业大学 Spectral cross-domain transfer super-resolution reconstruction method for multi-domain image

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107240066A (en) * 2017-04-28 2017-10-10 天津大学 Image super-resolution rebuilding algorithm based on shallow-layer and deep layer convolutional neural networks
CN107680044A (en) * 2017-09-30 2018-02-09 福建帝视信息科技有限公司 A kind of image super-resolution convolutional neural networks speed-up computation method
CN108734660A (en) * 2018-05-25 2018-11-02 上海通途半导体科技有限公司 A kind of image super-resolution rebuilding method and device based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Liu Cun et al., "Video image super-resolution reconstruction method based on convolutional neural networks", Application Research of Computers *
Wang Min et al., "Image super-resolution reconstruction based on an optimized convolutional neural network", Laser & Optoelectronics Progress *
Chen Shuzhen et al., "Image super-resolution algorithm using multi-scale convolutional neural networks", Journal of Signal Processing *

Similar Documents

Publication Publication Date Title
CN110232653A (en) The quick light-duty intensive residual error network of super-resolution rebuilding
CN112507997B (en) Face super-resolution system based on multi-scale convolution and receptive field feature fusion
CN110570353B (en) Super-resolution reconstruction method for generating single image of countermeasure network by dense connection
Engin et al. Cycle-dehaze: Enhanced cyclegan for single image dehazing
CN108537733B (en) Super-resolution reconstruction method based on multi-path deep convolutional neural network
An et al. TR-MISR: Multiimage super-resolution based on feature fusion with transformers
Zhao et al. Invertible image decolorization
CN110119780A (en) Based on the hyperspectral image super-resolution reconstruction method for generating confrontation network
Shen et al. Convolutional neural pyramid for image processing
CN106097253B (en) A kind of single image super resolution ratio reconstruction method based on block rotation and clarity
CN109035146A (en) A kind of low-quality image oversubscription method based on deep learning
CN111768340B (en) Super-resolution image reconstruction method and system based on dense multipath network
CN114841856A (en) Image super-pixel reconstruction method of dense connection network based on depth residual channel space attention
CN115880225A (en) Dynamic illumination human face image quality enhancement method based on multi-scale attention mechanism
CN116152120A (en) Low-light image enhancement method and device integrating high-low frequency characteristic information
CN112001843A (en) Infrared image super-resolution reconstruction method based on deep learning
CN117575915B (en) Image super-resolution reconstruction method, terminal equipment and storage medium
CN117274059A (en) Low-resolution image reconstruction method and system based on image coding-decoding
CN116739899A (en) Image super-resolution reconstruction method based on SAUGAN network
CN112163998A (en) Single-image super-resolution analysis method matched with natural degradation conditions
CN117474764B (en) High-resolution reconstruction method for remote sensing image under complex degradation model
Tang et al. An improved CycleGAN based model for low-light image enhancement
Chen et al. Attention-based broad self-guided network for low-light image enhancement
Tang et al. Structure-embedded ghosting artifact suppression network for high dynamic range image reconstruction
Xie et al. MRSCFusion: Joint residual Swin transformer and multiscale CNN for unsupervised multimodal medical image fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190913