CN110009565A - Super-resolution image reconstruction method based on a lightweight network - Google Patents

Super-resolution image reconstruction method based on a lightweight network Download PDF

Info

Publication number
CN110009565A
CN110009565A CN201910272182.0A
Authority
CN
China
Prior art keywords
network
weight
convolution
super
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910272182.0A
Other languages
Chinese (zh)
Inventor
杜娟
范赐恩
邹炼
魏文澜
周紫玉
田胜
沈家蔚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201910272182.0A priority Critical patent/CN110009565A/en
Publication of CN110009565A publication Critical patent/CN110009565A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4046 Scaling the whole image or part thereof using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4053 Super resolution, i.e. output image resolution higher than sensor resolution

Abstract

The invention discloses a super-resolution image reconstruction method based on a lightweight network. The method comprises two main modules: lightweight network design and network quantization. The lightweight module improves the original EDSR network structure with the ShuffleNet unit designed by the invention, simplifying the structure, greatly reducing the number of network parameters, and easing storage pressure. The quantization module consists of three parts, network pruning, weight sharing, and Huffman coding; in combination, the three parts quantize the network parameters and change the encoding scheme, sharply compressing the parameter count and increasing computation speed. The invention improves the network structure on the basis of an existing image super-resolution network and optimizes it with several deep-compression methods, effectively preserving reconstruction quality while achieving few parameters, fast processing, and good portability.

Description

Super-resolution image reconstruction method based on a lightweight network
Technical field
The present invention relates to the fields of computer vision and image super-resolution reconstruction, and more particularly to a super-resolution image reconstruction method based on a lightweight network structure and parameter quantization.
Background art
Super-resolution image reconstruction refers to the process of recovering a high-resolution image from a single low-resolution image or an image sequence.
Compared with a low-resolution image, a high-resolution image generally has higher pixel density, richer texture detail, and higher reliability. In practice, however, constrained by factors such as the acquisition equipment and environment, the network transmission medium and bandwidth, and the image degradation model itself, we usually cannot directly obtain an ideal high-resolution image with sharp edges and no blocking artifacts.
The most direct way to increase image resolution is to improve the optical hardware of the acquisition system, but this approach is constrained by manufacturing processes that are hard to improve substantially and by very high manufacturing costs. Consequently, techniques that achieve image super-resolution from the software and algorithm side have become a hot research topic in image processing, computer vision, and several other fields.
However, most of the super-resolution neural networks built with deep learning are too bulky or demand excessive computing resources, so lightweight super-resolution reconstruction methods have become a research focus.
Several patents on lightweight super-resolution reconstruction already exist (including granted and published invention patents), for example:
1) Chinese invention patent application No. CN201810638253.X, "Super-resolution image reconstruction method based on recursive residual network". That invention trains the neural network with local residual learning rather than the global residual learning used in VDSR, and introduces a recursive structure in the residual unit. However, the method still uses a residual network, which is deep and has a large number of parameters; moreover, residual networks are suited to high-level computer vision problems, whereas super-resolution is a low-level vision problem.
2) Chinese invention patent application No. CN201810535634.5, "Super-resolution reconstruction method based on an improved dense convolutional neural network". That invention applies the idea of the dense convolutional network (Dense Convolutional Network, DenseNet) to single-frame super-resolution and improves the network structure on the basis of DenseNet, reducing some parameters; however, the method has high memory usage and heavy computation, and is unsuitable for ordinary computers or mobile devices.
With the development of deep learning, computer vision has advanced rapidly. Many lightweight super-resolution networks exist, but they still have shortcomings such as large parameter counts, high memory usage, and heavy computation, leaving substantial room for improvement. The innovation of the present invention is to redesign the network structure and to combine several quantization methods, namely pruning, weight sharing, and Huffman coding, so as to shrink the network, reduce computational complexity, and remove redundancy, allowing the network to run on mobile devices or in real time.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a super-resolution image reconstruction method based on a lightweight network.
The present invention is designed on the basis of the EDSR model (Enhanced Deep Residual Networks, hereafter EDSR). The technical solution comprises the following steps:
Step 1: replace the two 3 × 3 convolution kernels in each ResBlock (residual block) of the EDSR super-resolution model with the ShuffleNet unit designed here, obtaining an improved network model, wherein the ShuffleNet unit consists of four parts: a 1 × 1 GConv (pointwise group convolution), a channel shuffle, a 3 × 3 DWConv (depthwise separable convolution), and a second 1 × 1 GConv.
Step 2: apply weight-parameter compression training to the network model improved in step 1, comprising the following sub-steps.
Step 2.1: prune the connections whose weight absolute value in the weight matrix is below a threshold, obtaining the sparse matrix after pruning.
Step 2.2: quantize the weights in the sparse matrix obtained in step 2.1, completing value sharing across all weights.
Step 2.3: encode the weight data with Huffman coding.
Step 3: pass the image requiring super-resolution reconstruction through the trained network model above, obtaining the enlarged reconstructed image.
Further, the concrete processing in the ShuffleNet unit of step 1 is as follows.
First perform a 1 × 1 GConv. Suppose the previous layer outputs N feature maps, i.e. channel count = N. Split the channels into 3 parts, so the 1 × 1 GConv is divided into 3 groups, each handling its N/3 channels, and perform the convolution per group. After each group finishes, stack the group outputs as the output channels of this layer; once the new feature maps are obtained, apply batch normalization and the ReLU activation function.
Next perform the channel shuffle: divide the new feature maps into g groups of n channels, giving g × n output channels; use reshape to convert them to shape (g, n), transpose to (n, g), then flatten evenly and split back into g groups as the input of the next layer.
Then perform the 3 × 3 DWConv: convolve each of the in_channels 3 × 3 kernels with the feature map of the corresponding input channel, then apply out_channels 1 × 1 kernels to obtain out_channels feature maps and fuse them, followed by batch normalization.
Finally, fuse channels with an identity mapping for adaptation, then perform a 1 × 1 GConv with batch normalization.
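As an illustrative sketch (not code from the patent), the reshape-transpose channel shuffle described above can be written in PyTorch as follows; the function name `channel_shuffle` and the tiny test tensor are hypothetical:

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    # Reshape (B, g*n, H, W) to (B, g, n, H, W), swap the group and
    # per-group channel axes, then flatten back, which interleaves
    # channels across the convolution groups.
    b, c, h, w = x.shape
    n = c // groups
    return x.view(b, groups, n, h, w).transpose(1, 2).reshape(b, c, h, w)

# With 6 channels in 2 groups, channel order [0..5] becomes [0, 3, 1, 4, 2, 5].
x = torch.arange(6, dtype=torch.float32).view(1, 6, 1, 1)
shuffled = channel_shuffle(x, groups=2)
```

The transpose is what lets information flow between groups in the next grouped convolution.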
Further, the pruning process in step 2.1 is: prune with a threshold a, cutting every connection whose weight absolute value is below a, then retrain the network model. If the resulting new network model does not degrade image quality, raise the threshold to 10*a; if quality then drops sharply, perform a stepwise scan of the threshold over the range from 10*a down to a, with step a, until the largest threshold that keeps image quality unchanged is found; that threshold is then fine-tuned.
Further, in the sparse matrix after pruning in step 2.1, the weight index is changed to store the position relative to the previous effective weight: each subsequent element stores its index difference from the previous nonzero element (subject to a span threshold), and the index differences are stored with a fixed number of bits. The span threshold is set to 8 in convolutional layers and 5 in fully connected layers.
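The relative-index storage can be sketched as below. This is an assumed reading of the scheme (matching the Deep Compression convention the patent cites): when a gap between nonzeros exceeds the span threshold, filler entries with value 0 bridge it so every stored gap fits in the fixed bit width; `encode_relative` is a hypothetical helper:

```python
def encode_relative(flat_indices, values, max_span):
    # Store each nonzero weight as (gap from the previous nonzero, value)
    # so the gap fits in a fixed small number of bits. When a gap exceeds
    # max_span, insert filler entries with value 0 to bridge it.
    out, prev = [], -1
    for idx, val in zip(flat_indices, values):
        gap = idx - prev
        while gap > max_span:
            out.append((max_span, 0.0))  # filler entry, value 0
            gap -= max_span
        out.append((gap, val))
        prev = idx
    return out

# Nonzeros at flat positions 3 and 15 with span threshold 8: the gap of 12
# is bridged by one filler entry.
encoded = encode_relative([3, 15], [0.5, -0.25], max_span=8)
```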
Further, the specific implementation of step 2.2 is as follows.
Step 2.2.1: first space the quantization outputs evenly between the maximum and minimum of the weights to obtain the initial k-means centroids. The formula is as follows, where n is the number of quantization bits:
C_k(0) = w_min + k * (w_max - w_min) / (2^n - 1)
where w_min is the smallest weight in the sparse matrix, w_max is the largest weight in the sparse matrix, k is the index of the k-th centroid, k ∈ [0, 2^n), and C_k(0) is the computed initial value of the k-th centroid.
Then determine the quantization thresholds with the k-means function, i.e. determine which quantization output replaces each weight; the weights within one cluster share a single value (the centroid value).
Step 2.2.2: first perform a normal forward and backward pass and wait for the pytorch framework to generate the gradient matrix, then fine-tune the k-means cluster centroids. Fine-tuning sums the gradients of all weights belonging to the same cluster, multiplies by the learning rate, and subtracts the result from the centroid:
C_k(t) = C_k(t-1) - lr * Σ_{w_ij ∈ cluster k} grad(w_ij)
where C_k(t) is the result after the t-th fine-tuning step, lr is the learning rate, cluster k is the set of all weights assigned to the k-th cluster, grad(w) is the gradient of weight w, i and j index the sparse matrix, w_ij is the element in row i and column j of the sparse matrix, and the initial value C_k(0) of the fine-tuning is the cluster centroid output by k-means.
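The two formulas above can be sketched numerically as follows (an illustrative NumPy reimplementation, not the patent's sklearn/pytorch code; function names and the toy inputs are hypothetical):

```python
import numpy as np

def init_centroids(w_min, w_max, n_bits):
    # Evenly spaced initial centroids between the smallest and largest
    # weight: C_k(0) = w_min + k * (w_max - w_min) / (2**n_bits - 1).
    k = np.arange(2 ** n_bits)
    return w_min + k * (w_max - w_min) / (2 ** n_bits - 1)

def finetune_centroids(centroids, assignments, grads, lr):
    # One fine-tuning step: subtract lr times the summed gradient of all
    # weights assigned to each cluster from that cluster's centroid.
    updated = centroids.copy()
    for k in range(len(centroids)):
        updated[k] -= lr * grads[assignments == k].sum()
    return updated

c0 = init_centroids(-1.0, 1.0, n_bits=2)  # 4 centroids: -1, -1/3, 1/3, 1
c1 = finetune_centroids(np.array([0.0, 1.0]),
                        assignments=np.array([0, 0, 1]),
                        grads=np.array([0.5, 0.5, 1.0]),
                        lr=0.1)
```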
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1) The invention redesigns the network structure, using ShuffleNet units to replace the two 3 × 3 convolution kernels in each ResBlock (residual block), which greatly reduces the parameter count.
2) The invention uses pruning: while preserving quality, weights with little influence are set to zero, which both reduces the parameter count and speeds up computation.
3) The invention uses weight sharing, turning the original sparse matrix into a sparse matrix plus a lookup table: where the sparse matrix originally stored a weight w, it now stores the number k of the cluster to which w belongs, and the cluster number k needs fewer bits than the weight w, achieving further compression.
4) The invention uses Huffman coding, which encodes symbols according to their frequency of occurrence and removes some redundancy.
Brief description of the drawings
Fig. 1 is the flow chart of the invention.
Fig. 2 is the structure diagram of the ShuffleNet unit.
Fig. 3 is the overall network structure diagram.
Fig. 4 is the flow chart of pruning.
Specific embodiments
The present invention is based mainly on the EDSR super-resolution model and, considering that the deficiencies of the prior art leave super-resolution network structures far too large, provides a super-resolution image reconstruction method based on a lightweight network. The invention can reduce the model parameter count to about 1/10 of the EDSR model without loss of image quality.
The evaluation criterion for image quality is the peak signal-to-noise ratio (PSNR). When the image is enlarged 2×, a PSNR above 34 dB is considered acceptable; when the image is enlarged 4×, a PSNR above 28 dB is considered acceptable.
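The PSNR criterion can be sketched as follows (a standard computation, not code from the patent; the 2 × 2 toy images are hypothetical):

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    ref = reference.astype(np.float64)
    rec = reconstructed.astype(np.float64)
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.array([[100, 100], [100, 100]], dtype=np.uint8)
rec = np.array([[101, 99], [102, 98]], dtype=np.uint8)
value = psnr(ref, rec)  # MSE = 2.5, roughly 44.15 dB
```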
Fig. 1 shows the overall flow of the invention; the process is elaborated below.
Step 1: replace the two 3 × 3 convolution kernels in each ResBlock (residual block) of the EDSR super-resolution model with the designed ShuffleNet unit, obtaining the improved network model; the structure of the ShuffleNet unit is shown in Fig. 2.
First perform a 1 × 1 GConv. Suppose the previous layer outputs N feature maps, i.e. channel count = N, and split the channels into 3 parts. The 1 × 1 GConv is then divided into 3 groups, each handling its N/3 channels, and the convolution is performed per group; after each group finishes, the group outputs are stacked as the output channels of this layer, and once the new feature maps are obtained, batch normalization and the ReLU activation function are applied.
Next perform the channel shuffle: divide the new feature maps into g groups of n channels, giving g × n output channels; use reshape to convert them to shape (g, n), transpose to (n, g), then flatten evenly and split back into g groups as the input of the next layer.
Then perform the 3 × 3 DWConv: convolve each of the in_channels 3 × 3 kernels with the feature map of the corresponding input channel, then apply out_channels 1 × 1 kernels to obtain out_channels feature maps and fuse them, followed by batch normalization. Finally, fuse channels with an identity mapping for adaptation, then perform a 1 × 1 GConv with batch normalization.
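The DWConv stage (a per-channel 3 × 3 convolution followed by a 1 × 1 pointwise convolution) can be sketched in PyTorch as below; the channel sizes 6 and 12 are arbitrary illustrative values, not taken from the patent:

```python
import torch
import torch.nn as nn

# Depthwise 3x3 convolution (groups = in_channels, one kernel per channel)
# followed by a 1x1 pointwise convolution that mixes channels.
in_ch, out_ch = 6, 12
dw = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch, bias=False)
pw = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

x = torch.randn(1, in_ch, 8, 8)
y = pw(dw(x))

# 6*1*3*3 + 12*6*1*1 = 126 weights, versus 12*6*3*3 = 648 for a plain 3x3 conv.
params = sum(p.numel() for p in [*dw.parameters(), *pw.parameters()])
```

The parameter saving relative to a full 3 × 3 convolution is the main reason the unit is lightweight.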
Step 2: apply weight-parameter compression training to the network model improved in step 1, comprising the following sub-steps.
Step 2.1: prune the network model improved in step 1 to obtain a sparse matrix. The pruning flow is shown in Fig. 4 and is implemented as follows.
First input the network model and a threshold, and set every weight whose absolute value is below the threshold to 0, meaning that connection is cut; in the later fine-tuning, the gradient of that connection is also set to 0, i.e. it no longer participates in training. Suppose pruning is performed with threshold 0.01: connections below 0.01 are cut, and the network is then retrained. If the resulting new network does not degrade image quality, the threshold is raised to 0.1; if quality then drops sharply, a stepwise scan of the threshold is performed over the range from 0.1 down to 0.01 with step 0.01, until the largest threshold that keeps image quality essentially unchanged is found, and that threshold is then fine-tuned. Because of the pruning, the weight matrix finally obtained becomes a sparse matrix.
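The thresholding step can be sketched as follows (an illustrative reimplementation, not the patent's training code; `prune_by_threshold` and the toy weights are hypothetical):

```python
import torch

def prune_by_threshold(weight: torch.Tensor, threshold: float) -> torch.Tensor:
    # Zero every weight whose absolute value falls below the threshold and
    # return the binary mask; during retraining the same mask would be
    # applied to the gradients so pruned connections stay cut.
    mask = (weight.abs() >= threshold).float()
    weight.data.mul_(mask)
    return mask

w = torch.tensor([[0.005, -0.30], [0.02, -0.008]])
mask = prune_by_threshold(w, threshold=0.01)  # cuts 0.005 and -0.008
```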
For further compression, the absolute index of each weight is no longer stored; instead each element stores its position relative to the previous effective weight, i.e. its index difference from the previous nonzero element (subject to a span threshold), which compresses the number of bytes used by the indices. The index differences are stored with a fixed number of bits; the span threshold is set to 8 in convolutional layers and 5 in fully connected layers. The sparse matrix is compressed in the compressed sparse row (CSR) or compressed sparse column (CSC) format, which needs 2a + n + 1 storage units in total, where a is the number of nonzero elements and n is the number of rows or columns.
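The 2a + n + 1 count can be verified with a minimal CSR encoder (an illustrative sketch; `to_csr` and the 3 × 3 example matrix are hypothetical):

```python
def to_csr(dense):
    # CSR stores three arrays: the a nonzero values, their a column
    # indices, and n + 1 row pointers, i.e. 2a + n + 1 numbers in total.
    values, cols, indptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                cols.append(j)
        indptr.append(len(values))
    return values, cols, indptr

m = [[0, 0.5, 0], [0, 0, 0], [0.2, 0, 0.3]]
values, cols, indptr = to_csr(m)
storage = len(values) + len(cols) + len(indptr)  # a=3, n=3: 2*3 + 3 + 1 = 10
```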
Step 2.2: quantize the weights obtained from the retraining of step 2.1, completing value sharing across all weights.
The quantization process is divided into two stages, quantization and fine-tuning: the quantization stage is implemented with the k-means of sklearn, and the fine-tuning with pytorch itself. Quantization here means specifying a set of values from which all weights must be chosen, i.e. completing value sharing across all weights. The process comprises the following steps.
First space the quantization outputs evenly between the maximum and minimum of the weights to obtain the initial k-means centroids. The formula is as follows, where n is the number of quantization bits, so that there are 2^n centroids:
C_k(0) = w_min + k * (w_max - w_min) / (2^n - 1)
where w_min is the smallest weight in the sparse matrix, w_max is the largest weight in the sparse matrix, k is the index of the k-th centroid, k ∈ [0, 2^n), and C_k(0) is the computed initial value of the k-th centroid.
Then determine the quantization thresholds with the k-means function, i.e. determine which quantization output replaces each weight; the weights within one cluster share a single value (the centroid value). Then fine-tune the k-means centroids: first perform a normal forward and backward pass, during which the pytorch framework automatically generates the gradient matrix; once the gradient matrix has been generated, fine-tune the cluster centroids by summing the gradients of all weights belonging to the same cluster, multiplying by the learning rate, and subtracting the result from the centroid:
C_k(t) = C_k(t-1) - lr * Σ_{w_ij ∈ cluster k} grad(w_ij)
where C_k(t) is the result after the t-th fine-tuning step, lr is the learning rate, cluster k is the set of all weights assigned to the k-th cluster, grad(w) is the gradient of weight w, i and j index the sparse matrix, w_ij is the element in row i and column j of the sparse matrix, and the initial value C_k(0) of the fine-tuning is the cluster centroid output by k-means.
After quantization is complete, i.e. when fine-tuning ends (the mark of the end being that the training set, the DIV2K dataset obtained from the network, has been fully trained), the original sparse matrix becomes a sparse matrix plus a lookup table: where the sparse matrix originally stored a weight w, it now stores the cluster number k to which w belongs, and the cluster number k needs fewer bits than the weight w, achieving the goal of compression. The lookup table is indexed by cluster number, with the cluster centroid C_k (the quantization output) as its value. Restoring the matrix thus becomes reading the cluster number from the sparse matrix and then looking up the corresponding value in the lookup table.
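The lookup step can be sketched in one NumPy indexing operation (an illustrative example; the 3-entry codebook and the cluster numbers are hypothetical values):

```python
import numpy as np

codebook = np.array([-0.4, 0.0, 0.7])     # per-cluster centroids C_k
cluster_ids = np.array([[2, 0], [1, 2]])  # small integer cluster numbers
                                          # stored instead of 32-bit floats
restored = codebook[cluster_ids]          # look each weight up by its cluster number
```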
Step 2.3: encode the weight data with Huffman coding. Because the distributions of the weights and of the weight indices are non-uniform and bimodal, Huffman coding can be used to process them. During computation, the required data are decoded from the Huffman-coded storage.
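A minimal Huffman-code builder illustrates the idea (an illustrative sketch, not the patent's encoder; the symbol stream of cluster indices is hypothetical):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    # Build a Huffman code from symbol frequencies: repeatedly merge the
    # two least frequent subtrees, prefixing '0'/'1' to their codewords,
    # so frequent symbols end up with shorter codes.
    freq = Counter(symbols)
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)  # tie-breaker so dicts are never compared
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        lo[2] = {s: "0" + c for s, c in lo[2].items()}
        hi[2] = {s: "1" + c for s, c in hi[2].items()}
        heapq.heappush(heap, [lo[0] + hi[0], tick, {**lo[2], **hi[2]}])
        tick += 1
    return heap[0][2]

# The most frequent cluster index gets the shortest codeword.
codes = huffman_code([0, 0, 0, 0, 1, 1, 2])
```

Encoding the stream with these codes costs 4·1 + 2·2 + 1·2 = 10 bits, versus 14 bits at a fixed 2 bits per symbol.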
Step 3: pass the image requiring super-resolution reconstruction through the trained network model to obtain the enlarged reconstructed image.
The above are the detailed steps of the invention. It should be understood that the parts not elaborated in this specification belong to the prior art. The invention proposes a new network structure and combines several quantization methods, namely pruning, weight sharing, and Huffman coding, achieving a smaller network, lower computational complexity, and removal of redundant content.
The specific embodiments described herein merely illustrate the spirit of the invention. Those skilled in the art may make various modifications, additions, or similar substitutions to the described embodiments without departing from the spirit of the invention or exceeding the scope of the appended claims.

Claims (5)

1. A super-resolution image reconstruction method based on a lightweight network, characterized by comprising the following steps:
Step 1: replace the two 3 × 3 convolution kernels in each ResBlock (residual block) of the EDSR super-resolution model with the designed ShuffleNet unit, obtaining an improved network model, wherein the ShuffleNet unit consists of four parts: a 1 × 1 GConv (pointwise group convolution), a channel shuffle, a 3 × 3 DWConv (depthwise separable convolution), and a second 1 × 1 GConv;
Step 2: apply weight-parameter compression training to the network model improved in step 1, comprising the following sub-steps:
Step 2.1: prune the connections whose weight absolute value in the weight matrix is below a threshold, obtaining the sparse matrix after pruning;
Step 2.2: quantize the weights in the sparse matrix obtained in step 2.1, completing value sharing across all weights;
Step 2.3: encode the weight data with Huffman coding;
Step 3: pass the image requiring super-resolution reconstruction through the trained network model above, obtaining the enlarged reconstructed image.
2. The super-resolution image reconstruction method based on a lightweight network according to claim 1, characterized in that the concrete processing in the ShuffleNet unit of step 1 is as follows:
first perform a 1 × 1 GConv; suppose the previous layer outputs N feature maps, i.e. channel count = N; split the channels into 3 parts, so the 1 × 1 GConv is divided into 3 groups, each handling its N/3 channels, and perform the convolution per group; then stack the group outputs as the output channels of this layer and, once the new feature maps are obtained, apply batch normalization and the ReLU activation function;
next perform the channel shuffle: divide the new feature maps into g groups of n channels, giving g × n output channels; use reshape to convert them to shape (g, n), transpose to (n, g), then flatten evenly and split back into g groups as the input of the next layer;
then perform the 3 × 3 DWConv: convolve each of the in_channels 3 × 3 kernels with the feature map of the corresponding input channel, then apply out_channels 1 × 1 kernels to obtain out_channels feature maps and fuse them, followed by batch normalization;
finally, fuse channels with an identity mapping for adaptation, then perform a 1 × 1 GConv with batch normalization.
3. The super-resolution image reconstruction method based on a lightweight network according to claim 1, characterized in that the pruning process in step 2.1 is: prune with a threshold a, cutting every connection whose weight absolute value is below a, then retrain the network model; if the resulting new network model does not degrade image quality, raise the threshold to 10*a; if quality then drops sharply, perform a stepwise scan of the threshold over the range from 10*a down to a, with step a, until the largest threshold that keeps image quality unchanged is found; that threshold is then fine-tuned.
4. The super-resolution image reconstruction method based on a lightweight network according to claim 3, characterized in that in the sparse matrix after pruning in step 2.1, the weight index is changed to store the position relative to the previous effective weight: each subsequent element stores its index difference from the previous nonzero element (subject to a span threshold), and the index differences are stored with a fixed number of bits, wherein the span threshold is set to 8 in convolutional layers and 5 in fully connected layers.
5. The super-resolution image reconstruction method based on a lightweight network according to any one of claims 1 to 4, characterized in that the specific implementation of step 2.2 is as follows:
Step 2.2.1: first space the quantization outputs evenly between the maximum and minimum of the weights to obtain the initial k-means centroids; the formula is as follows, where n is the number of quantization bits:
C_k(0) = w_min + k * (w_max - w_min) / (2^n - 1)
where w_min is the smallest weight in the sparse matrix, w_max is the largest weight in the sparse matrix, k is the index of the k-th centroid, k ∈ [0, 2^n), and C_k(0) is the computed initial value of the k-th centroid;
then determine the quantization thresholds with the k-means function, i.e. determine which quantization output replaces each weight, the weights within one cluster sharing a single value (the centroid value);
Step 2.2.2: first perform a normal forward and backward pass and wait for the pytorch framework to generate the gradient matrix, then fine-tune the k-means cluster centroids; fine-tuning sums the gradients of all weights belonging to the same cluster, multiplies by the learning rate, and subtracts the result from the centroid:
C_k(t) = C_k(t-1) - lr * Σ_{w_ij ∈ cluster k} grad(w_ij)
where C_k(t) is the result after the t-th fine-tuning step, lr is the learning rate, cluster k is the set of all weights assigned to the k-th cluster, grad(w) is the gradient of weight w, i and j index the sparse matrix, w_ij is the element in row i and column j of the sparse matrix, and the initial value C_k(0) of the fine-tuning is the cluster centroid output by k-means.
CN201910272182.0A 2019-04-04 2019-04-04 Super-resolution image reconstruction method based on a lightweight network Pending CN110009565A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910272182.0A CN110009565A (en) 2019-04-04 2019-04-04 Super-resolution image reconstruction method based on a lightweight network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910272182.0A CN110009565A (en) 2019-04-04 2019-04-04 Super-resolution image reconstruction method based on a lightweight network

Publications (1)

Publication Number Publication Date
CN110009565A (en) 2019-07-12

Family

ID=67170043

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910272182.0A Pending CN110009565A (en) Super-resolution image reconstruction method based on a lightweight network

Country Status (1)

Country Link
CN (1) CN110009565A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110516602A (en) * 2019-08-28 2019-11-29 杭州律橙电子科技有限公司 A kind of public traffice passenger flow statistical method based on monocular camera and depth learning technology
CN110619603A (en) * 2019-08-29 2019-12-27 浙江师范大学 Single image super-resolution method for optimizing sparse coefficient
CN110662069A (en) * 2019-09-20 2020-01-07 中国科学院自动化研究所南京人工智能芯片创新研究院 Image generation method based on rapid GAN
CN110992267A (en) * 2019-12-05 2020-04-10 北京科技大学 Abrasive particle identification method based on DPSR and Lightweight CNN
CN111263163A (en) * 2020-02-20 2020-06-09 济南浪潮高新科技投资发展有限公司 Method for realizing depth video compression framework based on mobile phone platform
CN112329766A (en) * 2020-10-14 2021-02-05 北京三快在线科技有限公司 Character recognition method and device, electronic equipment and storage medium
CN112364925A (en) * 2020-11-16 2021-02-12 哈尔滨市科佳通用机电股份有限公司 Deep learning-based rolling bearing oil shedding fault identification method
CN112927174A (en) * 2019-12-06 2021-06-08 阿里巴巴集团控股有限公司 Method and device for image processing and image training to channel shuffling
CN113177445A (en) * 2021-04-16 2021-07-27 新华智云科技有限公司 Video mirror moving identification method and system
CN115239557A (en) * 2022-07-11 2022-10-25 河北大学 Light-weight X-ray image super-resolution reconstruction method
WO2022262660A1 (en) * 2021-06-15 2022-12-22 华南理工大学 Pruning and quantization compression method and system for super-resolution network, and medium
CN115565068A (en) * 2022-09-30 2023-01-03 宁波大学 Full-automatic detection method for high-rise building glass curtain wall damage based on light-weight deep convolutional neural network
CN117237190A (en) * 2023-09-15 2023-12-15 中国矿业大学 Lightweight image super-resolution reconstruction system and method for edge mobile equipment
CN112927174B (en) * 2019-12-06 2024-05-03 阿里巴巴集团控股有限公司 Image processing, image training channel shuffling method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108898087A (en) * 2018-06-22 2018-11-27 Tencent Technology (Shenzhen) Co., Ltd. Training method, apparatus, device and storage medium for a face keypoint localization model
CN109064396A (en) * 2018-06-22 2018-12-21 Southeast University Single-image super-resolution reconstruction method based on a deep component learning network
CN109191392A (en) * 2018-08-09 2019-01-11 Fudan University Semantic-segmentation-driven image super-resolution reconstruction method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BEE LIM et al.: "Enhanced Deep Residual Networks for Single Image Super-Resolution", Computer Vision and Pattern Recognition *
SONG HAN et al.: "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding", Computer Vision and Pattern Recognition *
XIANGYU ZHANG et al.: "ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices", Computer Vision and Pattern Recognition *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110516602A (en) * 2019-08-28 2019-11-29 Hangzhou Lvcheng Electronic Technology Co., Ltd. Public transit passenger flow statistics method based on a monocular camera and deep learning
CN110619603B (en) * 2019-08-29 2023-11-10 Zhejiang Normal University Single-image super-resolution method with optimized sparse coefficients
CN110619603A (en) * 2019-08-29 2019-12-27 Zhejiang Normal University Single-image super-resolution method with optimized sparse coefficients
CN110662069A (en) * 2019-09-20 2020-01-07 Nanjing Artificial Intelligence Chip Innovation Institute, Institute of Automation, Chinese Academy of Sciences Image generation method based on a fast GAN
CN110992267A (en) * 2019-12-05 2020-04-10 University of Science and Technology Beijing Abrasive particle identification method based on DPSR and a lightweight CNN
CN112927174A (en) * 2019-12-06 2021-06-08 Alibaba Group Holding Ltd. Methods and apparatuses for image processing, image training and channel shuffling
WO2021110147A1 (en) * 2019-12-06 2021-06-10 Alibaba Group Holding Ltd. Methods and apparatuses for image processing, image training and channel shuffling
CN112927174B (en) * 2019-12-06 2024-05-03 Alibaba Group Holding Ltd. Methods and apparatuses for image processing, image training and channel shuffling
CN111263163A (en) * 2020-02-20 2020-06-09 Jinan Inspur Hi-Tech Investment and Development Co., Ltd. Method for implementing a deep video compression framework on a mobile phone platform
CN112329766A (en) * 2020-10-14 2021-02-05 Beijing Sankuai Online Technology Co., Ltd. Character recognition method and device, electronic equipment and storage medium
CN112364925A (en) * 2020-11-16 2021-02-12 Harbin Kejia General Electromechanical Co., Ltd. Deep-learning-based rolling bearing oil-shedding fault identification method
CN112364925B (en) * 2020-11-16 2021-06-04 Harbin Kejia General Electromechanical Co., Ltd. Deep-learning-based rolling bearing oil-shedding fault identification method
CN113177445A (en) * 2021-04-16 2021-07-27 Xinhua Zhiyun Technology Co., Ltd. Video camera-movement recognition method and system
WO2022262660A1 (en) * 2021-06-15 2022-12-22 South China University of Technology Pruning and quantization compression method and system for super-resolution network, and medium
CN115239557B (en) * 2022-07-11 2023-10-24 Hebei University Lightweight X-ray image super-resolution reconstruction method
CN115239557A (en) * 2022-07-11 2022-10-25 Hebei University Lightweight X-ray image super-resolution reconstruction method
CN115565068A (en) * 2022-09-30 2023-01-03 Ningbo University Fully automatic detection method for glass curtain wall damage on high-rise buildings based on a lightweight deep convolutional neural network
CN115565068B (en) * 2022-09-30 2023-04-18 Ningbo University Fully automatic detection method for glass curtain wall damage on high-rise buildings based on a lightweight deep convolutional neural network
CN117237190A (en) * 2023-09-15 2023-12-15 China University of Mining and Technology Lightweight image super-resolution reconstruction system and method for edge mobile devices
CN117237190B (en) * 2023-09-15 2024-03-15 China University of Mining and Technology Lightweight image super-resolution reconstruction system and method for edge mobile devices

Similar Documents

Publication Publication Date Title
CN110009565A (en) Super-resolution image reconstruction method based on a lightweight network
CN113362223B (en) Image super-resolution reconstruction method based on an attention mechanism and a two-channel network
CN109726799A (en) Compression method for deep neural networks
CN110225350B (en) Natural image compression method based on a generative adversarial network
CN110349230A (en) Point cloud geometry compression method based on a deep autoencoder
CN110111256A (en) Image super-resolution reconstruction method based on a residual distillation network
CN107240136B (en) Static image compression method based on a deep learning model
CN110351568A (en) Video in-loop filter based on a deep convolutional network
CN108921789A (en) Super-resolution image reconstruction method based on a recursive residual network
CN112653899A (en) Live-streaming video feature extraction method based on joint-attention ResNeSt in complex scenes
CN112967178B (en) Image conversion method, device, equipment and storage medium
CN109544451A (en) Image super-resolution reconstruction method and system based on progressive iterative back-projection
CN114202017A (en) Lightweighting method for a SAR-to-optical image mapping model based on a conditional generative adversarial network
CN110097177A (en) Network pruning method based on a pseudo-Siamese network
CN106658003A (en) Quantization method for a dictionary-learning-based image compression system
CN106851102A (en) Video image stabilization method based on bundled geodesic path optimization
CN112734867A (en) Multispectral image compression method and system based on spatial-spectral feature separation and extraction
CN112699844A (en) Image super-resolution method based on a multi-scale residual hierarchical densely-connected network
CN115100039B (en) Lightweight image super-resolution reconstruction method based on deep learning
CN111461978A (en) Image super-resolution restoration method with stage-wise resolution enhancement based on an attention mechanism
CN112288630A (en) Super-resolution image reconstruction method and system based on an improved wide-and-deep neural network
CN109118428A (en) Image super-resolution reconstruction method based on feature enhancement
CN115955563A (en) Satellite-ground joint multispectral remote sensing image compression method and system
CN113902658B (en) RGB-to-hyperspectral image reconstruction method based on a dense multi-scale network
CN110782396B (en) Lightweight image super-resolution reconstruction network and reconstruction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2019-07-12