CN106485661A - High-quality image magnification method - Google Patents

High-quality image magnification method

Info

Publication number
CN106485661A
CN106485661A
Authority
CN
China
Prior art keywords
image
network
data
ybicubic
yuv
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611010215.7A
Other languages
Chinese (zh)
Inventor
赵海宾
谢亚光
陈梅丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Arcvideo Technology Co ltd
Original Assignee
Hangzhou Arcvideo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Arcvideo Technology Co ltd filed Critical Hangzhou Arcvideo Technology Co ltd
Priority to CN201611010215.7A priority Critical patent/CN106485661A/en
Publication of CN106485661A publication Critical patent/CN106485661A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4007 Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/12 Indexing scheme involving antialiasing
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/10024 Color image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20172 Image enhancement details
    • G06T 2207/20192 Edge enhancement; Edge preservation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a high-quality image magnification method comprising an off-line learning method and an on-line processing method, and further discloses a high-quality image magnification system. Using deep learning based on convolutional neural networks, a non-linear mapping from blurred edges to sharp edges is learned from a large number of samples, so that images passed through the network have clear, sharp edges and are free of artifacts such as jagging. Noise is also added to the training samples so that the network acquires the ability to remove coding noise. Finally, an image enhancement algorithm further sharpens image edges without affecting flat regions, so edge sharpness is increased without amplifying the noise of flat areas, giving the user a better subjective impression.

Description

High-quality image magnification method
Technical field
The present invention relates to a high-quality image magnification method and system.
Background technology
Today's electronic devices increasingly support high-resolution display; in particular, televisions supporting 4K resolution are becoming more and more popular, allowing people to enjoy ultra-high-definition TV programmes.
However, the ecosystem of 4K content for practical applications is still immature and can even be said to be at an early stage; 4K video resources, especially TV programmes, remain scarce. In order to play 2K or lower-resolution video content on a 4K TV, the video must be magnified. Ordinary image magnification is essentially a low-pass filtering of the image. A low-pass filter passes low-frequency information and blocks high-frequency information, so high-frequency information is lost because of the filter and the image appears blurred.
Besides blurring, ordinary interpolation algorithms also suffer from artifacts and coding noise. Because ordinary magnification algorithms use interpolation, jagged artifacts appear at edges that are neither horizontal nor vertical. Because image content must be compressed for transmission, coding noise arises near image edges; after the image is magnified and enhanced, the coding noise becomes all the more visible, so the overall viewing experience after magnification is poor.
Content of the invention
The object of the present invention is to overcome the above shortcomings of the prior art and to provide a high-quality image magnification method and system.
To achieve the above object, in one aspect the present invention provides a high-quality image magnification method comprising an off-line learning method and an on-line processing method, the off-line learning method comprising the following steps:
collecting images and extracting image blocks from the images;
building a deep learning network from the image blocks and obtaining a reconstructed image;
establishing a cost function from the reconstructed image and solving for the required network parameters.
The on-line processing method comprises the following steps:
obtaining the YUV plane data of the image and computing Ybicubic;
feeding Ybicubic into the deep learning network and outputting Ysrcnn;
applying enhancement processing to Ysrcnn.
Preferably, the step of collecting images and extracting image blocks from the images specifically includes the following steps:
collecting images and converting them to YUV plane data;
extracting all image blocks {Xi} of the Y plane;
processing the Y plane to obtain Ynoise and extracting all image blocks {Yi}.
Preferably, in the step of building the deep learning network from the image blocks and obtaining the reconstructed image, the deep learning network is specifically built from the image blocks {Yi} and the reconstructed image is obtained.
Preferably, the step of building the deep learning network specifically comprises the following steps:
building an input layer from the input image blocks {Yi};
filtering the image blocks {Yi} to obtain first feature maps and build a convolutional layer, the operation being F1(Y) = max(0, W1 × Y + B1), where Y is the image block {Yi} data, W1 is a filter coefficient and B1 is a bias term;
recombining the first feature maps into second feature maps to build a non-linear mapping layer, the operation being F2(Y) = max(0, W2 × F1(Y) + B2), where Y is the image block {Yi} data, W2 is a filter coefficient and B2 is a bias term;
turning the second feature maps into the output image, the operation being F(Y) = W3 × F2(Y) + B3, where W3 is a filter coefficient and B3 is a bias term.
Preferably, in the step of establishing the cost function from the reconstructed image and solving for the required network parameters, the cost function is L(θ) = (1/n) Σi ||F(Yi; θ) − Xi||², where n is the total number of training samples, Xi is an image block {Xi}, Yi is an image block {Yi} and F(Yi; θ) is the output image; the parameter values θ are computed from the cost function by stochastic gradient descent.
Preferably, in the step of obtaining the YUV plane data of the image and computing Ybicubic, it is first determined whether the data of the image is YUV plane data; when the data of the image is not YUV plane data, it is converted to YUV plane data.
Preferably, in the step of obtaining the YUV plane data of the image and computing Ybicubic, the Y component is enlarged by a factor of 2 with a bicubic interpolation algorithm to obtain Ybicubic.
Preferably, in the step of feeding Ybicubic into the deep learning network and outputting Ysrcnn, Sobel edge detection is first performed on Ybicubic; when the edge strength is below a set threshold, Ysrcnn is output directly; when the edge strength exceeds the set threshold, the pixel value of Ysrcnn is computed as P + thre × (P − P1)/100, where P is the current pixel value, P1 is its Gaussian-blurred value, and thre is an integer between 0 and 100.
Preferably, the off-line learning method and the on-line processing method use the same network model.
In another aspect, the present invention provides a high-quality image magnification system comprising an off-line learning module and an on-line processing module, the off-line learning module being used to obtain the network parameters according to the off-line learning method, and the on-line processing module being used to obtain the output image data with those network parameters according to the on-line processing method.
According to the high-quality image magnification method and system provided by the present invention, deep learning based on convolutional neural networks is used to learn, from a large number of samples, a non-linear mapping from blurred edges to sharp edges, so that images passed through the network have clear, sharp edges and no artifacts such as jagging; noise is added to the training samples so that the network acquires the ability to remove coding noise; and finally an image enhancement algorithm further sharpens image edges without affecting flat regions, so that edge sharpness is increased without amplifying the noise of flat areas, giving the user a better subjective impression.
Description of the drawings
Fig. 1 is a schematic workflow diagram of the high-quality image enhancement system of one embodiment of the invention.
The realization of the object, functional characteristics and advantages of the invention will be further described below with reference to the embodiments and the accompanying drawing.
Specific embodiment
Embodiments of the invention are described in detail below.
One embodiment of the invention provides a high-quality image magnification method and system.
The high-quality image magnification method includes an off-line learning method and an on-line processing method. The on-line processing method is the video image magnification method actually used in the video transcoding system, and the network parameters used by the on-line processing method must be obtained through the off-line learning method. The off-line learning method and the on-line processing method use the same network model; the difference is that the off-line learning method learns all of the network parameters from calibrated input and output sources, whereas the on-line processing method uses those network parameters to compute the output.
The off-line learning method comprises three main steps: image block extraction, building the deep learning network, and training the deep learning network.
The image block extraction step is as follows:
collect images, read the image data and convert it to YUV plane format;
extract all image blocks {Xi} of size 24 × 24 from the Y plane according to an overlapping-scan rule, to serve as the output of the deep learning network. It should be specially noted that the block size is not limited, provided the input and output sizes are consistent; in theory any size may be used, but in order to capture image content well a size larger than 20 × 20 is usually taken, and this embodiment uses 24 × 24. This remark applies to all subsequent descriptions of image blocks;
down-sample the Y plane, then up-sample it back to its original size to obtain Yscale, and add Gaussian white noise to Yscale to obtain Ynoise; extract all image blocks {Yi} of size 24 × 24 from the Ynoise data according to the same overlapping-scan rule, to serve as the input of the deep learning network. This step establishes the correspondence between blurred, noisy image blocks and standard-definition image blocks.
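For illustration only, the following Python sketch shows one way the blurred, noisy input blocks {Yi} and the clean target blocks {Xi} described above could be generated. It assumes NumPy and OpenCV, the block size and scan stride follow the embodiment described later, the noise standard deviation of 0.3 is assumed to apply to Y data normalized to [0, 1] (the patent does not state the scale), and the function names are hypothetical rather than part of the patent.

    import cv2
    import numpy as np

    def extract_blocks(plane, block=24, stride=14):
        # Overlapping scan: collect every block x block patch at the given stride.
        h, w = plane.shape
        return np.stack([plane[r:r + block, c:c + block]
                         for r in range(0, h - block + 1, stride)
                         for c in range(0, w - block + 1, stride)])

    def make_training_pairs(y_plane, block=24, stride=14, noise_std=0.3):
        y = y_plane.astype(np.float32) / 255.0
        h, w = y.shape
        # Blur by 0.5x bicubic down-sampling followed by 2x bicubic up-sampling (Yscale).
        y_small = cv2.resize(y, (w // 2, h // 2), interpolation=cv2.INTER_CUBIC)
        y_scale = cv2.resize(y_small, (w, h), interpolation=cv2.INTER_CUBIC)
        # Add Gaussian white noise so the network also learns to remove coding noise (Ynoise).
        y_noise = y_scale + np.random.normal(0.0, noise_std, y.shape).astype(np.float32)
        xi = extract_blocks(y, block, stride)        # clean target blocks {Xi}
        yi = extract_blocks(y_noise, block, stride)  # blurred, noisy input blocks {Yi}
        return yi, xi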
The step of building the deep learning network is as follows:
the first layer of the network is the input layer, i.e. the input image blocks Yi of size 24 × 24;
the second layer is a convolutional layer: the input is filtered with 32 filters of size 9 × 9 to obtain 32 first feature maps, the operation being F1(Y) = max(0, W1 × Y + B1), where Y is the 24 × 24 image block {Yi} data, W1 is a 9 × 9 filter coefficient, B1 is a bias term, and max takes the larger of its two arguments, so the result is always ≥ 0. (In the broad embodiment no range is imposed on W1 and B1; in practice the learned values typically fall within (−2.5, 2.5). Likewise, the filter counts of 32 and 16 in the network's convolutional layers may be chosen as needed; the present invention takes 32 and 16 for efficiency, but 64 and 32, or 128 and 64, may also be used. The sizes of the filters W1, W2 and W3 are not limited either; W1 may be 11 × 11 or even larger, and the same holds for W2 and W3. The principle for choosing these parameter values is to strike a balance between efficiency and quality. The same applies hereinafter.);
the third layer is a non-linear mapping layer: the 32 first feature maps are recombined into 16 second feature maps, the operation being F2(Y) = max(0, W2 × F1(Y) + B2), where Y is the 24 × 24 image block {Yi} data, W2 is a 1 × 1 filter coefficient, B2 is a bias term, and the result is again clipped at 0;
the fourth layer is a reconstruction layer: a 5 × 5 convolution is applied to the 16 second feature maps, which are merged and accumulated to form the final output image, the operation being F(Y) = W3 × F2(Y) + B3, where W3 is a 5 × 5 filter coefficient and B3 is a bias term.
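As a non-authoritative sketch of the four-layer network just described, the following definition uses PyTorch, which the patent does not name; the filter counts and sizes follow the text (32 filters of 9 × 9, 16 filters of 1 × 1, one 5 × 5 reconstruction filter), and "same" padding is an assumption made so that the input and output blocks keep the same size.

    import torch
    import torch.nn as nn

    class SrcnnLikeNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(1, 32, kernel_size=9, padding=4)   # F1(Y) = max(0, W1*Y + B1)
            self.conv2 = nn.Conv2d(32, 16, kernel_size=1, padding=0)  # F2(Y) = max(0, W2*F1(Y) + B2)
            self.conv3 = nn.Conv2d(16, 1, kernel_size=5, padding=2)   # F(Y)  = W3*F2(Y) + B3
            self.relu = nn.ReLU(inplace=True)

        def forward(self, y):
            f1 = self.relu(self.conv1(y))   # 32 first feature maps
            f2 = self.relu(self.conv2(f1))  # 16 second feature maps
            return self.conv3(f2)           # reconstructed output image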
The step of training the deep learning network is as follows:
learning the end-to-end mapping function F requires the network parameters θ = {W1, W2, W3, B1, B2, B3}. This is achieved by minimizing the cost between the reconstructed image F(Y; θ) and the reference image. The cost function can be defined as L(θ) = (1/n) Σi ||F(Yi; θ) − Xi||², where n is the total number of training samples, Xi is a 24 × 24 image block {Xi}, Yi is a 24 × 24 image block {Yi}, and F(Yi; θ) is the output image. The network parameters θ can be obtained from this cost function by the common technique of stochastic gradient descent.
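A minimal training sketch consistent with this cost function, again assuming PyTorch and reusing the hypothetical SrcnnLikeNet and make_training_pairs helpers sketched above; the learning rate, batch size and epoch count are illustrative choices, not values taken from the patent.

    import torch
    import torch.nn as nn

    net = SrcnnLikeNet()
    optimizer = torch.optim.SGD(net.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()  # mean squared error, proportional to (1/n) * sum ||F(Yi; theta) - Xi||^2

    yi, xi = make_training_pairs(y_plane)   # y_plane: a Y component loaded elsewhere
    yi = torch.from_numpy(yi).unsqueeze(1)  # shape (n, 1, 24, 24)
    xi = torch.from_numpy(xi).unsqueeze(1)

    for epoch in range(100):
        for start in range(0, yi.shape[0], 64):  # mini-batch stochastic gradient descent
            inputs, targets = yi[start:start + 64], xi[start:start + 64]
            optimizer.zero_grad()
            loss = loss_fn(net(inputs), targets)
            loss.backward()
            optimizer.step()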
The steps of the on-line processing method are as follows:
convert the image data format to YUV planar format, enlarge the Y, U and V components to the target size with a bicubic interpolation algorithm, and obtain Ybicubic, Ubicubic and Vbicubic;
feed Ybicubic into the deep learning network to obtain the output Ysrcnn;
further sharpen Ysrcnn with an image enhancement algorithm;
perform edge detection on Ysrcnn with a Sobel edge detector;
strengthen the edges Esrcnn whose edge strength exceeds a set threshold; the edge enhancement method is as follows:
apply Gaussian blur to Esrcnn to obtain the blurred image Egaussian; the enhanced image is Eenhance = Esrcnn + threshold × (Esrcnn − Egaussian), where threshold is an enhancement regulation parameter.
It should be specially noted that the edge-strength threshold is an empirical value: for pixel values in the range 0-255 and the edge-strength formula Gx × Gx + Gy × Gy, a threshold of 1600 may be used, where Gx and Gy are the x- and y-direction gradient values computed by the Sobel edge detection algorithm.
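For illustration, a sketch of the Sobel edge-strength test described above, assuming OpenCV and NumPy; 1600 is the empirical threshold mentioned in the preceding paragraph for pixel values in the range 0-255, and the function name is hypothetical.

    import cv2
    import numpy as np

    def edge_mask(y, threshold=1600):
        # Edge strength Gx*Gx + Gy*Gy from the Sobel x- and y-direction gradients.
        gx = cv2.Sobel(y.astype(np.float32), cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(y.astype(np.float32), cv2.CV_32F, 0, 1)
        strength = gx * gx + gy * gy
        return strength > threshold  # True where the pixel counts as an edge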
For a better understanding of the present technology, it is explained below with an embodiment.
A video transcoding system takes a 2K video source as input. The video decoding module decodes the video into YUV420 data format; the image magnification method of the present invention then enlarges the YUV420 data of the video to 4K size; finally, the video encoder encodes the video into a 4K video output.
This embodiment describes the video magnification technology in detail. The video magnification technology includes an off-line learning module 1 and an on-line processing module 2; Fig. 1 is a schematic workflow diagram of the high-quality image enhancement system of this embodiment, and the flow of Fig. 1 comprises the off-line learning module 1 and the on-line processing module 2. The on-line processing module 2 is the video image magnification module actually used in the video transcoding system, and the network parameters used by the on-line processing module 2 must be learned by the off-line learning module 1 through the off-line learning method. The off-line learning module 1 and the on-line processing module 2 use the same network model; the difference is that the off-line learning module 1 learns all of the network parameters from calibrated input and output sources, whereas the on-line processing module 2 uses those network parameters to compute the output.
Before the video is magnified, the off-line learning module 1 learns the network parameters through the off-line learning method, which comprises the following steps:
Step 1: collect 1000 natural images, decode the images into Y, U, V data format and extract the Y-component data. Then scan the Y-component data with a stride of 14 and extract all image blocks {Xi} of size 24 × 24. Down-sample the Y data by a factor of 0.5 with a bicubic interpolation algorithm, then up-sample it by a factor of 2 with the bicubic interpolation algorithm to obtain Yscale. Add Gaussian white noise with a standard deviation of 0.3 to Yscale to obtain Ynoise, and scan the Ynoise data according to the same rule to extract all image blocks {Yi} of size 24 × 24. These operations establish the correspondence between blurred, noisy image blocks and standard-definition image blocks, which serve as the input and output of the network.
Step 2: use {Yi} as the first layer of the network. Apply the formula F1(Y) = max(0, W1 × Y + B1) to obtain the second-layer feature maps of the network, performing the operation 32 times in total to obtain 32 first feature maps, where W1 is a 9 × 9 filter parameter and B1 is an offset parameter (hereinafter a bias term); W1 therefore has 9 × 9 × 32 parameters in total and B1 has 32. The third layer is a non-linear mapping layer: recombine the 32 first feature maps into 16 second feature maps with the operation F2(Y) = max(0, W2 × F1(Y) + B2), where W2 is a 1 × 1 filter parameter; W2 therefore has 32 × 16 parameters in total and B2 has 16. The fourth layer is a reconstruction layer: apply a 5 × 5 filtering operation to the 16 second feature maps and merge and accumulate them to form the final output image, the operation being F(Y) = W3 × F2(Y) + B3, where W3 is a 5 × 5 filter parameter; W3 therefore has 16 × 5 × 5 parameters in total and B3 has 1.
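Tallying the figures above, the network has 9 × 9 × 32 + 32 + 32 × 16 + 16 + 16 × 5 × 5 + 1 = 2592 + 32 + 512 + 16 + 400 + 1 = 3553 learnable parameters in total (a simple count of the numbers listed; the patent itself does not state this total).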
Step 3: train the deep learning network. Learning the end-to-end mapping function F requires estimating the parameters θ = {W1, W2, W3, B1, B2, B3}. This is achieved by minimizing the cost between the reconstructed image F(Y; θ) and the reference image. The cost function can be defined as L(θ) = (1/n) Σi ||F(Yi; θ) − Xi||², where n is the total number of training samples, Xi is a 24 × 24 image block {Xi}, Yi is a 24 × 24 image block {Yi}, and F(Yi; θ) is the output image. The estimated parameters θ can be obtained from this cost function by stochastic gradient descent.
Step 4: first determine whether the input image data is YUV plane data; if not, convert the format. Then enlarge the Y component, U component and V component by a factor of 2 with the bicubic interpolation algorithm to obtain Ybicubic, Ubicubic and Vbicubic, feed Ybicubic into the deep learning network to obtain the output image Ysrcnn, and finally apply the image enhancement module to Ysrcnn.
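For illustration, a sketch of this on-line step, reusing the hypothetical SrcnnLikeNet defined earlier and assuming OpenCV for the bicubic interpolation; the YUV format conversion itself is omitted.

    import cv2
    import numpy as np
    import torch

    def upscale_2x(plane):
        h, w = plane.shape
        return cv2.resize(plane, (2 * w, 2 * h), interpolation=cv2.INTER_CUBIC)

    def online_upscale(y, u, v, net):
        y_bicubic, u_bicubic, v_bicubic = upscale_2x(y), upscale_2x(u), upscale_2x(v)
        inp = torch.from_numpy(y_bicubic.astype(np.float32) / 255.0)[None, None]  # shape (1, 1, H, W)
        with torch.no_grad():
            y_srcnn = net(inp)[0, 0].numpy() * 255.0  # Ysrcnn, still to be enhanced in step 5
        return y_srcnn, u_bicubic, v_bicubic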
Step 5: first perform Sobel edge detection on Ysrcnn. Pixels whose edge strength exceeds the set threshold are enhanced, while pixels below the threshold are output directly. The enhancement method is as follows:
assume the current pixel value is P and the value obtained by blurring with a Gaussian kernel is P1; the enhanced pixel value is then P + thre × (P − P1)/100, where thre is an integer between 0 and 100 that controls the enhancement strength.
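For illustration, a sketch of the per-pixel enhancement of this step, reusing the hypothetical edge_mask helper sketched earlier; the Gaussian kernel size and sigma are illustrative because the patent does not specify them, and thre = 50 is just one value in the allowed 0-100 range.

    import cv2
    import numpy as np

    def enhance_edges(y_srcnn, thre=50, threshold=1600):
        y = y_srcnn.astype(np.float32)
        p1 = cv2.GaussianBlur(y, (5, 5), 1.5)   # blurred value P1 for each pixel P
        enhanced = y + thre * (y - p1) / 100.0  # P + thre * (P - P1) / 100
        out = np.where(edge_mask(y, threshold), enhanced, y)  # enhance only strong-edge pixels
        return np.clip(out, 0, 255).astype(np.uint8)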
According to the high-quality image magnification method and system provided by the present invention, deep learning based on convolutional neural networks is used to learn, from a large number of samples, a non-linear mapping from blurred edges to sharp edges, so that images passed through the network have clear, sharp edges and no artifacts such as jagging; noise is added to the training samples so that the network acquires the ability to remove coding noise; and finally an image enhancement algorithm further sharpens image edges without affecting flat regions, so that edge sharpness is increased without amplifying the noise of flat areas, giving the user a better subjective impression.
In the description of this specification, references to the terms "one embodiment", "some embodiments", "an example", "a specific example" or "some examples" mean that a specific feature, structure or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example, and the specific features, structures or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Although embodiments of the invention have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the invention; a person of ordinary skill in the art may change, modify, replace and vary the above embodiments within the scope of the invention without departing from the principle and purpose of the invention.

Claims (10)

1. A high-quality image magnification method, characterized in that it comprises an off-line learning method and an on-line processing method, the off-line learning method comprising the following steps:
collecting images and extracting image blocks from said images;
building a deep learning network from said image blocks and obtaining a reconstructed image;
establishing a cost function from said reconstructed image and solving for the required network parameters;
the on-line processing method comprising the following steps:
obtaining the YUV plane data of said image and computing Ybicubic;
feeding said Ybicubic into said deep learning network and outputting Ysrcnn;
applying enhancement processing to Ysrcnn.
2. The high-quality image magnification method according to claim 1, characterized in that the step of collecting images and extracting image blocks from said images specifically comprises the following steps:
collecting said images and converting them to YUV plane data;
extracting all image blocks {Xi} of the Y plane;
processing the Y plane to obtain Ynoise and extracting all image blocks {Yi}.
3. The high-quality image magnification method according to claim 2, characterized in that, in the step of building the deep learning network from said image blocks and obtaining the reconstructed image, the deep learning network is specifically built from said image blocks {Yi} and the reconstructed image is obtained.
4. The high-quality image magnification method according to claim 3, characterized in that the step of building the deep learning network specifically comprises the following steps:
building an input layer from the input image blocks {Yi};
filtering said image blocks {Yi} to obtain first feature maps and build a convolutional layer, the operation being F1(Y) = max(0, W1 × Y + B1), wherein said Y is the image block {Yi} data, said W1 is a filter coefficient and said B1 is a bias term;
recombining said first feature maps into second feature maps to build a non-linear mapping layer, the operation being F2(Y) = max(0, W2 × F1(Y) + B2), wherein said Y is the image block {Yi} data, said W2 is a filter coefficient and said B2 is a bias term;
turning said second feature maps into the output image, the operation being F(Y) = W3 × F2(Y) + B3, wherein said W3 is a filter coefficient and said B3 is a bias term.
5. The high-quality image magnification method according to claim 4, characterized in that, in the step of establishing the cost function from said reconstructed image and solving for the required network parameters, the cost function is L(θ) = (1/n) Σi ||F(Yi; θ) − Xi||², wherein said n is the total number of training samples, said Xi is an image block {Xi}, said Yi is an image block {Yi} and said F(Yi; θ) is said output image, and the parameter values θ are computed from the cost function by stochastic gradient descent.
6. The high-quality image magnification method according to claim 1, characterized in that, in the step of obtaining the YUV plane data of said image and computing Ybicubic, it is first determined whether the data of said image is said YUV plane data, and when the data of said image is not said YUV plane data, the data of said image is converted to said YUV plane data.
7. The high-quality image magnification method according to claim 1, characterized in that, in the step of obtaining the YUV plane data of said image and computing Ybicubic, the Y component is enlarged by a factor of 2 with a bicubic interpolation algorithm to obtain said Ybicubic.
8. The high-quality image magnification method according to claim 1, characterized in that, in the step of feeding said Ybicubic into said deep learning network and outputting Ysrcnn, Sobel edge detection is first performed on said Ybicubic; said Ysrcnn is output directly when the edge strength is below a set threshold, and when said edge strength exceeds said set threshold the pixel value of said Ysrcnn is computed as P + thre × (P − P1)/100, wherein said P is the current pixel value and said thre is an integer between 0 and 100.
9. The high-quality image magnification method according to claim 1, characterized in that said off-line learning method and said on-line processing method use the same network model.
10. A high-quality image enhancement system, characterized in that it comprises an off-line learning module and an on-line processing module, said off-line learning module being configured to obtain network parameters according to the off-line learning method of any one of claims 1-9, and said on-line processing module being configured to obtain image output data using said network parameters according to the on-line processing method of any one of claims 1-9.
CN201611010215.7A 2016-11-15 2016-11-15 A kind of high-quality image magnification method Pending CN106485661A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611010215.7A CN106485661A (en) 2016-11-15 2016-11-15 A kind of high-quality image magnification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611010215.7A CN106485661A (en) 2016-11-15 2016-11-15 A kind of high-quality image magnification method

Publications (1)

Publication Number Publication Date
CN106485661A true CN106485661A (en) 2017-03-08

Family

ID=58272273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611010215.7A Pending CN106485661A (en) 2016-11-15 2016-11-15 A kind of high-quality image magnification method

Country Status (1)

Country Link
CN (1) CN106485661A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107197260A (en) * 2017-06-12 2017-09-22 清华大学深圳研究生院 Video coding post-filter method based on convolutional neural networks
CN107392857A (en) * 2017-04-14 2017-11-24 杭州当虹科技有限公司 A kind of image enchancing method based on deep learning
CN109859110A (en) * 2018-11-19 2019-06-07 华南理工大学 The panchromatic sharpening method of high spectrum image of control convolutional neural networks is tieed up based on spectrum
CN109996085A (en) * 2019-04-30 2019-07-09 北京金山云网络技术有限公司 Model training method, image processing method, device and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931179A (en) * 2016-04-08 2016-09-07 武汉大学 Joint sparse representation and deep learning-based image super resolution method and system
CN105976318A (en) * 2016-04-28 2016-09-28 北京工业大学 Image super-resolution reconstruction method
CN106067161A (en) * 2016-05-24 2016-11-02 深圳市未来媒体技术研究院 A kind of method that image is carried out super-resolution

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931179A (en) * 2016-04-08 2016-09-07 武汉大学 Joint sparse representation and deep learning-based image super resolution method and system
CN105976318A (en) * 2016-04-28 2016-09-28 北京工业大学 Image super-resolution reconstruction method
CN106067161A (en) * 2016-05-24 2016-11-02 深圳市未来媒体技术研究院 A kind of method that image is carried out super-resolution

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Wu Wei: "Learning-Based Image Enhancement Techniques", 28 February 2013, Xi'an: Xidian University Press *
Li Linghui et al.: "Video super-resolution algorithm based on spatio-temporal features and neural networks", Journal of Beijing University of Posts and Telecommunications *
Qiu Jianhua et al.: "Biometric Recognition: The Revolution in Identity Authentication", 31 January 2016, Beijing: Tsinghua University Press *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392857A (en) * 2017-04-14 2017-11-24 杭州当虹科技有限公司 A kind of image enchancing method based on deep learning
CN107197260A (en) * 2017-06-12 2017-09-22 清华大学深圳研究生院 Video coding post-filter method based on convolutional neural networks
CN107197260B (en) * 2017-06-12 2019-09-13 清华大学深圳研究生院 Video coding post-filter method based on convolutional neural networks
CN109859110A (en) * 2018-11-19 2019-06-07 华南理工大学 The panchromatic sharpening method of high spectrum image of control convolutional neural networks is tieed up based on spectrum
CN109859110B (en) * 2018-11-19 2023-01-06 华南理工大学 Hyperspectral image panchromatic sharpening method based on spectrum dimension control convolutional neural network
CN109996085A (en) * 2019-04-30 2019-07-09 北京金山云网络技术有限公司 Model training method, image processing method, device and electronic equipment
CN109996085B (en) * 2019-04-30 2021-05-14 北京金山云网络技术有限公司 Model training method, image processing method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN104067311B (en) Digital makeup
CN106485661A (en) A kind of high-quality image magnification method
CN101981911B (en) Image processing method and device
US8615042B2 (en) Pre-processing method and system for data reduction of video sequences and bit rate reduction of compressed video sequences using spatial filtering
CN101877123A (en) Image enhancement method and device
CN106709875A (en) Compressed low-resolution image restoration method based on combined deep network
JP2017091529A (en) Method for upscaling noisy images, and apparatus for upscaling noisy images
CN107481278B (en) Image bit depth expansion method and device based on combination frame
CN103440630A (en) Large-dynamic-range infrared image display and detail enhancement method based on guiding filter
CN105096280A (en) Method and device for processing image noise
CN111541894B (en) Loop filtering method based on edge enhancement residual error network
CN104091310A (en) Image defogging method and device
Habib et al. Adaptive fuzzy inference system based directional median filter for impulse noise removal
CN104537678B (en) A kind of method that cloud and mist is removed in the remote sensing images from single width
CN104574293A (en) Multiscale Retinex image sharpening algorithm based on bounded operation
CN106664368A (en) Image processing apparatus, image processing method, recording medium, and program
CN111429357B (en) Training data determining method, video processing method, device, equipment and medium
CN102262778A (en) Method for enhancing image based on improved fractional order differential mask
Pandey et al. Enhancing the quality of satellite images by preprocessing and contrast enhancement
CN103702116B (en) A kind of dynamic range compression method and apparatus of image
CN104657941B (en) A kind of image border self-adapting enhancement method and device
CN107862666A (en) Mixing Enhancement Methods about Satellite Images based on NSST domains
CN103500436A (en) Image super-resolution processing method and system
CN107833182A (en) The infrared image super resolution ratio reconstruction method of feature based extraction
CN103607589A (en) Level selection visual attention mechanism-based image JND threshold calculating method in pixel domain

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310000 E, 16 floor, A block, Paradise software garden, 3 West Gate Road, Xihu District, Hangzhou, Zhejiang.

Applicant after: Hangzhou Danghong Technology Co., Ltd.

Address before: 310000 E, 16 floor, A block, Paradise software garden, 3 West Gate Road, Xihu District, Hangzhou, Zhejiang.

Applicant before: HANGZHOU DANGHONG TECHNOLOGY CO., LTD.

RJ01 Rejection of invention patent application after publication

Application publication date: 20170308