CN113902651A - Video image quality enhancement system based on deep learning - Google Patents

Video image quality enhancement system based on deep learning

Info

Publication number
CN113902651A
Authority
CN
China
Prior art keywords
frame
module
processing model
enhancement
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111495910.8A
Other languages
Chinese (zh)
Other versions
CN113902651B (en)
Inventor
张卫平
岑全
丁园
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Global Digital Group Co Ltd
Original Assignee
Global Digital Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Global Digital Group Co Ltd filed Critical Global Digital Group Co Ltd
Priority to CN202111495910.8A
Publication of CN113902651A
Application granted
Publication of CN113902651B
Legal status: Active


Classifications

    • G06T5/90
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a video image quality enhancement system based on deep learning, comprising a frame extraction module, a frame enhancement module, an inter-frame enhancement module, a learning processing module, a feedback module and a video restoration module. The frame extraction module processes a video into a plurality of frame pictures; the frame enhancement module performs image quality enhancement on each individual frame picture; the inter-frame enhancement module performs image quality enhancement on a frame picture according to the relation between two adjacent frame pictures; the learning processing module provides the processing model used for image quality enhancement; the feedback module calculates the overall harmony of the processed frame pictures and feeds it back to the learning processing module, which improves the processing model according to the feedback result; and the video restoration module recombines the processed frame pictures into a video. Because the system enhances image quality from two angles, intra-frame processing and inter-frame processing, the enhanced content is not distorted and a better image quality enhancement effect is obtained.

Description

Video image quality enhancement system based on deep learning
Technical Field
The invention relates to the technical field of image processing, in particular to a video image quality enhancement system based on deep learning.
Background
With the popularization of mobile terminals such as mobile phones, people increasingly watch online video on such devices. When people watch videos, video quality is mainly affected by the following factors: 1. the shooting quality and post-production of the video, including the shooting equipment, the shooting environment, lossy post-editing and the like; 2. transcoding by the network video service provider, which offers the original video at different code rates for users to choose from; the video undergoes lossy compression during transcoding, so the quality of the transcoded video is lower than that of the original.
Many image quality enhancement systems have been developed. A search of the prior art shows how existing systems work; for example, the system disclosed in publication CN109345490B decodes video stream data to obtain RGB image data; partitions the RGB image data into four types of regions: edge points inside protected regions, non-edge points inside protected regions, edge points outside protected regions, and non-edge points outside protected regions, labeled P1, P2, P3 and P4 respectively; performs detail enhancement on P1, P2, P3 and P4 at different scales to obtain an enhanced image; adjusts the contrast of the enhanced image; and finally adjusts the brightness of the contrast-adjusted image. However, that system processes each frame picture in isolation, so the processed video suffers from incoherence and distortion, and the image quality enhancement effect still needs improvement.
Disclosure of Invention
To address these defects, the invention aims to provide a video image quality enhancement system based on deep learning.
the invention adopts the following technical scheme:
a video image quality enhancement system based on deep learning comprises a frame extraction module, a frame enhancement module, an inter-frame enhancement module, a learning processing module, a feedback module and a video restoration module, wherein the frame extraction module processes a video into a plurality of frame images, the frame enhancement module performs image quality enhancement on an individual frame image, the inter-frame enhancement module performs image quality enhancement on the frame image according to the relation between two adjacent frame images, the learning processing module provides a processing model for performing image quality enhancement, the feedback module is used for calculating the overall harmony Q of the processed frame images and feeding back the overall harmony Q to the learning processing model, the learning processing model improves the processing model according to a feedback result, and the video restoration module is used for reconstructing the processed frame images into a video form;
the processing model comprises a frame processing model and an inter-frame processing model; the frame enhancement module executes the frame processing model, and the inter-frame enhancement module executes the inter-frame processing model;
the frame processing model converts the resolution into X0*Y0To be processed frame picture is enlarged to a resolution of X1*Y1Dividing pixel points in the initial frame picture into a plurality of point sets according to whether adjacent pixel point information is the same or not, determining whether to change the pixel point information or not by edge points in the point sets according to the calculated fusion degree Z, wherein the calculation formula of the fusion degree Z is as follows:
[Formula image: the fusion degree Z as a function of n1, n2, n3 and n4; not reproduced in the text]
wherein n1, n2, n3 and n4 respectively denote, among the pixel points in the neighborhood of the edge point, the numbers of non-edge points belonging to the same point set, edge points belonging to the same point set, non-edge points belonging to different point sets, and edge points belonging to different point sets;
when the fusion degree Z is larger than 0, the edge point is kept unchanged, and when the fusion degree Z is smaller than 0, the pixel point information of the edge point is converted into the pixel point information in the adjacent point set;
the interframe processing model obtains gray information of pixels at the same position in adjacent frame pictures by using a pixel point window to obtain two matrixes P1 and P2, and performs the following operation on the two matrixes to obtain a matrix volume difference C:
[Formula image: the matrix volume difference C computed from P1 and P2; not reproduced in the text]
wherein a_ij is an element of the matrix P1, b_ij is an element of the matrix P2, and m and n are respectively the length and width of the pixel point window;
the pending gray-scale variation Δ of the pixel points corresponding to the peripheral elements of the matrix P1 is calculated from the matrix volume difference C; after the pixel point window has traversed the adjacent frame pictures, the pending variations Δ of each pixel point are summed to obtain its correction Δ′, and the inter-frame processing model performs gray-scale change processing on all pixel points according to the corrections Δ′;
further, the frame processing model copies the pixel point information at (a, b) in the frame picture to be processed to the pixel point at (c, d) in the initial frame picture, where a, b, c and d satisfy the following condition:
[Formula image: the condition relating a, b, c and d; not reproduced in the text]
further, the neighborhood of an edge point refers to the region formed by the pixel points whose coordinate distance from the edge point does not exceed 4;
further, the formula for calculating the pending gray-scale variation Δ is as follows:
[Formula image: the pending gray-scale variation Δ; not reproduced in the text]
further, the calculation formula of the overall harmony Q is as follows:
[Formula image: the overall harmony Q computed from Nz and N; not reproduced in the text]
wherein Nz is the number of gray-value anomalies of the pixel points across all frame pictures, and N is the number of frame pictures.
The beneficial effects obtained by the invention are as follows:
in the process of enhancing the image quality, the invention not only processes the single frame image, but also carries out smoothing processing according to the relation of the adjacent frame images, so that the video has no phenomena of distortion and discontinuity after the image quality is enhanced; the invention is also provided with a feedback system for evaluating the frame picture after the picture quality enhancement, and the processing model for enhancing the frame picture carries out model correction according to the evaluation result so as to improve the picture quality enhancement effect.
Drawings
The invention will be further understood from the following description in conjunction with the accompanying drawings. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the embodiments. Like reference numerals designate corresponding parts throughout the different views.
FIG. 1 is a schematic view of an overall structural framework;
FIG. 2 is a schematic diagram of edge point locations;
FIG. 3 is a schematic diagram of a pixel point in a neighboring area;
FIG. 4 is a schematic diagram of the positions of pixels with gray-scale to-be-changed values;
FIG. 5 is a schematic diagram of M-type and W-type segments of pixel gray levels.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to embodiments thereof; it should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. Other systems, methods, and/or features of the present embodiments will become apparent to those skilled in the art upon review of the following detailed description. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the invention, and be protected by the accompanying claims. Additional features of the disclosed embodiments are described in, and will be apparent from, the detailed description that follows.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not indicated or implied that the device or component referred to must have a specific orientation, be constructed and operated in a specific orientation, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes and are not to be construed as limitations of the present patent, and specific meanings of the terms may be understood by those skilled in the art according to specific situations.
Embodiment one.
This embodiment provides a video image quality enhancement system based on deep learning. With reference to FIG. 1, the system comprises a frame extraction module, a frame enhancement module, an inter-frame enhancement module, a learning processing module, a feedback module and a video restoration module, wherein the frame extraction module processes a video into a plurality of frame pictures, the frame enhancement module performs image quality enhancement on an individual frame picture, the inter-frame enhancement module performs image quality enhancement on a frame picture according to the relation between two adjacent frame pictures, the learning processing module provides the processing model for performing image quality enhancement, the feedback module is used for calculating the overall harmony Q of the processed frame pictures and feeding it back to the learning processing module, the learning processing module improves the processing model according to the feedback result, and the video restoration module is used for recombining the processed frame pictures into a video;
the processing model comprises a frame processing model and an inter-frame processing model; the frame enhancement module executes the frame processing model, and the inter-frame enhancement module executes the inter-frame processing model;
the frame processing model converts the resolution into X0*Y0To be processed frame picture is enlarged to a resolution of X1*Y1Dividing pixel points in the initial frame picture into a plurality of point sets according to whether adjacent pixel point information is the same or not, determining whether to change the pixel point information or not by edge points in the point sets according to the calculated fusion degree Z, wherein the calculation formula of the fusion degree Z is as follows:
[Formula image: the fusion degree Z as a function of n1, n2, n3 and n4; not reproduced in the text]
wherein n1, n2, n3 and n4 respectively denote, among the pixel points in the neighborhood of the edge point, the numbers of non-edge points belonging to the same point set, edge points belonging to the same point set, non-edge points belonging to different point sets, and edge points belonging to different point sets;
when the fusion degree Z is larger than 0, the edge point is kept unchanged, and when the fusion degree Z is smaller than 0, the pixel point information of the edge point is converted into the pixel point information in the adjacent point set;
the interframe processing model obtains gray information of pixels at the same position in adjacent frame pictures by using a pixel point window to obtain two matrixes P1 and P2, and performs the following operation on the two matrixes to obtain a matrix volume difference C:
[Formula image: the matrix volume difference C computed from P1 and P2; not reproduced in the text]
wherein a_ij is an element of the matrix P1, b_ij is an element of the matrix P2, and m and n are respectively the length and width of the pixel point window;
the pending gray-scale variation Δ of the pixel points corresponding to the peripheral elements of the matrix P1 is calculated from the matrix volume difference C; after the pixel point window has traversed the adjacent frame pictures, the pending variations Δ of each pixel point are summed to obtain its correction Δ′, and the inter-frame processing model performs gray-scale change processing on all pixel points according to the corrections Δ′;
the frame processing model copies the pixel point information (a, b) in the frame picture to be processed into the pixel point information (c, d) in the initial frame picture, and the a, b, c and d meet the following conditions:
Figure 981202DEST_PATH_IMAGE008
the adjacent area refers to an area formed by pixel points with the coordinate distance not more than 4;
the formula for calculating the gray-level to-be-changed quantity delta is as follows:
Figure 73923DEST_PATH_IMAGE009
the calculation formula of the overall harmony Q is as follows:
[Formula image: the overall harmony Q computed from Nz and N; not reproduced in the text]
wherein Nz is the number of gray-value anomalies of the pixel points across all frame pictures, and N is the number of frame pictures.
Embodiment two.
This embodiment includes all the contents of embodiment one and provides a video image quality enhancement system based on deep learning, comprising a frame extraction module, a frame enhancement module, an inter-frame enhancement module, a learning processing module, a feedback module and a video restoration module, wherein the frame extraction module splits the video to be processed into frame pictures, the frame enhancement module enhances each frame picture, the inter-frame enhancement module performs smoothing according to the continuity between two adjacent frame pictures, the learning processing module provides the processing models for the frame enhancement module and the inter-frame enhancement module, the feedback module calculates the overall harmony of the processed video and feeds the result back to the learning processing module, the learning processing module improves the processing models according to that result, and when the overall harmony is smaller than a threshold value the video restoration module recombines the processed frame pictures into a video;
the frame pictures acquired by the frame extraction module are arranged in sequence and are sequentially sent to the frame enhancement module;
the frame enhancement module identifies the resolution of a frame picture when receiving a first frame picture, and the identified resolution is recorded as X0*Y0
The processing model in the learning processing module comprises a frame processing model and an inter-frame processing model, when the frame enhancement module needs to process a frame picture, an application is submitted to the learning processing module, the learning processing module sends a copy of the frame processing model to the frame enhancement module, the frame enhancement module executes the frame processing model to process the frame picture, when the inter-frame enhancement module needs to process the frame picture, the application is submitted to the learning processing module, the learning processing module sends a copy of the inter-frame processing model to the inter-frame enhancement module, and the inter-frame enhancement module executes the inter-frame processing model to process the frame picture;
With reference to FIG. 2 and FIG. 3, the frame processing model processes a frame picture to be processed through the following steps (a code sketch of these steps follows step S7):
S1, the frame processing model applies for an initial frame picture whose resolution is X1×Y1;
S2, the frame processing model copies the pixel point information at (a, b) in the frame picture to be processed to the pixel point at (c, d) in the initial frame picture, where a, b, c and d satisfy the following condition:
[Formula image: the condition relating a, b, c and d; not reproduced in the text]
s3, dividing a plurality of pixel points obtained by the same pixel point information in the frame picture to be processed in the original picture into a set by the frame processing model to obtain a plurality of point sets;
s4, the frame processing model performs normalization processing on each point set:
when the pixel point information of two adjacent point sets is the same, the two point sets are merged into one;
s5, the frame processing model carries out edge marking on each point set:
setting pixel points adjacent to other point sets in the point set as edge points;
s6, the frame processing model calculates the property of the pixel point in the adjacent area of each edge point:
the properties of the pixel points comprise non-edge points in the same point set, non-edge points in different point sets and edge points in different point sets, and the number of the non-edge points and the edge points in the different point sets is respectively n1、n2、n3And n4Expressing that the pixel points in the adjacent area of the edge points refer to the pixel points with the coordinate distance from the edge points not more than 4;
s7, the frame processing model changes pixel point information of the edge points:
Figure 723188DEST_PATH_IMAGE012
when Z is larger than 0, the edge point is kept unchanged, and when Z is smaller than 0, the pixel point information of the edge point is converted into the pixel point information in the adjacent point set;
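The steps above read as an upscale, label, count and decide loop. What follows is a minimal Python sketch of S1 to S7 under stated assumptions: the (a, b) to (c, d) condition of S2 and the fusion-degree formula of S7 exist only as images in the original, so the sketch assumes plain pixel replication for the enlargement and Z = (n1 + n2) - (n3 + n4) for the fusion degree (same-set neighbours vote to keep an edge point, different-set neighbours vote to absorb it); the function names and the use of SciPy's connected-component labelling are likewise illustrative.

```python
# Illustrative sketch of steps S1-S7 on a single-channel frame.
# Assumptions not taken from the patent: replication upscaling in S2,
# SciPy connected-component labelling for the point sets, and a fusion
# degree of Z = (n1 + n2) - (n3 + n4).
import numpy as np
from scipy import ndimage

RADIUS = 4  # "coordinate distance not more than 4"


def upscale_copy(frame, x1, y1):
    """S1/S2: fill an X1*Y1 initial frame picture by copying source pixels."""
    y0, x0 = frame.shape[:2]
    out = np.empty((y1, x1), dtype=frame.dtype)
    for d in range(y1):
        for c in range(x1):
            out[d, c] = frame[d * y0 // y1, c * x0 // x1]  # (a, b) -> (c, d)
    return out


def label_point_sets(img):
    """S3/S4: connected regions of identical pixel information."""
    labels = np.zeros(img.shape, dtype=np.int64)
    offset = 0
    for v in np.unique(img):
        lab, n = ndimage.label(img == v)
        labels[lab > 0] = lab[lab > 0] + offset
        offset += n
    return labels


def mark_edges(labels):
    """S5: points bordering a different point set are edge points."""
    edge = np.zeros(labels.shape, dtype=bool)
    edge[1:, :] |= labels[1:, :] != labels[:-1, :]
    edge[:-1, :] |= labels[:-1, :] != labels[1:, :]
    edge[:, 1:] |= labels[:, 1:] != labels[:, :-1]
    edge[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    return edge


def fuse_edges(img, labels, edge):
    """S6/S7: count neighbourhood properties, absorb edge points with Z < 0."""
    out = img.copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(edge)):
        n1 = n2 = n3 = n4 = 0
        other = []  # values of neighbours belonging to different point sets
        for dy in range(-RADIUS, RADIUS + 1):
            for dx in range(-RADIUS, RADIUS + 1):
                yy, xx = y + dy, x + dx
                if (dy, dx) == (0, 0) or not (0 <= yy < h and 0 <= xx < w):
                    continue
                if labels[yy, xx] == labels[y, x]:
                    if edge[yy, xx]:
                        n2 += 1
                    else:
                        n1 += 1
                else:
                    if edge[yy, xx]:
                        n4 += 1
                    else:
                        n3 += 1
                    other.append(int(img[yy, xx]))
        if (n1 + n2) - (n3 + n4) < 0 and other:  # assumed form of Z
            out[y, x] = np.bincount(np.asarray(other)).argmax()  # absorb point
    return out
```

A grayscale frame is assumed for brevity; the patent's "pixel point information" would cover full colour values as well.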
the frame enhancement module sends the processed frame pictures to the inter-frame enhancement module in sequence;
the process of processing two continuous frame pictures to be processed by the interframe processing model comprises the following steps:
s21, the interframe processing model takes a previous frame picture as a processing frame and takes a later frame picture as a reference frame;
s22, setting a pixel window by the interframe processing model, and respectively acquiring gray information of pixel points at the same position of the processing frame and the reference frame by using the pixel window to obtain two matrixes P1 and P2:
$$P_1=\begin{pmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\\vdots&\vdots&\ddots&\vdots\\a_{m1}&a_{m2}&\cdots&a_{mn}\end{pmatrix},\qquad P_2=\begin{pmatrix}b_{11}&b_{12}&\cdots&b_{1n}\\b_{21}&b_{22}&\cdots&b_{2n}\\\vdots&\vdots&\ddots&\vdots\\b_{m1}&b_{m2}&\cdots&b_{mn}\end{pmatrix}$$
wherein element a_ij represents the gray value of the processing frame at pixel point (i, j) of the pixel window, element b_ij represents the gray value of the reference frame at pixel point (i, j) of the pixel window, and m and n are the length and width of the pixel window;
s23, the interframe processing model performs the following operation on the two matrixes to obtain a matrix volume difference C:
Figure 168710DEST_PATH_IMAGE015
s24, calculating the matrix periphery variation delta of the processing frame by the interframe processing model:
Figure 253341DEST_PATH_IMAGE016
referring to FIG. 4, the matrix periphery variation Δ is the pending gray-scale variation of the pixel points corresponding to the peripheral elements a_1x, a_mx, a_y1 and a_yn, where x ∈ [1, n] and y ∈ [1, m];
S25, the inter-frame processing model traverses all positions of the processing frame and the reference frame with the pixel window, obtaining the corresponding pending gray-scale variations, and accumulates the pending variations of each pixel point of the processing frame independently to obtain Δ′(i, j), where i ∈ [1, X1] and j ∈ [1, Y1];
S26, the gray value of the pixel point at coordinate (i, j) of the processing frame is corrected by the amount Δ′(i, j); after all pixel points of the processing frame have been processed, the processing frame is sent to the feedback module, the reference frame becomes the new processing frame, the frame picture following the reference frame becomes the new reference frame, and steps S22 to S26 are repeated;
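Because the formulas for C and Δ exist only as images, the following Python sketch preserves just the data flow of S21 to S26 under stated assumptions: C is taken to be the mean gray difference of the two windows and is spread evenly over the window's peripheral pixels as pending variations, which are accumulated per pixel into Δ′(i, j) and then applied as corrections.

```python
# Sketch of S21-S26 (assumptions not taken from the patent:
# C = mean(P1 - P2), Δ = C spread evenly over the window border;
# the exact formulas are in unreproduced images).
import numpy as np


def interframe_correct(proc, ref, m=8, n=8):
    """proc (processing frame) and ref (reference frame): float gray
    images of identical shape (H, W); m, n: assumed window size."""
    h, w = proc.shape
    delta = np.zeros_like(proc)            # accumulated Δ'(i, j)
    border = np.zeros((m, n), dtype=bool)  # peripheral elements of the window
    border[0, :] = border[-1, :] = True
    border[:, 0] = border[:, -1] = True
    for i in range(h - m + 1):             # S25: traverse all positions
        for j in range(w - n + 1):
            p1 = proc[i:i + m, j:j + n]
            p2 = ref[i:i + m, j:j + n]
            c = (p1 - p2).mean()                                 # S23: assumed C
            delta[i:i + m, j:j + n][border] += c / border.sum()  # S24: assumed Δ
    return proc + delta                    # S26: apply the corrections
```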
With reference to FIG. 5, after the feedback module has received all frame pictures, it extracts the gray values of the pixel point at each position across all frame pictures to obtain a sequence g_i, and identifies the W-type and M-type segments in the sequence;
the W-shaped segment satisfies the following inequality:
Figure 750181DEST_PATH_IMAGE017
the M-shaped segment satisfies the following inequality:
Figure 666185DEST_PATH_IMAGE018
the feedback module counts the number of the segments meeting the requirement in the pixel points of all the positions to obtain Nz, and calculates the overall harmony Q:
[Formula image: the overall harmony Q computed from Nz and N; not reproduced in the text]
wherein N is the number of all frame pictures;
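The W-type and M-type inequalities and the Q formula are likewise unreproduced images; the sketch below assumes a W-type (M-type) segment is five consecutive gray values zig-zagging down-up-down-up (up-down-up-down), and that Q = Nz / N, matching the stated roles of Nz and N.

```python
# Sketch of the harmony computation (assumptions not taken from the
# patent: five-point zig-zag W/M segments, Q = Nz / N).
import numpy as np


def count_wm_segments(g):
    """g: gray values of one pixel position across all frames."""
    d = np.sign(np.diff(np.asarray(g, dtype=float)))
    count = 0
    for k in range(len(d) - 3):
        pattern = tuple(d[k:k + 4])
        if pattern == (-1.0, 1.0, -1.0, 1.0):    # assumed W-type segment
            count += 1
        elif pattern == (1.0, -1.0, 1.0, -1.0):  # assumed M-type segment
            count += 1
    return count


def overall_harmony(frames):
    """frames: list of equally sized gray images; returns Q."""
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])  # (N, H, W)
    nz = sum(count_wm_segments(stack[:, y, x])
             for y in range(stack.shape[1])
             for x in range(stack.shape[2]))
    return nz / len(frames)  # assumed Q = Nz / N
```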
when Q is larger than a threshold value, the feedback module feeds back a result to the learning processing module, and the learning processing module adjusts the interframe processing model;
the learning processing module adjusts the length m and the width n of the pixel window according to the received feedback result, the learning processing module increases the values of m and n during the first adjustment, the learning processing module reversely adjusts m and n when the subsequent overall coordination continues to increase, the learning processing module forwardly adjusts m and n when the subsequent overall coordination decreases, the forward adjustment direction is consistent with the adjustment direction of the previous time, and the reverse adjustment direction is opposite to the adjustment direction of the previous time;
When the overall harmony calculated by the feedback module is smaller than the threshold value, the learning processing module stops adjusting, and the video restoration module recombines the processed frame pictures into a video.
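This adjustment policy reads as a one-dimensional direction-keeping search over the window size; a minimal sketch follows, with the step size and stopping threshold as assumed parameters.

```python
# Sketch of the learning module's window-size adjustment (assumed
# step size of 1 and an illustrative threshold; the patent states
# only the direction-keeping/reversing rule and the stopping test).
def tune_window(evaluate, m, n, threshold, step=1, max_rounds=20):
    """evaluate(m, n) -> overall harmony Q for that window size."""
    q_prev = evaluate(m, n)
    direction = +1                      # first adjustment increases m and n
    for _ in range(max_rounds):
        if q_prev < threshold:          # harmonious enough: stop adjusting
            break
        m, n = m + direction * step, n + direction * step
        q = evaluate(m, n)
        if q > q_prev:                  # harmony rose: reverse adjustment
            direction = -direction
        # otherwise harmony fell: forward adjustment, same direction kept
        q_prev = q
    return m, n
```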
Although the invention has been described above with reference to various embodiments, it should be understood that many changes and modifications may be made without departing from the scope of the invention. That is, the methods, systems, and devices discussed above are examples. Various configurations may omit, substitute, or add various procedures or components as appropriate. For example, in alternative configurations, the methods may be performed in an order different than that described, and/or various components may be added, omitted, and/or combined. Moreover, features described with respect to certain configurations may be combined in various other configurations, as different aspects and elements of the configurations may be combined in a similar manner. Further, elements therein may be updated as technology evolves, i.e., many elements are examples and do not limit the scope of the disclosure or claims.
Specific details are given in the description to provide a thorough understanding of the exemplary configurations including implementations. However, configurations may be practiced without these specific details, for example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the configurations. This description provides example configurations only, and does not limit the scope, applicability, or configuration of the claims. Rather, the foregoing description of the configurations will provide those skilled in the art with an enabling description for implementing the described techniques. Various changes may be made in the function and arrangement of elements without departing from the spirit or scope of the disclosure.
In conclusion, it is intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that these examples are illustrative only and are not intended to limit the scope of the invention. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (5)

1. A video image quality enhancement system based on deep learning, characterized by comprising a frame extraction module, a frame enhancement module, an inter-frame enhancement module, a learning processing module, a feedback module and a video restoration module, wherein the frame extraction module processes a video into a plurality of frame pictures, the frame enhancement module performs image quality enhancement on an individual frame picture, the inter-frame enhancement module performs image quality enhancement on a frame picture according to the relation between two adjacent frame pictures, the learning processing module provides the processing model for performing image quality enhancement, the feedback module is used for calculating the overall harmony Q of the processed frame pictures and feeding it back to the learning processing module, the learning processing module improves the processing model according to the feedback result, and the video restoration module is used for recombining the processed frame pictures into a video;
the processing model comprises a frame processing model and an inter-frame processing model; the frame enhancement module executes the frame processing model, and the inter-frame enhancement module executes the inter-frame processing model;
the frame processing model converts the resolution into X0*Y0To be processed frame picture is enlarged to a resolution of X1*Y1Dividing pixel points in the initial frame picture into a plurality of point sets according to whether adjacent pixel point information is the same or not, determining whether to change the pixel point information or not by edge points in the point sets according to the calculated fusion degree Z, wherein the calculation formula of the fusion degree Z is as follows:
[Formula image: the fusion degree Z as a function of n1, n2, n3 and n4; not reproduced in the text]
wherein n1, n2, n3 and n4 respectively denote, among the pixel points in the neighborhood of the edge point, the numbers of non-edge points belonging to the same point set, edge points belonging to the same point set, non-edge points belonging to different point sets, and edge points belonging to different point sets;
when the fusion degree Z is larger than 0, the edge point is kept unchanged, and when the fusion degree Z is smaller than 0, the pixel point information of the edge point is converted into the pixel point information in the adjacent point set;
the interframe processing model obtains gray information of pixels at the same position in adjacent frame pictures by using a pixel point window to obtain two matrixes P1 and P2, and performs the following operation on the two matrixes to obtain a matrix volume difference C:
[Formula image: the matrix volume difference C computed from P1 and P2; not reproduced in the text]
wherein a_ij is an element of the matrix P1, b_ij is an element of the matrix P2, and m and n are respectively the length and width of the pixel point window;
the pending gray-scale variation Δ of the pixel points corresponding to the peripheral elements of the matrix P1 is calculated from the matrix volume difference C; after the pixel point window has traversed the adjacent frame pictures, the pending variations Δ of each pixel point are summed to obtain its correction Δ′, and the inter-frame processing model performs gray-scale change processing on all pixel points according to the corrections Δ′.
2. The system as claimed in claim 1, wherein the frame processing model copies the pixel point information at (a, b) in the frame picture to be processed to the pixel point at (c, d) in the initial frame picture, where a, b, c and d satisfy the following condition:
[Formula image: the condition relating a, b, c and d; not reproduced in the text]
3. the system of claim 2, wherein the neighboring region is a region formed by pixels with coordinate distances not exceeding 4.
4. The system as claimed in claim 3, wherein the formula for calculating the pending gray-scale variation Δ is:
[Formula image: the pending gray-scale variation Δ; not reproduced in the text]
5. the system of claim 4, wherein the overall harmony Q is calculated by the following formula:
[Formula image: the overall harmony Q computed from Nz and N; not reproduced in the text]
wherein Nz is the number of gray-value anomalies of the pixel points across all frame pictures, and N is the number of frame pictures.
CN202111495910.8A (filed 2021-12-09, priority 2021-12-09): Video image quality enhancement system based on deep learning. Status: Active. Granted publication: CN113902651B.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111495910.8A | 2021-12-09 | 2021-12-09 | Video image quality enhancement system based on deep learning

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111495910.8A | 2021-12-09 | 2021-12-09 | Video image quality enhancement system based on deep learning

Publications (2)

Publication Number | Publication Date
CN113902651A | 2022-01-07
CN113902651B | 2022-02-25

Family

ID=79025445

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111495910.8A | Video image quality enhancement system based on deep learning (Active, granted as CN113902651B) | 2021-12-09 | 2021-12-09

Country Status (1)

Country Link
CN (1) CN113902651B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110279715A1 (en) * 2010-05-12 2011-11-17 Hon Hai Precision Industry Co., Ltd. Blemish detection sytem and method
CN102542536A (en) * 2011-11-18 2012-07-04 上海交通大学 Image quality strengthening method based on generalized equilibrium model
US20180025251A1 (en) * 2016-07-22 2018-01-25 Dropbox, Inc. Live document detection in a captured video stream
US20180374203A1 (en) * 2016-02-03 2018-12-27 Chongqing University Of Posts And Telecommunications Methods, systems, and media for image processing
CN110415202A (en) * 2019-07-31 2019-11-05 浙江大华技术股份有限公司 A kind of image interfusion method, device, electronic equipment and storage medium
CN112019762A (en) * 2020-07-23 2020-12-01 北京迈格威科技有限公司 Video processing method and device, storage medium and electronic equipment
CN113115075A (en) * 2021-03-23 2021-07-13 广州虎牙科技有限公司 Method, device, equipment and storage medium for enhancing video image quality
CN113409435A (en) * 2020-03-15 2021-09-17 英特尔公司 Apparatus and method for performing non-local mean filtering using motion estimation circuitry of a graphics processor
CN113706393A (en) * 2020-05-20 2021-11-26 武汉Tcl集团工业研究院有限公司 Video enhancement method, device, equipment and storage medium
CN113763296A (en) * 2021-04-28 2021-12-07 腾讯云计算(北京)有限责任公司 Image processing method, apparatus and medium

Also Published As

Publication Number | Publication Date
CN113902651B | 2022-02-25

Similar Documents

Publication | Title
JP4817246B2 (en) Objective video quality evaluation system
CN108495135B (en) Quick coding method for screen content video coding
TWI432034B (en) Multi-view video coding method, multi-view video decoding method, multi-view video coding apparatus, multi-view video decoding apparatus, multi-view video coding program, and multi-view video decoding program
Park et al. Fast multi-type tree partitioning for versatile video coding using a lightweight neural network
CN105472205B (en) Real-time video noise reduction method and device in encoding process
EP3952307A1 (en) Video processing apparatus and processing method of video stream
CN107580222B (en) Image or video coding method based on linear model prediction
US20200007872A1 (en) Video decoding method, video decoder, video encoding method and video encoder
JPWO2010095471A1 (en) Multi-view image encoding method, multi-view image decoding method, multi-view image encoding device, multi-view image decoding device, multi-view image encoding program, and multi-view image decoding program
US20130188691A1 (en) Quantization matrix design for hevc standard
WO2022257759A1 (en) Image banding artifact removal method and apparatus, and device and medium
US20130147910A1 (en) Mobile device and image capturing method
Hadizadeh et al. Video error concealment using a computation-efficient low saliency prior
US20240054786A1 (en) Video stream manipulation
CN108600783B (en) Frame rate adjusting method and device and terminal equipment
KR101622363B1 (en) Method for detection of film mode or camera mode
US20130279598A1 (en) Method and Apparatus For Video Compression of Stationary Scenes
CN110944176B (en) Image frame noise reduction method and computer storage medium
US20140056354A1 (en) Video processing apparatus and method
US8406305B2 (en) Method and system for creating an interpolated image using up-conversion vector with uncovering-covering detection
EP4304172A1 (en) Deblocking filter method and apparatus
Zhang et al. Additive log-logistic model for networked video quality assessment
US10798418B2 (en) Method and encoder for encoding a video stream in a video coding format supporting auxiliary frames
Kazemi et al. A review of temporal video error concealment techniques and their suitability for HEVC and VVC
US8520973B2 (en) Method for restoring transport error included in image and apparatus thereof

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant