WO2013143396A1 - Digital video quality control method and apparatus - Google Patents

Digital video quality control method and apparatus

Info

Publication number
WO2013143396A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
threshold
video quality
quality evaluation
motion vector
Prior art date
Application number
PCT/CN2013/072580
Other languages
English (en)
French (fr)
Inventor
梅海波
Original Assignee
中国移动通信集团公司 (China Mobile Communications Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国移动通信集团公司 (China Mobile Communications Corporation)
Publication of WO2013143396A1 publication Critical patent/WO2013143396A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N17/004: Diagnosis, testing or measuring for digital television systems
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements using adaptive coding
    • H04N19/102: Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/134: Adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/154: Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion

Definitions

  • the present invention relates to digital video technology in the field of communications, and more particularly to a digital video quality control method and apparatus therefor.
  • Compression codec standards such as H.264, MPEG-2 (where MPEG is the abbreviation of Moving Picture Experts Group), MPEG-4, JPEG2000 and the Audio Video coding Standard (AVS) have been established; they can achieve a high compression ratio while maintaining good image quality.
  • Subjective measurement determines the quality of the codec system under test directly from the observers' perception of it.
  • Subjective evaluation requires many people to participate in monitoring video quality, and because human judgment of video quality is subjective, the same impairment of the same video can yield widely differing conclusions; its accuracy and practicality are poor.
  • Subjective measurement methods are time-consuming, costly, and have poor stability and portability, making them unsuitable for real-time video quality measurement.
  • Embodiments of the present invention provide a digital video quality control method and apparatus thereof for implementing objective evaluation of digital video quality, thereby improving the effectiveness of digital video quality control.
  • the video encoder is instructed to adjust the characteristic parameters of the video data to improve the video quality.
  • a monitoring module configured to extract, according to a video quality monitoring period, a feature parameter of the video data encoded by the video encoder
  • a quality evaluation module configured to input a feature parameter of the extracted video data into a neural network, and obtain a video quality evaluation parameter of the video data; wherein, the neural network outputs the video data according to the input video data feature parameter Video quality evaluation parameters;
  • the control module is configured to determine whether the video quality evaluation parameter satisfies the preset condition, and if yes, instruct the video encoder to adjust the feature parameter of the video data to improve the video quality.
  • In the embodiments of the present invention, on the one hand, video quality evaluation is performed by a neural network, which improves evaluation efficiency and reduces the influence of subjective factors compared with subjective video quality evaluation; on the other hand, the embodiments cover digital video quality monitoring and feed the monitoring results back to the service front-end equipment, finally adjusting the coding parameters, thereby dynamically optimizing video quality and providing an effective guarantee for video service quality.
  • FIG. 1 is a schematic diagram of a neural network training process according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of an intra-frame and non-intra-frame quantization matrix according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a video quality control process according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a video quality control apparatus according to an embodiment of the present invention.
  • The impairment types of mobile TV compressed video differ from those of ordinary high-definition and standard-definition digital video.
  • In compression coding systems based on the Discrete Cosine Transform (DCT), the transform is applied blockwise: the image is first divided into 8 x 8 pixel blocks, and each block is DCT-transformed to obtain 64 DCT coefficients. Although this greatly reduces the amount of computation, the quantization step of the DCT pipeline is lossy, so it may introduce a variety of image quality impairments: blockiness, image blur, noise, chrominance distortion, ringing and so on.
  • the above image damage types are common in high-definition digital compression video.
  • Existing subjective and objective video quality evaluation methods are all aimed at identifying the above kinds of impairment.
  • For mobile TV video formats, however, the resolution is so low that the human eye can hardly recognize these impairment types in subjective quality evaluation.
  • It is therefore necessary to propose a mobile TV video analysis technique that is simple to implement and effective.
  • For low-resolution video, an embodiment of the present invention proposes a video quality evaluation scheme and a video quality control scheme based on it.
  • Digital video features are quickly inspected and analyzed in consideration of human visual characteristics, so that the accuracy of the digital video evaluation results is significantly better than that of existing digital video evaluation schemes.
  • The objective video quality evaluation scheme of the embodiments is based on a neural network. That is, a suitable neural network for evaluating video quality is obtained through training; during digital video quality control, the feature values of the digital video are extracted in real time and input to the neural network, and the output result is the quality evaluation parameter of the digital video.
  • This enables rapid and effective evaluation of digital video, and further control measures are taken according to the video quality evaluation result to ensure video quality.
  • An Artificial Neural Network (ANN) is an information processing system designed to imitate the structure and function of the human brain.
  • A neural network has the following characteristics: information processing takes place among a large number of simple processing units (neurons), and signals are transmitted between neurons through the connections between them. Each connection has a weight, whose value is usually multiplied with the input signal, and each neuron applies an activation function to the weighted sum of its inputs to produce its output signal.
  • Neural networks employ parallel processing: sample data are learned with multiple objectives, and control is achieved through the interaction of neurons. Neural networks are suited to inexact processing and can process large-scale data in parallel.
  • the neural network obtains a data processing model through sample training.
  • the neural network in the embodiment of the present invention refers to a video quality evaluation model.
  • An embodiment of the present invention pre-establishes a sample library containing a large number of encoded video sequences and corresponding video quality evaluation parameters, which are obtained by subjective evaluation of the video sequences.
  • Subjective evaluation is performed on each video sequence used as video material, and the result is stored in the sample library with a correspondence to the respective video sequence.
  • Feature parameters may also be extracted from the video sequences used for subjective evaluation and stored in the sample library in correspondence with the respective sequences.
  • During training, these feature parameters serve as the inputs of the neural network (the input layer accordingly has multiple nodes), the subjective evaluation result of the same video sequence serves as the corresponding expected output, and a learning algorithm is used to train the network.
  • The goal of training the neural network is to make the objective evaluation of video quality approach the subjective evaluation results.
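The sample library described above can be sketched as a small data structure: each encoded sequence is stored with its subjective score and, optionally, its pre-extracted feature parameters. All names below (`Sample`, `SampleLibrary`, the feature keys) are illustrative assumptions; the patent does not prescribe any storage format.

```python
from dataclasses import dataclass, field


@dataclass
class Sample:
    sequence_id: str                 # identifier of the encoded video sequence
    subjective_score: float          # subjective evaluation result (expected output)
    features: dict = field(default_factory=dict)  # e.g. quantizer average, P-frame MV


class SampleLibrary:
    """Toy sample library: sequences keyed by id, yielding training pairs."""

    def __init__(self):
        self._samples = {}

    def add(self, sample: Sample):
        self._samples[sample.sequence_id] = sample

    def training_pairs(self):
        """Yield (feature_vector, expected_output) pairs for network training."""
        for s in self._samples.values():
            x = [s.features.get("avg_quantizer_scale", 0.0),
                 s.features.get("p_frame_avg_motion_vector", 0.0)]
            yield x, s.subjective_score


lib = SampleLibrary()
lib.add(Sample("seq-001", 72.0,
               {"avg_quantizer_scale": 6.5, "p_frame_avg_motion_vector": 3.2}))
pairs = list(lib.training_pairs())
print(pairs)  # [([6.5, 3.2], 72.0)]
```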
  • FIG. 1 is a schematic diagram of a neural network training process according to an embodiment of the present invention. As shown in the figure, the process may include:
  • Step 101: Extract feature parameters of a video sequence from the sample library;
  • Step 102: Input the extracted video sequence feature parameters into the neural network;
  • Step 103: Select the corresponding video sequence from the sample library (i.e. the video sequence of step 101) and subjectively evaluate its video quality to obtain a video quality evaluation parameter (the parameter may also have been evaluated in advance and recorded in the sample library, in which case it can be obtained directly from the sample library);
  • Step 104: Transmit the video quality evaluation parameter of the video sequence to the neural network;
  • Step 105: The neural network learns the relationship between the video quality evaluation parameter obtained in step 104 and the feature parameters obtained in step 102, thereby accomplishing the training of the neural network.
  • the embodiments of the present invention can train the neural network periodically or irregularly according to needs.
  • the appearance of blockiness is mainly caused by the quantization error after block quantization.
  • the "block effect" has different performances, distinguishing different types of block effects and correspondingly using different It is important that the method is processed.
  • Ladder noise appears at strong edges of the image. When the high-order DCT coefficients are quantized to zero, the high-frequency components associated with strong edges are not fully represented in the transform domain, so the continuity of strong edges across block boundaries cannot be guaranteed, producing jagged noise at the edges of the image. This noise is called "ladder noise".
  • Lattice noise appears in flat regions of the image. In the transform domain, the DC component reflects the average brightness of a block and contains most of the block's energy, so brightness changes within a flat region are small. If, however, the brightness in a flat region increases or decreases slightly, the DC component may cross the decision threshold of an adjacent quantization level, producing an abrupt brightness change at the block boundary in the reconstructed image, which appears as patch-like contours in the flat region. This noise is called "lattice noise".
  • The embodiment of the present invention therefore selects the average quantization factor and the P-frame average motion vector as video sequence feature parameters, which reflect digital video compression quality well.
  • The quantization strategy of digital video compression is a relatively mature technique. Taking the characteristics of human vision into account, quantization is completed in two steps: first the coefficients are processed with a visual quantization matrix, and then they are processed a second time with the quantization factor. In the first step, the visual quantization matrix (shown in Figure 2) is applied to the DCT coefficients; because human vision is insensitive to high-frequency data, larger values are chosen for the high-frequency positions of the matrix, removing visual redundancy.
  • In the second step, the quantization factor quantizer_scale (obtained from the bit-rate control algorithm) is applied to control the output bit rate, and the final quantized result y is obtained by the following Formula 2:

    y = (x × M(quantizer_scale)) >> n    [2]

    where ">>" is the right-shift operator, n is the number of bits shifted, x is the coefficient after visual-matrix weighting, and M(quantizer_scale) is an integer multiplier derived from the quantization factor.
  • the quantization factor becomes the main factor affecting the high-frequency coefficients.
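The two-step quantization described above can be illustrated with a deliberately simplified integer sketch: the coefficient is weighted by its visual-matrix entry, then divided by the quantizer scale, and the division is the lossy step that wipes out heavily weighted high-frequency coefficients. The constants and the `(16 * coeff) // (weight * quantizer_scale)` form are illustrative assumptions, not the exact bitstream-level formula of any codec.

```python
def quantize(coeff, weight, quantizer_scale):
    """Simplified two-step quantization: visual-matrix weighting, then
    quantizer-scale division (integer division models the lossy step)."""
    return (16 * coeff) // (weight * quantizer_scale)


# A block typically has a large low-frequency coefficient (small matrix
# weight) and small high-frequency coefficients (large matrix weight).
low_freq = quantize(coeff=400, weight=16, quantizer_scale=8)   # survives
high_freq = quantize(coeff=12, weight=40, quantizer_scale=8)   # quantized to 0
print(low_freq, high_freq)  # → 50 0
```

This is why a larger quantizer scale costs high-frequency detail first: the product `weight * quantizer_scale` exceeds the weighted coefficient and the integer division yields zero.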
  • Image impairments such as blockiness, image blur and mosquito noise derive from the variable quantization step size used during quantization, which causes loss of high-frequency coefficients. Therefore the quantization factor of every macroblock is extracted from the digitally compressed video stream and averaged, as in the following Formula 5:

    average_of_quantizer_scale = (1 / N) × Σ quantizer_scale(i),  i = 1 … N    [5]

    where N is the number of macroblocks.
  • The resulting average_of_quantizer_scale is the average of the quantization factors. In general, the lower the average quantization factor of a video, the fewer DCT high-frequency coefficients are lost during quantization, and the better the quality of the compressed video.
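Formula 5 is a plain arithmetic mean, so its implementation is direct. The macroblock list below is a stand-in for quantizer_scale values parsed from a real bitstream; the parsing itself is out of scope here.

```python
def average_of_quantizer_scale(quantizer_scales):
    """Formula 5: mean quantizer scale over all macroblocks of the stream."""
    if not quantizer_scales:
        raise ValueError("no macroblocks")
    return sum(quantizer_scales) / len(quantizer_scales)


# quantizer_scale of each macroblock, e.g. as extracted from slice headers
mb_qscales = [4, 6, 8, 6, 4, 8, 10, 6]
print(average_of_quantizer_scale(mb_qscales))  # → 6.5
```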
  • the "motion vector” reflects the degree of motion of the current image relative to the reference image, and the block matching method is the most commonly used method in motion estimation.
  • The magnitude of the motion vector of the macroblock at position (i, j) within the frame is

    v(i, j) = sqrt(Vx(i,j)² + Vy(i,j)²)    [6]

    where (Vx(i,j), Vy(i,j)) is the motion vector of the (i, j) macroblock; for a macroblock with no motion vector, v(i, j) = 0.
  • The embodiment of the present invention makes the following modification: only valid macroblocks are counted, i.e. only macroblocks that actually produce a motion vector are counted, and macroblocks whose motion vector is 0 are excluded. This avoids the problems mentioned above and improves the accuracy of the calculation.
  • The minimum unit of motion estimation is the macroblock (16 x 16 pixels), so correlation between image macroblocks arises easily during compression encoding. Especially for video sequences with rich high-frequency detail, blockiness impairment occurs easily if the images also contain fast motion. The P-frame average motion vector is therefore also an important parameter reflecting the degree of compressed-video impairment.
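Formula 6 together with the valid-macroblock modification above can be sketched as follows: the mean of the motion-vector magnitudes of one P frame, computed only over macroblocks whose motion vector is nonzero. The input list is a stand-in for vectors parsed from a real stream.

```python
import math


def p_frame_avg_motion_vector(motion_vectors):
    """Average motion-vector magnitude of one P frame (Formula 6),
    counting only valid macroblocks (nonzero motion vectors)."""
    magnitudes = [math.hypot(vx, vy)
                  for vx, vy in motion_vectors
                  if (vx, vy) != (0, 0)]
    if not magnitudes:
        return 0.0  # no valid macroblocks in this frame
    return sum(magnitudes) / len(magnitudes)


mvs = [(3, 4), (0, 0), (6, 8), (0, 0)]   # the two zero vectors are excluded
print(p_frame_avg_motion_vector(mvs))    # → 7.5, i.e. (5 + 10) / 2
```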
  • During quality evaluation, feature parameters are extracted in the same way from the encoded video sequence and input to the input layer of the neural network, and the objective evaluation result of the video sequence is obtained at the output node of the neural network.
  • FIG. 3 is a schematic diagram of a video quality control process based on a neural network according to an embodiment of the present invention. As shown in the figure, the process may include:
  • Step 301: Extract, according to the video quality monitoring period, the feature parameters of the video sequence encoded by the video encoder.
  • The length of the video quality monitoring period can be preset as required. For example, when the video quality requirement is high, the monitoring period can be set shorter, such as 1 minute; when the video quality requirement is not high, or when the quality control operation should not incur too much resource overhead, the monitoring period can be set longer.
  • the feature parameters extracted here are the same as those extracted during the training of the neural network, including the average of the quantization factor and the average motion vector of the P frame.
  • Step 302: Input the extracted video sequence feature parameters into the neural network, and obtain the video quality evaluation parameter as the output of the output node.
  • Step 303: Determine whether the video quality evaluation parameter is lower than a threshold; if it is, instruct the video encoder to adjust the feature parameters of the video data to improve the video quality.
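Steps 301-303 form a periodic monitoring loop. The sketch below shows only the control flow; `extract_features`, `evaluate` and the encoder interface are placeholders for the components described in this document, and the feature names are assumptions.

```python
def monitor(extract_features, evaluate, encoder, threshold=60, iterations=1):
    """Run the Step 301-303 loop: extract features, evaluate quality,
    and alert the encoder when the score drops below the threshold.
    A real deployment would sleep for the monitoring period between
    iterations instead of running a fixed count."""
    alerts = 0
    for _ in range(iterations):
        features = extract_features()          # Step 301
        score = evaluate(features)             # Step 302
        if score < threshold:                  # Step 303
            encoder.adjust(features)
            alerts += 1
    return alerts


class FakeEncoder:
    """Stand-in for the video encoder's adjustment interface."""

    def __init__(self):
        self.adjusted = []

    def adjust(self, features):
        self.adjusted.append(features)


enc = FakeEncoder()
n = monitor(lambda: {"avg_qscale": 9.0, "p_mv": 12.0},  # extracted features
            lambda f: 45.0,                              # evaluated score
            enc, threshold=60, iterations=1)
print(n)  # → 1 alert, since 45.0 < 60
```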
  • The video quality evaluation parameter may be directly proportional to video quality (the larger the value, the higher the quality) or inversely proportional to it (the smaller the value, the higher the quality).
  • The following description takes the case where the video quality evaluation parameter is directly proportional to video quality as an example.
  • The output of the neural network is usually a video quality score; for example, scores range from 1 to 100, from low to high video quality.
  • the video quality score can be quantified into several video quality levels, and a corresponding video quality control strategy can be developed for each video quality level.
  • For example, by setting the thresholds 40, 60 and 80, the score range [1, 100] is divided into four levels: 1-40 is level 1, indicating extremely poor video quality;
  • 41-60 is level 2, indicating poor video quality; 61-80 is level 3, indicating good video quality; and 81-100 is level 4, indicating excellent video quality. Because the human eye perceives video quality nonuniformly, the quality levels are not evenly spaced.
  • the value of the above threshold is only a preferred example, and the present invention is not limited thereto.
  • When the video quality evaluation value is 1-40 points, i.e. quality level 1, the video encoder may be alerted, the current average quantization factor and P-frame average motion vector of the video may be synchronized to the video encoder, and the encoder may be instructed to adjust the corresponding coding parameters to improve the video quality.
  • Specifically, the video encoder may be instructed to reduce the average quantization factor of the video during compression encoding and to reduce the P-frame average motion vector.
  • In this case the video encoder determines that the currently played video has a serious quality problem; it may temporarily pause the playback of the current digital video and adjust the corresponding coding parameters according to the indication to improve the video quality.
  • When the video quality evaluation value is 41-60 points, i.e. quality level 2, the video encoder may be alerted, the current average quantization factor and P-frame average motion vector may be synchronized to the video encoder, and the encoder may be instructed to adjust the corresponding coding parameters to improve the video quality; specifically, it may be instructed to reduce the P-frame average motion vector. In this case the video encoder determines that the currently played video has a significant quality problem, but there is no need to stop the playback of the current digital video, and the corresponding coding parameters can be adjusted according to the instruction.
  • When the video quality evaluation value is 61-80 points, i.e. quality level 3, the current video plays in good condition and there is no need to alert the video encoder; however, because there is a risk of quality degradation, the video quality monitoring period can be shortened so that the quality of the digital video played on the live network is monitored closely at high frequency.
  • When the video quality evaluation value is 81-100 points, i.e. quality level 4, the current video plays in excellent condition; there is no need to alert the video encoder, and the video quality monitoring period can be extended.
  • It is also possible to set only the threshold 60: when the video quality evaluation parameter is lower than 60, the video encoder is alerted, the current average quantization factor and P-frame average motion vector are synchronized to the video encoder, and the encoder is instructed to reduce the average quantization factor and/or the P-frame average motion vector. It is likewise possible to set only the thresholds 40 and 60; when the video quality evaluation parameter is lower than 40, or between 40 and 60, the specific control method is as described above.
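The 40/60/80 threshold scheme above amounts to a small control-policy table. The sketch below maps a score to its level's action; the action strings are shorthand summaries of the behaviors described in the text, not prescribed identifiers.

```python
def control_action(score):
    """Map a video quality score (1..100) to the control strategy of its
    level under the 40/60/80 thresholds described above."""
    if score <= 40:    # level 1: extremely poor
        return "alert: reduce avg quantizer scale and P-frame avg MV; may pause playback"
    if score <= 60:    # level 2: poor
        return "alert: reduce P-frame avg MV"
    if score <= 80:    # level 3: good
        return "no alert; shorten monitoring period"
    return "no alert; extend monitoring period"   # level 4: excellent


for s in (35, 55, 70, 95):
    print(s, "->", control_action(s))
```

Note the asymmetry the text motivates: only the two lower levels alert the encoder, while the upper two levels merely retune how often monitoring runs.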
  • The foregoing solution of the embodiment of the present invention can be implemented on a mobile terminal device and applied to a mobile TV service, realizing monitoring and feedback of mobile TV video quality in the manner described above.
  • the video sequence sent by the network side may be buffered, and the cached video data is decoded and played according to the existing manner.
  • The feature values of the currently played video sequence are extracted according to the video quality monitoring period and input to the neural network, which produces the video quality evaluation result; the video quality control strategy is determined according to the evaluation result and fed back to the network side, so that the video encoder on the network side adjusts the coding parameters according to the feedback from the mobile terminal to ensure video quality.
  • The neural network on the mobile terminal is a video quality evaluation model; a trained neural network can be downloaded from the network side to the mobile terminal to reduce the overhead of training the neural network on the terminal.
  • The foregoing solution may also be implemented on a network side device. Taking the mobile TV service as an example, the feature values of the video sequence are extracted according to the video quality monitoring period and input to the neural network to obtain the video quality evaluation result;
  • the video quality control strategy is determined according to the evaluation result and fed back to the video encoder, so that the encoder adjusts the coding parameters according to the feedback to ensure video quality.
  • the embodiment of the present invention further provides a video quality control apparatus, which may be implemented on a terminal device, or may be implemented on a network side device, or may be an independently set device.
  • FIG. 4 is a schematic structural diagram of a video quality control apparatus according to an embodiment of the present invention.
  • the video quality control device can include a monitoring module 401, a quality evaluation module 402, and a control module 403, where:
  • the monitoring module 401 is configured to extract, according to a video quality monitoring period, a feature parameter of the video data encoded by the video encoder.
  • the feature parameter includes a quantization factor average value and a P frame average motion vector, and the specific meaning thereof is as described above;
  • the quality evaluation module 402 is configured to input the feature parameters of the extracted video data into the neural network, and obtain a video quality evaluation parameter of the video data, where the neural network outputs the video according to the input video data feature parameter. Video quality evaluation parameters of the data;
  • The control module 403 is configured to determine whether the video quality evaluation parameter is lower than a threshold and, when it is, to instruct the video encoder to adjust the feature parameters of the video data to improve the video quality.
  • the threshold includes a first threshold (such as 40) and a second threshold (such as 60), wherein the first threshold is lower than the second threshold.
  • The control module 403 is specifically configured to: when the video quality evaluation parameter is lower than the first threshold, synchronize the average quantization factor and the P-frame average motion vector of the current video data to the video encoder and instruct the encoder to reduce both; and when the video quality evaluation parameter is higher than the first threshold but lower than the second threshold, synchronize the P-frame average motion vector of the current video data to the digital video encoder and instruct the encoder to reduce the P-frame average motion vector.
  • the threshold further includes a third threshold (such as 80), wherein the third threshold is higher than the second threshold.
  • The control module is further configured to: when the video quality evaluation parameter is higher than the second threshold but lower than the third threshold, shorten the video quality monitoring period; and when it is higher than the third threshold, extend the video quality monitoring period.
  • The apparatus may further include a neural network training module 404, configured to extract the feature parameters of each training video sequence and obtain the video quality evaluation parameters of the corresponding training video sequences, and to train the neural network using the feature parameters of each training video sequence and the video quality evaluation parameters of the corresponding sequences.
  • Alternatively, the neural network training module 404 may be omitted, and an already trained neural network may be obtained by downloading; in that case the apparatus provides a corresponding interface module for downloading the neural network.
  • In summary, in the embodiments of the present invention, video quality evaluation is performed by the neural network, which improves evaluation efficiency and reduces the influence of subjective factors compared with subjective video quality evaluation;
  • in view of the characteristics of low-resolution video, neural network training and video quality evaluation are based on the average quantization factor of the video data and the P-frame average motion vector, making the embodiments suitable for small-screen digital video quality analysis and control, such as mobile TV and mobile video services;
  • and the embodiments cover digital video quality monitoring, feed the results back to the service front-end equipment in time, and finally adjust the coding parameters, dynamically optimizing video quality and providing an effective guarantee for video service quality.
  • The modules of the apparatus in the embodiments may be distributed in the apparatus as described, or may, with corresponding changes, be located in one or more apparatuses different from those of the embodiments.
  • The modules of the above embodiments may be combined into one module or further split into a plurality of sub-modules.
  • The present invention can be implemented by means of software plus a necessary general hardware platform, and of course also by hardware; in many cases, however, the former is the better implementation.
  • Based on this understanding, the technical solution of the present invention, or the part that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including a number of instructions for causing a terminal device (which may be a mobile phone, a personal computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

A digital video quality control method and apparatus, comprising: extracting, according to a video quality monitoring period, feature parameters of video data encoded by a video encoder; inputting the extracted feature parameters of the video data into a neural network to obtain a video quality evaluation parameter as the output result; and determining whether the video quality evaluation parameter satisfies a preset condition and, if so, instructing the video encoder to adjust the feature parameters of the video data to improve the video quality.

Description

Digital video quality control method and apparatus

This application claims priority to Chinese patent application No. 201210088123.6, filed with the Chinese Patent Office on March 28, 2012 and entitled "Digital video quality control method and apparatus", the entire contents of which are incorporated herein by reference.
Technical Field

The present invention relates to digital video technology in the field of communications, and in particular to a digital video quality control method and an apparatus therefor.

Background Art

Digital video technology has developed vigorously in recent years. Standardized compression codecs such as H.264, MPEG-2 (where MPEG is the abbreviation of Moving Picture Experts Group), MPEG-4, JPEG2000 and the Audio Video coding Standard (AVS) can achieve very high compression ratios while maintaining good image quality. At high compression ratios, however, varying degrees of image quality impairment are introduced.

Current video quality measurement methods include subjective measurement, in which the observers' direct perception of the codec system under test determines the quality of the system. Subjective evaluation requires many people to participate in monitoring video quality, and because human judgment of video quality is subjective, the same impairment of the same video can yield widely differing conclusions; its accuracy and practicality are poor. Subjective measurement is time-consuming, costly, and has poor stability and portability, making it unsuitable for real-time video quality measurement.

Because no timely and effective digital video quality control scheme is currently available, video quality cannot be controlled in time when the playback quality of broadcast or streaming digital video deteriorates on a large scale, which in turn affects the use of digital video services.
Summary of the Invention

Embodiments of the present invention provide a digital video quality control method and an apparatus therefor, for objectively evaluating digital video quality and thereby improving the effectiveness of digital video quality control.

The digital video quality control method provided by an embodiment of the present invention includes:

extracting, according to a video quality monitoring period, feature parameters of video data encoded by a video encoder; inputting the extracted feature parameters of the video data into a neural network to obtain a video quality evaluation parameter as the output result; and

determining whether the video quality evaluation parameter satisfies a preset condition and, if so, instructing the video encoder to adjust the feature parameters of the video data to improve the video quality.

The digital video quality control apparatus provided by an embodiment of the present invention includes:

a monitoring module configured to extract, according to a video quality monitoring period, feature parameters of video data encoded by a video encoder;

a quality evaluation module configured to input the extracted feature parameters of the video data into a neural network and obtain a video quality evaluation parameter of the video data, wherein the neural network outputs the video quality evaluation parameter of the video data according to the input feature parameters; and

a control module configured to determine whether the video quality evaluation parameter satisfies a preset condition and, if so, to instruct the video encoder to adjust the feature parameters of the video data to improve the video quality.

In the above embodiments of the present invention, on the one hand, video quality evaluation is performed by a neural network, which, compared with subjective video quality evaluation, improves evaluation efficiency and reduces the influence of subjective factors; on the other hand, the embodiments cover digital video quality monitoring and feed the monitoring results back to the service front-end equipment, finally adjusting the coding parameters, thereby dynamically optimizing video quality and providing an effective guarantee for video service quality.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of a neural network training process according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the intra and non-intra quantization matrices in an embodiment of the present invention;
Fig. 3 is a schematic diagram of a video quality control process according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a video quality control apparatus according to an embodiment of the present invention.
Detailed Description
Although objective video quality evaluation schemes already exist, they are designed for high-resolution video and are not applicable to low-resolution formats such as mobile TV video. Most current subjective and objective digital video evaluation methods target standard-definition video (SD: 720 × 540 resolution) or high-definition video (HD: 1920 × 1080 resolution), whereas mainstream mobile TV video is compressed with the H.264 or AVS standard into QVGA format at a resolution of only 320 × 240. Moreover, the vast majority of video evaluation methods cannot run in real time: they are time-consuming and highly complex, and cannot provide instant monitoring of mobile video quality.
In addition, the impairment types of compressed mobile TV video differ from those of ordinary HD and SD digital video. In compression coding systems based on the Discrete Cosine Transform (DCT), the DCT is block-based: the image is first divided into 8 × 8 pixel blocks, and each block is transformed to obtain 64 DCT coefficients. This greatly reduces the amount of computation, but the quantization step of the DCT process is lossy and may therefore introduce various image impairments: blocking artifacts, blurring, noise, chrominance distortion, ringing, and so on. These impairment types are common in HD and SD compressed digital video, and existing subjective and objective evaluation methods are devoted to identifying them. At mobile TV video resolutions, however, the resolution is so low that the human eye can hardly recognize these impairment types in subjective evaluation. It is therefore necessary to propose a mobile TV video analysis technique that is simple to implement yet effective.
For low-resolution video, embodiments of the present invention propose a video quality evaluation scheme and, based on it, a video quality control scheme. The embodiments consider the coding characteristics of low-bit-rate video together with the characteristics of human vision, and quickly inspect and analyze digital video features, so that the accuracy of the evaluation results is clearly better than that of existing digital video evaluation schemes.
The objective video quality evaluation scheme of the embodiments of the present invention is based on a neural network. That is, a suitable neural network for evaluating video quality is obtained through training; during digital video quality control, the feature values of the digital video are extracted in real time and fed into the neural network as input parameters, and the output is the quality evaluation parameter of that digital video. This enables fast and effective evaluation of digital video, and corresponding control measures can further be taken according to the evaluation result to guarantee video quality.
An Artificial Neural Network (ANN) is an information processing system designed to imitate the structure and functions of the human brain. It is essentially a highly complex, large-scale nonlinear adaptive system composed of a large number of simple processing units, whose behavior depends on the network structure, the connection strengths, and the processing performed by each unit.
A neural network has the following characteristics: information processing takes place among a large number of simple processing units (called neurons); signals are passed between neurons through their connections; each connection carries a weight, whose value is usually multiplied by the input signal; and each neuron applies an activation function to the weighted sum of its inputs to produce its output signal. Neural networks process in parallel, learning multiple objectives from sample data and achieving control through the interaction of neurons. They are suitable for inexact processing and can simulate the parallel processing of large-scale data.
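The weighted-sum-plus-activation behavior of a single processing unit described above can be sketched as follows. This is an illustration, not the patent's implementation: the sigmoid activation and the example values are assumptions chosen for demonstration.

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    """One processing unit: each input is multiplied by its connection
    weight, the weighted inputs are summed, and an activation function
    (here a sigmoid, one common choice) maps the sum to the output."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid activation
```

For example, `neuron_output([0.5, 0.2], [0.4, -0.3])` yields a value strictly between 0 and 1, and a zero weighted sum maps to exactly 0.5.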
A neural network obtains its data processing model through sample training; in the embodiments of the present invention, the neural network is the video quality evaluation model. For training, a sample library is established in advance, containing a large number of encoded video sequences and the corresponding video quality evaluation parameters obtained by subjective evaluation of those sequences. In a specific implementation, when building the sample library, each video sequence used as material is first evaluated subjectively, and the results are stored in the library in association with the corresponding sequences. Further, feature parameters may be extracted from the sequences used in subjective evaluation and stored in the library in association with the corresponding sequences. During training, these feature parameters serve as the network inputs, so the input layer has a corresponding number of nodes; the subjective evaluation result of the same sequence serves as the expected output, and a learning algorithm is used to train the network. The goal of training is to make the objective evaluation of video quality approach the subjective evaluation.
Referring to Fig. 1, a schematic diagram of a neural network training process provided by an embodiment of the present invention, the process may include:
Step 101: extracting the feature parameters of a video sequence from the sample library;
Step 102: inputting the extracted feature parameters into the neural network;
Step 103: selecting the corresponding video sequence from the sample library (i.e. the sequence of Step 101) and subjectively evaluating its quality to obtain a video quality evaluation parameter (the evaluation parameter may also be estimated in advance and recorded in the sample library, in which case it can be obtained directly from the library);
Step 104: transmitting the video quality evaluation parameter of the sequence to the neural network; Step 105: the neural network computes the relationship between the feature parameters obtained in Step 102 and the video quality evaluation parameter obtained in Step 104, thereby accomplishing the training. The embodiments may train the network periodically or on demand as required.
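Steps 101 to 105 can be sketched in miniature as follows. This is a hedged illustration: a single linear unit stands in for the neural network, and the function names, learning rate and epoch count are assumptions for the example, not values specified by the patent.

```python
def train_quality_model(samples, lr=0.01, epochs=500):
    """Each sample pairs a sequence's feature parameters (average
    quantization factor, P-frame average motion vector) with its
    subjective quality score, as in the sample library. Gradient
    descent drives the output toward the expected (subjective) score."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (q_avg, mv_avg), target in samples:
            pred = w[0] * q_avg + w[1] * mv_avg + b
            err = pred - target
            # step the weights toward the subjective evaluation result
            w[0] -= lr * err * q_avg
            w[1] -= lr * err * mv_avg
            b -= lr * err
    return w, b

def predict(model, q_avg, mv_avg):
    """Objective evaluation: feed extracted features into the trained model."""
    w, b = model
    return w[0] * q_avg + w[1] * mv_avg + b
```

After training on, say, a low-quantization/low-motion sample scored 80 and a high-quantization/high-motion sample scored 30, the model reproduces the ordering of the subjective scores.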
In the embodiments of the present invention, the choice of feature parameters for neural network training is based mainly on the following consideration: in highly compressed digital video, blocking is the dominant impairment, because the human eye is very sensitive to blocking artifacts and moving images produce them easily.
In the image domain, blocking artifacts arise mainly from the quantization error introduced by block-wise quantization. Their appearance varies with image content, so it is important to distinguish the different types of blocking artifacts and treat each type with an appropriate method.
(1) Staircase noise: appears at strong edges in the image. At low bit rates, many high-order DCT coefficients are quantized to zero, so the high-frequency components associated with strong edges cannot be fully represented in the transform domain. Because each block is processed separately, the continuity of strong edges crossing block boundaries cannot be guaranteed, producing jagged noise along image edges; this is called "staircase noise".
(2) Grid noise: appears mostly in flat regions of the image. In the transform domain, the DC component represents the average luminance of the block and contains most of its energy, so luminance varies little in flat regions. If, however, the luminance in a flat region increases or decreases gradually, the DC component may cross the decision threshold of an adjacent quantization level, causing abrupt luminance changes at block boundaries in the reconstructed image, which appear as patch-like contour effects in flat regions; this is called "grid noise".
Targeting the two characteristics of high compression and high motion frequency, the embodiments of the present invention select the average quantization factor and the P-frame average motion vector as the feature parameters of a video sequence; both reflect digital video compression quality well.
(1) Average quantization factor
The quantization strategy of digital video compression is a mature quantization technique that takes the characteristics of human vision into account. Quantization is performed in two steps: the coefficients are first processed with a visual quantization matrix, and then further processed with a quantization factor. First, the visual quantization matrix, shown in Fig. 2, is applied to the DCT coefficients; since human vision is insensitive to high-frequency data, larger values are chosen for the high-frequency positions of the matrix in order to remove visual redundancy.
Let x denote the coefficient to be quantized and y1 the result of the first quantization; the first quantization can be expressed as:
y1 = 32x / Q[i, j]    [1]
where Q[i, j] is the value in row i, column j of the visual quantization matrix. Then the quantization factor Kq (obtained by the bit-rate control algorithm) is used for a second quantization to control the output bit rate; the final quantized result y is given by Equation 2:
y = (y1 + sign(x) · (p · Kq / q)) / (2 · Kq)    [2]
where sign(x) takes the sign of x, and p and q are correction parameters. Combining Equations 1 and 2, the following equivalent transformation can be made:
y = 32x / (2 · Q[i, j] · Kq) = (2^(n+4) · x / (Q[i, j] · Kq)) >> n    [3]
where ">>" is the bitwise right-shift operator and n is the number of bits shifted. Thus, for a given quantization factor Kq, a new shifted quantization matrix can be constructed:
Q'[i, j] = 2^(n+4) / (Q[i, j] · Kq)    [4]
Since the visual quantization matrix is fixed, the quantization factor becomes the main factor affecting the high-frequency coefficients. As described above, impairments of compressed images such as blocking, blurring and mosquito noise all stem from the variable quantization step used during quantization, which causes the loss of high-frequency coefficients. Therefore, the quantization factors of all macroblock slices are extracted from the compressed digital video stream and averaged, as in Equation 5:
average_of_quantiser_scale = total_of_Q_S / Q_Snum    [5]
where total_of_Q_S = Σ quantiser_scale[i] is the sum of the quantization factors and Q_Snum is their total number. The resulting average_of_quantiser_scale is the average quantization factor. Generally speaking, the lower the average quantization factor of a video, the fewer high-frequency DCT coefficients are lost during quantization and, correspondingly, the better the compressed video quality.
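Equations 4 and 5 can be sketched as follows; the function names and the choice of shift width n are illustrative assumptions, not taken from the patent.

```python
def shifted_quant_matrix(Q, Kq, n=16):
    """Equation 4: for a fixed quantization factor Kq, build the shifted
    quantization matrix Q'[i][j] = 2**(n+4) / (Q[i][j] * Kq) from the
    visual quantization matrix Q (integer division stands in here)."""
    return [[(1 << (n + 4)) // (q * Kq) for q in row] for row in Q]

def average_quantiser_scale(quantiser_scales):
    """Equation 5: the sum of the quantiser_scale values extracted from
    all macroblock slices of the stream, divided by their count."""
    if not quantiser_scales:
        raise ValueError("no quantiser_scale values extracted")
    return sum(quantiser_scales) / len(quantiser_scales)
```

A lower `average_quantiser_scale` then indicates that fewer high-frequency DCT coefficients were lost in quantization.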
(2) P-frame average motion vector
The "motion vector" reflects the degree of motion of the current picture relative to the reference picture; block matching is the most commonly used method of motion estimation.
First, the "spatial activity matrix" Cmv of a given P-frame is defined as Cmv = {v(i, j)}, where, as in Equation 6:
v(i, j) = sqrt(vx(i,j)^2 + vy(i,j)^2)    [6]
where (vx(i,j), vy(i,j)) is the motion vector of the macroblock at position (i, j) in the frame. When a macroblock is intra-coded, v(i, j) = 0.
Next, the average motion vector magnitude of a P-frame with M × N macroblocks is defined as:
Cavg = (1 / (M × N)) · Σi Σj v(i, j)    [7]
This yields Cavg. A problem arises here, however: the average is computed over all macroblocks in the frame, so if the motion within the frame is local, the computation spreads the local motion over the entire frame. For example, if part of the shot moves violently while the background is static, the computation turns it into a slowly moving whole frame: a large local motion vector becomes a small global motion vector, which clearly contradicts actual perception. Moreover, the P-frame motion vector values would generally be small and undiscriminating, producing large errors. The embodiments of the present invention therefore make the following modification: only valid macroblocks are counted, i.e. only macroblocks that actually produce a motion vector, while macroblocks with a zero motion vector are excluded. This avoids the above problem and improves the accuracy of the computation.
Because motion estimation uses block matching, its smallest unit is the macroblock (16 × 16 pixels), so compression coding easily reduces the correlation between image macroblocks. In particular, for video sequences rich in high-frequency detail, blocking impairment appears easily when the image also contains fast motion. The P-frame average motion vector is therefore also an important parameter reflecting the degree of impairment of compressed video.
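The modified P-frame average motion vector (Equation 6 magnitudes, averaged over non-zero macroblocks only) can be sketched as:

```python
import math

def p_frame_average_mv(motion_vectors):
    """Average motion vector magnitude of a P-frame, counting only
    'valid' macroblocks: those whose motion vector is non-zero.
    Intra-coded or static macroblocks (v = 0) are excluded, so local
    motion is not diluted across the whole frame."""
    mags = [math.hypot(vx, vy)
            for vx, vy in motion_vectors
            if (vx, vy) != (0, 0)]
    if not mags:
        return 0.0  # no valid macroblocks in this frame
    return sum(mags) / len(mags)
```

For instance, with macroblock vectors (3, 4), (0, 0) and (6, 8), the zero vector is skipped and the result is (5 + 10) / 2 = 7.5, rather than the diluted 5.0 of Equation 7.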
Once the neural network has been trained, feature parameters are extracted from an encoded video sequence in the same way and fed into the input layer of the network; the objective evaluation result of that sequence is then obtained at the output node of the network.
Referring to Fig. 3, a schematic diagram of a neural-network-based video quality control process provided by an embodiment of the present invention, the process may include:
Step 301: extracting the feature parameters of a video sequence encoded by the video encoder according to the video quality monitoring period.
In a specific implementation, the length of the video quality monitoring period can be preset as needed. For example, when high video quality is required, the period can be set short, e.g. 1 minute; when the quality requirement is modest and the control operation should not consume too many resources, the period can be set longer. The feature parameters extracted here are the same as those extracted for neural network training, namely the average quantization factor and the P-frame average motion vector.
Step 302: inputting the extracted feature parameters of the video sequence into the neural network to obtain the video quality evaluation parameter as the output.
Step 303: determining whether the video quality evaluation parameter is below a threshold, and if so, instructing the video encoder to adjust the feature parameters of the video data to improve video quality.
In this embodiment, the value of the video quality evaluation parameter may be proportional to video quality, i.e. the larger the value, the higher the quality; or it may be inversely proportional, i.e. the smaller the value, the higher the quality.
Only the proportional case is used here as an example.
In a specific implementation, the output of the neural network is usually a video quality score, for example in the range 1–100 from low to high quality. For convenience of implementation, the score may be quantized into several video quality grades, with a corresponding control policy for each grade. Preferably, with thresholds set at 40, 60 and 80, the score range [1, 100] is divided into 4 grades: 0–40 is grade 1, extremely poor quality; 41–60 is grade 2, poor quality; 61–80 is grade 3, good quality; and 81–100 is grade 4, excellent quality. Because the human eye evaluates video quality non-uniformly, the grades are likewise non-uniform. These threshold values are merely a preferred example and do not limit the invention.
For the above 4 video quality grades, the following corresponding control policies may be adopted:
A. When the evaluation value is below 40, i.e. grade 1, an alarm may be raised to the video encoder, the average quantization factor and P-frame average motion vector of the current video are synchronized to the encoder, and the encoder is instructed to adjust the corresponding encoding parameters to improve quality; specifically, it may be instructed to lower the average quantization factor used in compression coding and to lower the P-frame average motion vector. In this case, the video encoder determines that the currently playing video has a serious quality problem; it may immediately pause the playback of the current digital video and adjust the corresponding encoding parameters as instructed to improve quality.
B. When the evaluation value is 41–60, i.e. grade 2, an alarm may be raised to the video encoder, the average quantization factor and P-frame average motion vector of the current video are synchronized to the encoder, and the encoder is instructed to adjust the corresponding encoding parameters; specifically, it may be instructed to lower the P-frame average motion vector. In this case, the video encoder determines that the currently playing video has a significant quality problem; playback need not be stopped, and the corresponding encoding parameters may be adjusted as instructed to improve quality.
C. When the evaluation value is 61–80, i.e. grade 3, the video is playing well and no alarm to the encoder is needed, but there is a risk of quality degradation, so the length of the video quality monitoring period may be shortened to monitor the quality grade of the video on the live network at a higher frequency.
D. When the evaluation value is 81–100, i.e. grade 4, playback is excellent, no alarm to the encoder is needed, and the length of the video quality monitoring period may be lengthened.
Of course, a single threshold of 60 may also be set; in that case, when the evaluation parameter is below 60, an alarm is raised to the encoder, the average quantization factor and P-frame average motion vector of the current video are synchronized to it, and it is instructed to lower the average quantization factor and/or the P-frame average motion vector. Alternatively, only the thresholds 40 and 60 may be set; when the evaluation parameter is below 40, or between 40 and 60, control proceeds as described above.
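The four grades and control policies A–D above can be sketched as a simple mapping; the returned action strings are paraphrases of the policies for illustration, and the threshold values follow the preferred example (40/60/80).

```python
def quality_control_action(score, t1=40, t2=60, t3=80):
    """Map a neural-network quality score (1-100) to its grade and
    the corresponding control policy (A-D in the text)."""
    if score <= t1:
        return (1, "alarm: lower avg quantization factor and "
                   "P-frame avg motion vector; playback may be paused")
    if score <= t2:
        return (2, "alarm: lower P-frame avg motion vector")
    if score <= t3:
        return (3, "no alarm: shorten the monitoring period")
    return (4, "no alarm: lengthen the monitoring period")
```

For example, a score of 35 falls in grade 1 and triggers the strongest policy, while 95 falls in grade 4 and relaxes monitoring.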
The above scheme of the embodiments of the present invention can be implemented on a mobile terminal device and applied to mobile TV services, monitoring mobile TV video quality and feeding the results back in the manner described above. Specifically, when a mobile terminal runs a mobile TV service, it may buffer the video sequence sent by the network side; on the one hand it decodes and plays the buffered video data in the conventional way, and on the other hand it extracts the feature values of the currently playing video sequence according to the video quality monitoring period and inputs them into the neural network, obtains the video quality evaluation result from the network, determines the video quality control policy from that result, and further feeds the policy back to the network side, so that the network-side video encoder adjusts its encoding parameters according to the terminal's feedback to guarantee video quality. The neural network on the mobile terminal is the video quality evaluation model; a trained network can be downloaded from the network side to the terminal to reduce the overhead of training on the terminal.
The above scheme of the embodiments can also be implemented on network-side equipment. Taking the mobile TV service as an example, the feature values of the video sequence are extracted according to the video quality monitoring period and input into the neural network; the video quality evaluation result is obtained from the network, the video quality control policy is determined from that result, and the policy is further fed back to the video encoder, so that the encoder adjusts its encoding parameters according to the feedback to guarantee video quality. Based on the same technical concept, an embodiment of the present invention further provides a video quality control apparatus, which may be implemented on a terminal device or on network-side equipment, or may be an independently deployed apparatus.
Referring to Fig. 4, a schematic structural diagram of the video quality control apparatus provided by an embodiment of the present invention: as shown, the apparatus may include a monitoring module 401, a quality evaluation module 402 and a control module 403, wherein:
the monitoring module 401 is configured to extract feature parameters of video data encoded by the video encoder according to the video quality monitoring period; specifically, the feature parameters include the average quantization factor and the P-frame average motion vector, with the meanings given above;
the quality evaluation module 402 is configured to input the extracted feature parameters of the video data into the neural network and obtain the video quality evaluation parameter of the video data, wherein the neural network outputs the video quality evaluation parameter of the video data according to the input feature parameters; and
the control module 403 is configured to determine whether the video quality evaluation parameter is below a threshold, and if so, instruct the video encoder to adjust the feature parameters of the video data to improve video quality.
Specifically, the thresholds include a first threshold (e.g. 40) and a second threshold (e.g. 60), the first threshold being lower than the second. Accordingly, the control module 403 is specifically configured to: when the video quality evaluation parameter is below the first threshold, synchronize the average quantization factor and P-frame average motion vector of the current video data to the video encoder, and instruct the encoder to lower the average quantization factor of the video and to lower the P-frame average motion vector; and when the video quality evaluation parameter is above the first threshold but below the second threshold, synchronize the P-frame average motion vector of the current video data to the digital video encoder, and instruct the encoder to lower the P-frame average motion vector.
Further, the thresholds may also include a third threshold (e.g. 80), higher than the second. Accordingly, the control module is further configured to: shorten the length of the video quality monitoring period when the evaluation parameter is above the second threshold but below the third threshold, and lengthen the length of the monitoring period when the evaluation parameter is above the third threshold.
Further, the apparatus may also include a neural network training module 404, configured to extract the feature parameters of each training video sequence and obtain the video quality evaluation parameter of the corresponding training video sequence; and to input the feature parameters of each training video sequence and the video quality evaluation parameter of the corresponding sequence into the neural network, and train the network with the goal that, when the feature parameters of a training video sequence are used as input, the expected output is the video quality evaluation parameter of that sequence.
In a video quality control apparatus provided by another embodiment of the present invention, the neural network training module 404 may be omitted; the trained neural network can be downloaded into the apparatus, and a corresponding interface module is provided in the apparatus for the download.
As can be seen from the above description, the embodiments of the present invention, first, evaluate video quality with a neural network, which improves evaluation efficiency and reduces the influence of subjective factors compared with subjective evaluation; second, in view of the characteristics of low-resolution video, perform neural network training and quality evaluation on the basis of the average quantization factor and P-frame average motion vector of the video data, making the embodiments suitable for digital video quality analysis and control on small screens, such as mobile TV and mobile video services; and third, cover digital video quality monitoring, feed the results back to the service front-end equipment in time, and finally adjust the encoding parameters, dynamically optimizing video quality and effectively guaranteeing the quality of video services.
Those skilled in the art will understand that the modules of the apparatus in the embodiments may be distributed in the apparatus as described in the embodiments, or may, with corresponding changes, reside in one or more apparatuses different from those of this embodiment. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
From the description of the above embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including a number of instructions for causing a terminal device (which may be a mobile phone, a personal computer, a server, a network device, etc.) to perform the methods described in the embodiments of the present invention.
The above are merely preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make further improvements and refinements without departing from the principle of the present invention, and such improvements and refinements shall also be regarded as falling within the protection scope of the present invention.

Claims

1. A digital video quality control method, comprising:
extracting feature parameters of video data encoded by a video encoder according to a video quality monitoring period; inputting the extracted feature parameters of the video data into a neural network to obtain a video quality evaluation parameter as the output; and
determining whether the video quality evaluation parameter satisfies a preset condition, and if so, instructing the video encoder to adjust the feature parameters of the video data to improve video quality.
2. The method according to claim 1, wherein a larger value of the video quality evaluation parameter indicates higher video quality; and
the determining whether the video quality evaluation parameter satisfies a preset condition comprises:
determining whether the video quality evaluation parameter is below a threshold, and if so, determining that the preset condition is satisfied.
3. The method according to claim 2, wherein the feature parameters comprise an average quantization factor and a P-frame average motion vector, wherein the average quantization factor is the average of the quantization factors of all macroblocks in the video stream, and the P-frame average motion vector is the average motion vector of the macroblocks in a P-frame whose motion vectors are not zero.
4. The method according to claim 3, wherein the threshold comprises a first threshold and a second threshold, the first threshold being lower than the second threshold; and
the instructing the video encoder to adjust the feature parameters of the video data to improve video quality specifically comprises: when the video quality evaluation parameter is below the first threshold, synchronizing the average quantization factor and the P-frame average motion vector of the current video data to the video encoder, and instructing the video encoder to lower the average quantization factor of the video and to lower the P-frame average motion vector; and
when the video quality evaluation parameter is above the first threshold but below the second threshold, synchronizing the P-frame average motion vector of the current video data to the digital video encoder, and instructing the video encoder to lower the P-frame average motion vector.
5. The method according to claim 4, wherein the threshold further comprises a third threshold, the third threshold being higher than the second threshold; and the method further comprises:
shortening the length of the video quality monitoring period when the video quality evaluation parameter is above the second threshold but below the third threshold; and lengthening the length of the video quality monitoring period when the video quality evaluation parameter is above the third threshold.
6. The method according to claim 2, wherein the neural network is trained in the following manner:
extracting the feature parameters of each training video sequence, and obtaining the video quality evaluation parameter of the corresponding training video sequence; and
inputting the feature parameters of each training video sequence and the video quality evaluation parameter of the corresponding training video sequence into the neural network, and training the neural network with the goal that, when the feature parameters of a training video sequence are used as the input, the expected output is the video quality evaluation parameter of that training video sequence.
7. A digital video quality control apparatus, comprising:
a monitoring module, configured to extract feature parameters of video data encoded by a video encoder according to a video quality monitoring period;
a quality evaluation module, configured to input the extracted feature parameters of the video data into a neural network and obtain a video quality evaluation parameter of the video data, wherein the neural network outputs the video quality evaluation parameter of the video data according to the input feature parameters; and
a control module, configured to determine whether the video quality evaluation parameter satisfies a preset condition, and if so, instruct the video encoder to adjust the feature parameters of the video data to improve video quality.
8. The apparatus according to claim 7, wherein a larger value of the video quality evaluation parameter indicates higher video quality; and
the control module is specifically configured to determine whether the video quality evaluation parameter is below a threshold, and if so, instruct the video encoder to adjust the feature parameters of the video data to improve video quality.
9. The apparatus according to claim 8, wherein the feature parameters comprise an average quantization factor and a P-frame average motion vector; the average quantization factor is the average of the quantization factors of all macroblocks in the video stream, and the P-frame average motion vector is the average motion vector of the macroblocks in a P-frame whose motion vectors are not zero; and the threshold comprises a first threshold and a second threshold, the first threshold being lower than the second threshold; and
the control module is specifically configured to: when the video quality evaluation parameter is below the first threshold, synchronize the average quantization factor and the P-frame average motion vector of the current video data to the video encoder, and instruct the video encoder to lower the average quantization factor of the video and to lower the P-frame average motion vector; and when the video quality evaluation parameter is above the first threshold but below the second threshold, synchronize the P-frame average motion vector of the current video data to the digital video encoder, and instruct the video encoder to lower the P-frame average motion vector.
10. The apparatus according to claim 9, wherein the threshold further comprises a third threshold, the third threshold being higher than the second threshold; and
the control module is further configured to shorten the length of the video quality monitoring period when the video quality evaluation parameter is above the second threshold but below the third threshold, and lengthen the length of the video quality monitoring period when the video quality evaluation parameter is above the third threshold.
11. The apparatus according to claim 8, further comprising:
a neural network training module, configured to extract the feature parameters of each training video sequence and obtain the video quality evaluation parameter of the corresponding training video sequence; input the feature parameters of each training video sequence and the video quality evaluation parameter of the corresponding training video sequence into the neural network; and train the neural network with the goal that, when the feature parameters of a training video sequence are used as the input, the expected output is the video quality evaluation parameter of that training video sequence.
PCT/CN2013/072580 2012-03-28 2013-03-14 Digital video quality control method and apparatus therefor WO2013143396A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210088123.6A CN103369349B (zh) 2012-03-28 Digital video quality control method and apparatus therefor
CN201210088123.6 2012-03-28

Publications (1)

Publication Number Publication Date
WO2013143396A1 true WO2013143396A1 (zh) 2013-10-03

Family

ID=49258195

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/072580 WO2013143396A1 (zh) 2012-03-28 2013-03-14 Digital video quality control method and apparatus therefor

Country Status (2)

Country Link
CN (1) CN103369349B (zh)
WO (1) WO2013143396A1 (zh)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103647963A (zh) * 2013-12-04 2014-03-19 北京邮电大学 Video quality evaluation method based on GoP scene complexity
GB201603144D0 (en) 2016-02-23 2016-04-06 Magic Pony Technology Ltd Training end-to-end video processes
CN107046646B (zh) * 2016-12-30 2020-05-22 上海寒武纪信息科技有限公司 Video encoding and decoding apparatus and method based on a deep auto-encoder
CN107295362B (zh) * 2017-08-10 2020-02-21 上海六界信息技术有限公司 Image-based live content screening method, apparatus, device and storage medium
CN107493509A (zh) * 2017-09-25 2017-12-19 中国联合网络通信集团有限公司 Video quality monitoring method and apparatus
CN109168049B (zh) * 2018-09-03 2021-03-23 广州虎牙信息科技有限公司 Grade evaluation method and apparatus for live programs, storage medium and server
CN111797273B (zh) * 2019-04-08 2024-04-23 百度时代网络技术(北京)有限公司 Method and apparatus for adjusting parameters
CN110121110B (zh) * 2019-05-07 2021-05-25 北京奇艺世纪科技有限公司 Video quality evaluation method, device, video processing device and medium
CN110278495B (zh) * 2019-06-25 2020-02-07 重庆紫光华山智安科技有限公司 MPQM-based video transmission network control method and apparatus
CN110971784B (zh) * 2019-11-14 2022-03-25 北京达佳互联信息技术有限公司 Video processing method and apparatus, electronic device and storage medium
CN111586413B (zh) * 2020-06-05 2022-07-15 广州繁星互娱信息科技有限公司 Video adjustment method and apparatus, computer device and storage medium
CN112351252B (zh) * 2020-10-27 2023-10-20 重庆中星微人工智能芯片技术有限公司 Surveillance video encoding and decoding apparatus
CN114827617B (zh) * 2022-06-27 2022-10-18 致讯科技(天津)有限公司 Perception-model-based video encoding and decoding method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101282481A (zh) * 2008-05-09 2008-10-08 中国传媒大学 Video quality evaluation method based on an artificial neural network
CN101715146A (zh) * 2008-10-08 2010-05-26 中国移动通信集团公司 Compressed video quality evaluation method and evaluation system
CN101808244A (zh) * 2010-03-24 2010-08-18 北京邮电大学 Video transmission control method and system
CN102137271A (zh) * 2010-11-04 2011-07-27 华为软件技术有限公司 Image quality evaluation method and apparatus

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2795147B2 (ja) * 1993-12-24 1998-09-10 日本電気株式会社 Image quality evaluation apparatus
JP2008533937A (ja) * 2005-03-25 2008-08-21 アルゴリス インコーポレイテッド Apparatus and method for objectively assessing the quality of DCT-coded video, with or without the original video sequence
CN101621351B (zh) * 2008-06-30 2013-09-11 华为技术有限公司 Method, apparatus and system for adjusting a multimedia encoding rate

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104134091A (zh) * 2014-07-25 2014-11-05 海信集团有限公司 一种神经网络训练方法
CN104134091B (zh) * 2014-07-25 2017-01-18 海信集团有限公司 一种神经网络训练方法
US10547858B2 (en) 2015-02-19 2020-01-28 Magic Pony Technology Limited Visual processing using temporal and spatial interpolation
US10499069B2 (en) 2015-02-19 2019-12-03 Magic Pony Technology Limited Enhancing visual data using and augmenting model libraries
US10516890B2 (en) 2015-02-19 2019-12-24 Magic Pony Technology Limited Accelerating machine optimisation processes
US10523955B2 (en) 2015-02-19 2019-12-31 Magic Pony Technology Limited Enhancement of visual data
US10904541B2 (en) 2015-02-19 2021-01-26 Magic Pony Technology Limited Offline training of hierarchical algorithms
US10582205B2 (en) 2015-02-19 2020-03-03 Magic Pony Technology Limited Enhancing visual data using strided convolutions
US11528492B2 (en) 2015-02-19 2022-12-13 Twitter, Inc. Machine learning for visual processing
US10623756B2 (en) 2015-02-19 2020-04-14 Magic Pony Technology Limited Interpolating visual data
US10630996B2 (en) 2015-02-19 2020-04-21 Magic Pony Technology Limited Visual processing using temporal and spatial interpolation
US10887613B2 (en) 2015-02-19 2021-01-05 Magic Pony Technology Limited Visual processing using sub-pixel convolutions
US10666962B2 (en) 2015-03-31 2020-05-26 Magic Pony Technology Limited Training end-to-end video processes
CN105578185B (zh) * 2015-12-14 2018-08-21 华中科技大学 一种网络视频流的无参考图像质量在线估计方法
US10692185B2 (en) 2016-03-18 2020-06-23 Magic Pony Technology Limited Generative methods of super resolution
US10685264B2 (en) 2016-04-12 2020-06-16 Magic Pony Technology Limited Visual data processing using energy networks
US10602163B2 (en) 2016-05-06 2020-03-24 Magic Pony Technology Limited Encoder pre-analyser
WO2022167828A1 (en) * 2021-02-08 2022-08-11 Tencent Cloud Europe (France) Sas Methods and systems for updating an objective quality score of a video flow and for processing a video flow

Also Published As

Publication number Publication date
CN103369349B (zh) 2016-04-27
CN103369349A (zh) 2013-10-23

Similar Documents

Publication Publication Date Title
WO2013143396A1 (zh) Digital video quality control method and apparatus therefor
US9282330B1 (en) Method and apparatus for data compression using content-based features
Li et al. A convolutional neural network-based approach to rate control in HEVC intra coding
CN103124347B (zh) 利用视觉感知特性指导多视点视频编码量化过程的方法
US20220038747A1 (en) Video processing apparatus and processing method of video stream
US20130293725A1 (en) No-Reference Video/Image Quality Measurement with Compressed Domain Features
Romaniak et al. Perceptual quality assessment for H. 264/AVC compression
CN107241607B (zh) 一种基于多域jnd模型的视觉感知编码方法
CN104378636B (zh) 一种视频图像编码方法及装置
CN105430383A (zh) 一种视频流媒体业务的体验质量评估方法
WO2017084256A1 (zh) 一种视频质量评价方法及装置
Yang et al. Optimized-SSIM based quantization in optical remote sensing image compression
KR20130119328A (ko) 비디오 신호의 인코딩 또는 압축 중에 비디오 신호의 품질을 평가하는 방법 및 장치
CN110740316A (zh) 数据编码方法及装置
Xu et al. Consistent visual quality control in video coding
KR20040060980A (ko) 압축되지 않은 디지털 비디오로부터 인트라-코딩된화상들을 검출하고 인트라 dct 정확도 및매크로블록-레벨 코딩 파라메터들을 추출하는 방법 및시스템
CN116055726A (zh) 一种低延迟分层视频编码方法、计算机设备及介质
CN103581688A (zh) 一种视频图像的编码和解码方法及装置
CN110493597B (zh) 一种高效感知视频编码优化方法
WO2013086654A1 (en) Method and apparatus for video quality measurement
Kumar et al. Effcient video compression and improving quality of video in communication for computer endcoding applications
Singam Coding estimation based on rate distortion control of h. 264 encoded videos for low latency applications
Wang et al. Quality assessment for MPEG-2 video streams using a neural network model
US11297353B2 (en) No-reference banding artefact predictor
Jung Comparison of video quality assessment methods

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13769928

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13769928

Country of ref document: EP

Kind code of ref document: A1