WO2020098751A1 - Video data encoding processing method and computer storage medium - Google Patents

Video data encoding processing method and computer storage medium

Info

Publication number
WO2020098751A1
WO2020098751A1 PCT/CN2019/118526
Authority
WO
WIPO (PCT)
Prior art keywords
coding unit
encoding
coding
information
sensing information
Prior art date
Application number
PCT/CN2019/118526
Other languages
French (fr)
Chinese (zh)
Inventor
徐科
宋剑军
宋利
王浩
Original Assignee
深圳市中兴微电子技术有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司
Publication of WO2020098751A1 publication Critical patent/WO2020098751A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output
    • H04N19/147Data rate or code amount at the encoder output according to rate distortion criteria
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock

Definitions

  • Embodiments of the present application relate to, but are not limited to, the field of signal processing, and provide a video data encoding processing method and a computer storage medium.
  • HEVC High Efficiency Video Coding
  • Video coding standards mainly exploit the statistical correlation of video signals, using coding techniques such as intra-frame and inter-frame prediction to eliminate redundant information in the spatial and temporal domains, but these techniques do not consider the subjective visual characteristics of the human eye.
  • many video encoding modules use Rate Distortion Optimization (RDO) technology to select the optimal encoding mode.
  • RDO Rate Distortion Optimization
  • In rate-distortion optimization, the distortion function should characterize the video signal well and be easy to compute; because the current understanding of the Human Visual System (HVS) is limited, it is difficult to quantify visual quality accurately.
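The rate-distortion optimization mentioned above selects, among candidate encoding modes, the one minimizing the cost J = D + λ·R. A minimal sketch of that selection rule follows; the mode names, distortion values, and bit counts are illustrative, not from the patent:

```python
def rd_cost(distortion, rate, lam):
    """Rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate

def best_mode(candidates, lam):
    """Pick the candidate (mode, D, R) with the minimum RD cost."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))

# Hypothetical candidates: (mode name, distortion, bits)
modes = [("intra", 120.0, 40), ("inter", 90.0, 55), ("skip", 200.0, 2)]
print(best_mode(modes, lam=1.0)[0])  # a small lambda favours low distortion
```

A larger λ shifts the decision toward cheaper (fewer-bit) modes, which is exactly the lever the per-unit adjustment coefficient described later operates on.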
  • mean square error MSE
  • SSE Sum of Square Error
  • MSE or SSE does not consider any human visual characteristics, making the subjective visual quality effect of encoded video not ideal.
  • the human visual system has a large amount of perceptual redundancy.
  • VQA video quality evaluation
  • SSIM Structured Similarity
  • JND Just Noticeable Difference
  • the bit rate consumed by encoding is relatively high, so how to effectively reduce the coding rate is an urgent problem to be solved.
  • the present application provides a video data encoding processing method and a computer storage medium, which can effectively reduce the bit rate of encoding consumption.
  • the present application provides a video data encoding processing method, including:
  • the temporal and spatial domain joint sensing information k pi of each coding unit is calculated;
  • each encoding unit in the object to be encoded is encoded according to the adjustment coefficient ⁇ i and the Lagrange multiplier.
  • the above spatial sensing information k si of each coding unit is determined according to the gradient amplitude k gi and / or the variance value k ⁇ i of each coding unit.
  • Calculating the gradient amplitude k gi and/or the variance value k σi of each coding unit requires the pixel values of that unit.
  • For a YUV sequence, the pixel value may be any one of the luminance component Y, the chrominance component U, and the chrominance component V, or a weighted average of the three.
  • the above spatial domain sensing information k si of each coding unit is obtained by the following calculation expression:
  • k si = (1-α)·k gi + α·k σi ;
  • α is a constant weighting coefficient, and its value range is [0,1].
  • the gradient amplitude k gi of each coding unit is obtained as follows, including:
  • the normalized gradient amplitude value k gi of the i-th coding unit is calculated.
  • the normalized gradient amplitude k gi of the i-th coding unit is obtained by the following calculation expression:
  • G (i) represents the average gradient amplitude of the i-th coding unit
  • N block represents the total number of coding units in the object to be coded, where j is an integer greater than or equal to 1.
  • the variance value k ⁇ i of each coding unit is obtained in the following manner, including:
  • the normalized variance value k ⁇ i of the i-th coding unit is calculated.
  • the normalized variance value k ⁇ i of the i-th coding unit is obtained by the following calculation expression:
  • N block represents the total number of encoding units in the object to be encoded
  • c 2 is a constant coefficient, where j is an integer greater than or equal to 1.
  • The time domain perception information k ti of each coding unit is calculated from the motion vector and the motion compensation in the coding unit, where the motion compensation is the vector distance between the object to be coded and the preset reference frame.
  • Calculating the time domain perception information k ti of each coding unit requires the pixel values of that unit; for a YUV sequence, the pixel value may be any one of the luminance component Y, the chrominance component U, and the chrominance component V, or a weighted average of the three.
  • the time domain perception information k ti of each coding unit is obtained by the following calculation expression:
  • (v x , v y ) represents the motion vector of the coding block in the coding unit
  • d(o, p) represents the distance between the frame corresponding to the current coding unit and the frame of the reference unit corresponding to the current coding unit;
  • for different coding units in the same frame, the frames of the corresponding reference units may be the same or different;
  • o and p represent the coordinate information of the i-th coding unit and are real numbers.
  • the joint spatio-temporal sensing information k pi of each coding unit is obtained by the following calculation expression:
  • c is a constant, which has the same order of magnitude as k ti
  • a s is an adjustment parameter of the spatial domain sensing information k si .
  • The adjustment parameter a s of the spatial domain sensing information k si is obtained by calculating the mean square error (MSE) of the spatial domain sensing information k si ; or by calculating its sum of absolute differences (SAD); or by calculating its sum of absolute transformed differences (SATD) based on the Hadamard transform.
  • the adjustment coefficient ⁇ i corresponding to each coding unit described above is obtained by calculating an expression as follows:
  • N block represents the total number of coding units in the object to be encoded
  • j is an integer greater than or equal to 1.
  • the value of the adjustment coefficient ⁇ i corresponding to each coding unit is calculated as follows:
  • a and b are constant parameters, with the same order of magnitude as k pi .
  • Encoding each encoding unit in the object to be encoded according to the adjustment coefficient η i and the Lagrange multiplier includes:
  • the present application provides a computer storage medium for storing a computer program, where the above computer program is executed by a processor to implement any of the above methods.
  • This application obtains the spatial domain sensing information k si and the time domain sensing information k ti of each coding unit in the object to be coded before encoding, calculates the spatial and temporal domain joint sensing information k pi of each coding unit from them, uses the joint sensing information to calculate the adjustment coefficient η i of the Lagrangian multiplier corresponding to each coding unit, and finally encodes each coding unit in the object according to the adjustment coefficient η i and the Lagrange multiplier. This adaptively and dynamically adjusts the Lagrange multiplier in the rate-distortion optimization process, effectively reducing the bit rate consumed by coding while keeping the subjective quality basically unchanged.
  • FIG. 1 is a flowchart of a video data encoding processing method provided by this application.
  • FIG. 2 is a flowchart of a rate-distortion coding optimization method based on the visual masking effect in the space-time domain provided by the present application.
  • FIG. 1 is a flowchart of a video data encoding processing method provided by this application. The method shown in Figure 1 includes:
  • Step 101 Before performing encoding on the object to be coded, obtain spatial domain sensing information k si and time domain sensing information k ti of each coding unit in the object to be encoded, where i is an integer greater than or equal to 1;
  • the object to be encoded may be a certain video frame or a certain area in the video frame; the object to be encoded includes one or at least two encoding units, and the spatial domain sensing information k si and time of each encoding unit are calculated Domain awareness information k ti ;
  • the spatial domain sensing information k si of each coding unit is determined according to the gradient amplitude k gi and / or the variance value k ⁇ i of each coding unit;
  • Step 102 According to the spatial domain sensing information k si of each coding unit and the temporal domain sensing information k ti of each coding unit, calculate the temporal and spatial domain joint sensing information k pi of each coding unit;
  • the spatio-temporal joint sensing information k pi of each coding unit is obtained by the following calculation expression:
  • c is a constant, which has the same order of magnitude as k ti
  • a s is an adjustment parameter of the spatial domain sensing information k si .
  • Step 103 Calculate the adjustment coefficient ⁇ i of the Lagrangian multiplier corresponding to each coding unit using the joint sensing information of each coding unit in time and space domains;
  • the adjustment coefficient ⁇ i corresponding to each coding unit is obtained by calculating the expression as follows:
  • N block represents the total number of coding units in the object to be coded
  • j is an integer greater than or equal to 1.
  • Step 104 During the encoding operation of the object to be encoded, encode each encoding unit in the object to be encoded according to the adjustment coefficient ⁇ i and the Lagrange multiplier.
  • the Lagrange multiplier of the i-th coding unit is obtained using the following calculation expression;
  • the i-th encoding unit is then encoded using the Lagrange multiplier of the i-th coding unit.
  • In this application, the spatial domain sensing information k si and the time domain sensing information k ti of each coding unit in the object to be encoded are obtained; the spatio-temporal joint perceptual information k pi of each coding unit is then calculated from them and used to compute the adjustment coefficient of the Lagrangian multiplier corresponding to each coding unit.
  • Adjusting the Lagrangian multiplier in this way effectively reduces the bit rate consumed by coding while keeping the subjective quality basically unchanged.
  • The inventor found that, for encoding methods based on objective quality assessment indicators, because there is a large amount of time-domain redundant information between video frames and SSIM considers only spatial structural characteristics, video quality evaluation performance is not as effective as image quality assessment. Encoding methods based on visual distortion sensitivity, in turn, do not consider the content and visual perception characteristics of the temporal and spatial domains, and also suffer from an excessively high encoding bit rate.
  • the present application proposes to calculate the Lagrangian multiplier adjustment coefficient of each coding unit through joint sensing information in the space-time domain, and adaptively adjust the Lagrangian multiplier during the encoding process. Then the adjusted Lagrange multiplier is used for encoding.
  • Calculating the gradient amplitude k gi and/or the variance value k σi of each coding unit requires the pixel values of that unit.
  • The pixel value may be the luminance component Y, the chrominance component U, or the chrominance component V, or a weighted average of the three.
  • The pixel value information can be one of the three YUV components, a weighted average of two of them, or a weighted average of all three.
  • the spatial domain sensing information k si of each coding unit is obtained by the following calculation expression:
  • k si = (1-α)·k gi + α·k σi ;
  • α is a constant weighting coefficient, and its value range is [0,1].
  • The gradient amplitude k gi and the variance value k σi of the coding unit can be used together to determine the spatial sensing information of the coding unit more accurately; when both values are used, the spatial domain perception information is calculated by assigning a different weight to each value.
  • the gradient amplitude k gi of each coding unit is obtained as follows, including:
  • the average gradient amplitude of the i-th coding unit is calculated
  • the normalized gradient amplitude k gi of the i-th coding unit is calculated.
  • the average gradient amplitude of the coding unit can be obtained by the following calculation expression, including:
  • G h and G v respectively represent the gradient of each pixel in the horizontal direction and the vertical direction
  • N pixel represents the number of pixels of the current coding unit
  • r and s are the coordinate positions of the pixels, where r and s are real numbers.
  • the normalized gradient amplitude k gi of the i-th coding unit is obtained by calculating the expression as follows:
  • G (i) represents the average gradient amplitude of the i-th coding unit
  • N block represents the total number of coding units in the object to be coded, where j is an integer greater than or equal to 1.
  • the variance value k ⁇ i of each coding unit is obtained as follows, including:
  • the normalized variance value k ⁇ i of the i-th coding unit is calculated.
  • the normalized variance value k ⁇ i of the i-th coding unit is obtained by the following calculation expression:
  • N block represents the total number of encoding units in the object to be encoded
  • c 2 is a constant coefficient, where j is an integer greater than or equal to 1.
  • The time domain perception information k ti of each coding unit is calculated from the motion vector in the coding unit, where the motion vector is obtained by a motion search that minimizes the residual variance.
  • Calculating the time domain perception information k ti of each coding unit requires the pixel values of that unit.
  • The pixel value may be the luminance component Y, the chrominance component U, or the chrominance component V, taking one of them for the calculation or a weighted average of the three.
  • The pixel value information can be one of the three YUV components, a weighted average of two of them, or a weighted average of all three.
  • the time domain perception information k ti of each coding unit is obtained by the following calculation expression:
  • (v x , v y ) represents the motion vector of the coding block in the coding unit
  • d(o, p) represents the distance between the frame corresponding to the current coding unit and the frame of the reference unit of the current coding unit;
  • for different coding units in the same frame, the frames of the corresponding reference units may be the same or different
  • o, p represent the coordinate information of the i-th coding unit
  • o and p are real numbers.
  • The adjustment parameter a s of the spatial domain sensing information k si is obtained by calculating the mean square error (MSE) of the spatial domain sensing information k si ; or by calculating its sum of absolute differences (SAD); or by calculating its sum of absolute transformed differences (SATD) based on the Hadamard transform.
  • the adjustment coefficient ⁇ i corresponding to each coding unit is obtained by calculating the expression as follows:
  • N block represents the total number of coding units in the object to be coded
  • j is an integer greater than or equal to 1.
  • The range of the adjustment coefficient η i is limited, effectively preventing values of η i that are too large or too small from producing extreme outliers in the Lagrange multipliers, thereby ensuring that the calculation proceeds normally.
  • a and b are constant parameters, with the same order of magnitude as k pi .
  • The spatio-temporal joint perception information k pi also takes into account video content characteristics such as spatial texture complexity and temporal motion intensity. For areas with complex textures and intense motion, the spatial sensing information k si and the temporal sensing information k ti will be relatively large, which makes the spatio-temporal joint sensing information k pi smaller. By linearly transforming k pi , this variation can be compensated so that k pi is better suited for use in rate-distortion optimization.
  • This application mainly takes human visual characteristics, such as the temporal and spatial visual masking effects, as a starting point to optimize visual perception coding.
  • Due to the spatial masking effect, distortion in complex texture areas is hardly noticed by the human eye compared with flat areas; that is, the human eye is not sensitive to distortion in complex texture areas. These areas can therefore accommodate or hide more visual distortion than flat areas.
  • Due to the time-domain masking effect, details and distortions of objects in areas with intense motion are harder for the human eye to notice than those in static or slowly moving areas. As movement speeds up, the perceived clarity of the object decreases further; the human eye is therefore not sensitive to distortion in areas of intense motion.
  • In implementation, the spatial and temporal perception factors of each coding unit are first calculated, and the Lagrangian multiplier used during rate-distortion optimization in encoding is then adaptively adjusted according to the synthesized spatio-temporal joint perception factor.
  • FIG. 2 is a flowchart of a rate-distortion coding optimization method based on the visual masking effect in the space-time domain provided by the present application.
  • the method shown in Figure 2 includes:
  • Step 201 Before encoding a video frame, calculate the gradient amplitude values of all encoding units in the object to be encoded, and normalize each encoding unit's gradient value by the average gradient of all encoding units of the current frame to obtain the normalized gradient amplitude k gi of each coding unit.
  • the gradient information in the horizontal direction and the vertical direction can be calculated using the Sobel gradient operator.
  • the average gradient amplitude of the coding unit can be obtained by the following calculation expression, including:
  • G h and G v respectively represent the gradient of each pixel in the horizontal direction and the vertical direction
  • N pixel represents the number of pixels of the current coding unit
  • r and s are the coordinate positions of the pixels, where r and s are real numbers.
  • the normalized gradient amplitude k gi of each coding unit is calculated based on the average gradient amplitude of the frame image, as shown in equation (2).
  • G (i) represents the gradient amplitude of the i-th coding unit calculated according to formula (1)
  • N block represents the number of coding units in the object to be coded.
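Step 201 can be sketched as follows. The Sobel kernels are standard; equation (2) itself is not reproduced in the text, so dividing each unit's average gradient by the frame-wide mean is an assumption consistent with "normalize the gradient values of each encoding unit according to the gradient average values of all encoding units of the current frame":

```python
def sobel_gradients(img):
    """Per-pixel Sobel gradient magnitudes |Gh| + |Gv| for an image
    given as a list of rows; border pixels are skipped for brevity."""
    h, w = len(img), len(img[0])
    mags = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gh = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gv = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            mags.append(abs(gh) + abs(gv))
    return mags

def avg_gradient(mags):
    """Average gradient amplitude G(i) of one coding unit."""
    return sum(mags) / len(mags)

def normalize_gradients(unit_gradients):
    """k_gi = G(i) / mean_j G(j): each unit's average gradient over
    the frame-wide average (assumed form of equation (2))."""
    mean = sum(unit_gradients) / len(unit_gradients)
    return [g / mean for g in unit_gradients]
```

A unit containing a strong vertical edge yields a large average gradient, so its normalized amplitude k_gi exceeds 1, marking it as a textured region.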
  • Step 202 Calculate the variance of all coding units in the frame before encoding a frame, and normalize the variance of each coding unit according to the average of the variances of all coding units in the current frame.
  • N block represents the number of coding units in the current frame
  • c 2 is a constant coefficient of the SSIM model used to ensure numerical stability.
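Step 202 can be sketched in the same way. Equation (3) is not reproduced in the text, so the exact placement of the SSIM stability constant c2 below is an assumption; it only prevents division by zero on flat frames:

```python
def block_variance(pixels):
    """Population variance of one coding unit's pixel values."""
    m = sum(pixels) / len(pixels)
    return sum((p - m) ** 2 for p in pixels) / len(pixels)

def normalize_variances(variances, c2=1e-4):
    """k_sigma_i = (var_i + c2) / (mean_j var_j + c2): each unit's
    variance over the frame-wide average, stabilized by c2
    (assumed form of equation (3))."""
    mean = sum(variances) / len(variances)
    return [(v + c2) / (mean + c2) for v in variances]
```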
  • Step 203 According to the results of steps 201 and 202, the gradient value and the variance value of each coding unit are weighted to obtain the spatial domain perception factor of each coding unit.
  • the spatial domain perception factor k si can be calculated by weighting k gi and k ⁇ i , as shown in equation (4).
  • α is a constant weighting coefficient; its value range is [0,1].
  • k si = (1-α)·k gi + α·k σi (4)
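Equation (4) is a direct weighted sum; a minimal sketch (the value of alpha is an illustrative choice, the patent only states it lies in [0,1]):

```python
def spatial_factor(k_g, k_sigma, alpha=0.5):
    """Spatial perception factor per equation (4):
    k_si = (1 - alpha) * k_gi + alpha * k_sigma_i, alpha in [0,1]."""
    assert 0.0 <= alpha <= 1.0
    return (1.0 - alpha) * k_g + alpha * k_sigma
```

With alpha = 0 the factor reduces to the normalized gradient alone; with alpha = 1 it reduces to the normalized variance alone.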
  • Step 204 Before encoding a video frame, use the previous frame as a reference frame to perform motion estimation, calculate the motion vectors and residuals of all coding units in the current frame, and compute the motion vector intensity of all coding units in the current frame;
  • the motion vector intensity of each coding unit is then normalized and used as the time domain perception factor k ti .
  • Step 205 First, motion vector estimation is performed on all 16x16 size coding blocks of the current coding unit, and then the motion intensity of the current coding unit is synthesized according to formula (5).
  • (v x , v y ) represents the motion vector of the coding block in the current coding unit
  • d(i, j) represents the distance from the current frame to its reference frame, which can be measured as the difference in POC (picture order count) between the current frame and its reference frame.
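Steps 204-205 can be sketched as follows. Equation (5) is not reproduced in the text, so averaging the 16x16 blocks' motion-vector magnitudes and scaling by the POC distance d is an assumption consistent with the description:

```python
import math

def motion_intensity(motion_vectors, poc_distance):
    """Motion intensity of one coding unit: the average magnitude of
    its 16x16 blocks' motion vectors (vx, vy), scaled by the POC
    distance d to the reference frame (assumed form of equation (5))."""
    mags = [math.hypot(vx, vy) for vx, vy in motion_vectors]
    return sum(mags) / (len(mags) * poc_distance)

def normalize_intensities(intensities):
    """k_ti: each unit's motion intensity over the frame-wide average."""
    mean = sum(intensities) / len(intensities)
    return [v / mean for v in intensities]
```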
  • Step 206 Based on the quality prediction model MOSp, the spatial and temporal perception factors obtained in steps 203 and 205 are synthesized into a joint temporal and spatial perception factor.
  • MOSp is a common video quality prediction model as shown in (6), where k is a preset coefficient.
  • the joint temporal and spatial domain perception factor k pi of each coding unit is given by equation (7).
  • c is a constant and has the same order of magnitude as k t .
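Equations (6)-(7) appear only as images in the source and are not reproduced here, so the MOSp-style inverse form below is purely an assumption. It matches the behaviour stated later in the text: larger k_si and k_ti (complex texture, intense motion) yield a smaller joint factor k_pi:

```python
def joint_factor(k_s, k_t, a_s=1.0, c=1.0):
    """Hypothetical spatio-temporal joint perception factor:
    k_pi = c / (c + a_s * k_si * k_ti).  The functional form, a_s,
    and c are assumptions; the patent only requires that k_pi
    decrease as texture complexity and motion intensity grow."""
    return c / (c + a_s * k_s * k_t)
```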
  • Step 207 Calculate the Lagrangian multiplier adjustment coefficient of each coding unit, and perform adaptive dynamic adjustment on the Lagrange multiplier during the encoding process.
  • the spatiotemporal joint perception factor k pi improved based on MOSp takes into account the video content characteristics such as spatial texture complexity and temporal motion intensity. For regions with complex textures and intense movements, the spatial domain perception factor k si and the temporal domain perception factor k ti will be relatively large, resulting in a small spatio-temporal joint perception factor k pi .
  • For rate-distortion optimization, a new distortion index D p related to MSE is first defined, as shown in equation (8).
  • a and b are constant parameters, and have the same order of magnitude as k p .
  • According to equation (8), under the same distortion conditions, image areas with complex texture and intense motion have a larger factor and can therefore hide more visual distortion, which is consistent with the visual masking effect in the spatial and temporal domains.
  • The Lagrange multiplier corresponding to the new distortion model D p can be derived, as shown in equation (13), where N block represents the number of coding units and η i is an adaptive adjustment coefficient.
  • the Lagrange multiplier of the i-th coding unit is adaptively adjusted according to equation (13) during actual coding.
  • the range of the adaptive coefficient ⁇ i is limited, as shown in equation (14).
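Step 207 can be sketched as follows. Equations (13)-(14) are not reproduced in the text, so normalizing each unit's joint factor by the frame-wide average and the clamp bounds [0.5, 2.0] are assumptions; the clamp implements the stated requirement that η_i be range-limited to avoid extreme outlier multipliers:

```python
def adjustment_coefficients(k_p, lo=0.5, hi=2.0):
    """eta_i: each unit's joint perception factor over the frame-wide
    average (assumed form of the coefficient in equation (13)),
    clamped to [lo, hi] per equation (14).  Bounds are illustrative."""
    mean = sum(k_p) / len(k_p)
    return [min(max(k / mean, lo), hi) for k in k_p]

def adjusted_lambdas(base_lambda, k_p):
    """lambda_i = eta_i * lambda: the per-unit Lagrange multipliers
    actually used when encoding each coding unit."""
    return [eta * base_lambda for eta in adjustment_coefficients(k_p)]
```

Units with a small joint factor (complex texture, intense motion) receive η_i below 1 or above 1 depending on the normalization direction; the clamp guarantees no unit's multiplier drifts more than a bounded factor from the frame-level λ.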
  • The method provided in this application example comprehensively considers content characteristics such as spatial texture complexity and temporal motion intensity, and synthesizes a spatio-temporal joint perception factor based on the MOSp (perceptual Mean Opinion Score) subjective quality prediction model to adaptively and dynamically adjust the Lagrange multiplier in the rate-distortion optimization process, thereby effectively reducing the bit rate consumed by coding while keeping the subjective quality basically unchanged.
  • MOSp perceptual Mean Opinion Score
  • the coding rate can be effectively reduced while keeping the subjective quality of the video sequence basically unchanged.
  • Compared with the HEVC standard reference model HM, about 10% of the code rate can be saved: the average code rate reduction is 10.32%, while the average SSIM reduction is only 0.00253.
  • the present application also provides a computer storage medium for storing a computer program, where the computer program is executed by a processor to implement any of the above methods.
  • Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data.
  • Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • A communication medium typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Disclosed are a video data encoding processing method and a computer storage medium. The method comprises: before performing encoding on an object to be encoded, acquiring spatial domain perception information ksi and time domain perception information kti of each encoding unit in the object, wherein i is an integer greater than or equal to 1; calculating time and spatial domain joint perception information kpi from the spatial domain perception information ksi and the time domain perception information kti of each encoding unit; using the time and spatial domain joint perception information of each encoding unit to calculate an adjustment coefficient ηi of the Lagrange multiplier corresponding to each encoding unit; and, in the process of performing an encoding operation on the object, encoding each encoding unit in the object according to the adjustment coefficient ηi and the Lagrange multiplier.

Description

Video data encoding processing method and computer storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on November 14, 2018, with application number 201811353976.1 and invention title "Video data encoding processing method and computer storage medium", the entire contents of which are incorporated herein by reference.
Technical field
Embodiments of the present application relate to, but are not limited to, the field of signal processing, and provide a video data encoding processing method and a computer storage medium.
Background
HEVC (High Efficiency Video Coding) and other video coding standards mainly exploit the statistical correlation of video signals, using coding techniques such as intra-frame and inter-frame prediction to eliminate redundant information in the spatial and temporal domains, but these techniques do not consider the subjective visual characteristics of the human eye. In addition, in order to obtain high encoding quality for the reconstructed video at a given code rate, many video encoding modules use Rate Distortion Optimization (RDO) technology to select the optimal encoding mode. The rate-distortion optimization process requires a distortion function that characterizes the video signal well and is easy to compute. Because the current understanding of the Human Visual System (HVS) is limited, it is difficult to quantify visual quality accurately. Therefore, in rate-distortion optimization calculations, the Mean Square Error (MSE) or the Sum of Square Error (SSE) is often used as the distortion metric. As is well known, MSE and SSE do not consider any characteristics of human vision, so the subjective visual quality of the encoded video is not ideal.
At the same time, as the final recipient of video image information, the human visual system exhibits a large amount of perceptual redundancy. Therefore, with research on Video Quality Assessment (VQA) indicators with subjective perception characteristics and on the visual characteristics of the human eye, these perception-based quality metrics and visual characteristics can be combined and applied to video coding optimization: coding optimization schemes based on visual perception are designed to eliminate visual perceptual redundancy and improve the subjective visual quality of the decoded video.
在相关技术中,已经提出了一些通过研究人眼视觉特性来提高编码性能的方法。一类是提出了可以反映视觉感知失真的客观质量评估指标。例如比较常用的结构相似度指标(Structured Similarity,SSIM),考虑了图像的结构信息以及亮度和对比度掩蔽等因素,因其具有较好的主观一致性,被广泛用作视频编码的质量评价指标。因而,提出了基于SSIM的率失真优化方法用于改进帧间编码中的模式决策过程,或建立SSIM相关的失真模型用于调整率失真方程的失真及拉格朗日乘子。另一类是利用视觉失真敏感度,如最小可觉差(Just Noticeable Difference,JND)等模型,来提高感知编码性能。提出了将JND用于自适应运动估计以减少像素域残差中的感知冗余,或是根据JND自适应调整DCT频域变换系数的量化过程。In the related art, some methods have been proposed to improve coding performance by studying human visual characteristics. One class proposes objective quality assessment metrics that reflect perceived visual distortion. For example, the commonly used Structural Similarity (SSIM) index considers the structural information of the image as well as factors such as luminance and contrast masking; because of its good subjective consistency, it is widely used as a quality metric for video coding. Accordingly, SSIM-based rate-distortion optimization methods have been proposed to improve the mode-decision process in inter-frame coding, or SSIM-related distortion models have been built to adjust the distortion term and the Lagrange multiplier of the rate-distortion equation. The other class uses visual distortion sensitivity models, such as the Just Noticeable Difference (JND), to improve perceptual coding performance. It has been proposed to apply JND to adaptive motion estimation to reduce perceptual redundancy in pixel-domain residuals, or to adaptively adjust the quantization of DCT frequency-domain transform coefficients according to JND.
鉴于上述方法编码消耗的码率较高,如何有效降低编码的码率是亟待解决的问题。In view of the relatively high bit rate consumed by the above methods, how to effectively reduce the coding bit rate is an urgent problem to be solved.
发明内容Summary of the invention
为了解决上述技术问题,本申请提供了一种视频数据的编码处理方法和计算机存储介质,能够有效降低编码消耗的码率。In order to solve the above technical problems, the present application provides a video data encoding processing method and a computer storage medium, which can effectively reduce the bit rate of encoding consumption.
为了达到上述发明目的,本申请提供一种视频数据的编码处理方法,包括:In order to achieve the above object of the invention, the present application provides a video data encoding processing method, including:
在执行对待编码对象进行编码前,获取待编码对象内的每个编码单元的空域感知信息k si和时域感知信息k ti,其中i为大于等于1的整数; Before performing the encoding of the object to be encoded, obtain the spatial domain sensing information k si and the temporal domain sensing information k ti of each coding unit in the object to be encoded, where i is an integer greater than or equal to 1;
根据每个编码单元的空域感知信息k si和每个编码单元的时域感知信息k ti,计算得到每个编码单元的时空域联合感知信息k piAccording to the spatial domain sensing information k si of each coding unit and the temporal domain sensing information k ti of each coding unit, the temporal and spatial domain joint sensing information k pi of each coding unit is calculated;
利用上述每个编码单元的时空域联合感知信息,计算每个编码单元对应的拉格朗日乘子的调整系数η iCalculate the adjustment coefficient η i of the Lagrange multiplier corresponding to each coding unit by using the joint temporal and spatial domain sensing information of each coding unit described above;
在对上述待编码对象执行编码操作过程中,根据上述调整系数η i和拉格朗日乘子,对上述待编码对象中的每个编码单元进行编码。 During the encoding operation on the object to be encoded, each encoding unit in the object to be encoded is encoded according to the adjustment coefficient η i and the Lagrange multiplier.
在一个示例性实施例中,上述每个编码单元的空域感知信息k si是根据每个编码单元的梯度幅值k gi和/或方差数值k σi来确定的。 In an exemplary embodiment, the above spatial sensing information k si of each coding unit is determined according to the gradient amplitude k gi and / or the variance value k σi of each coding unit.
在一个示例性实施例中,上述每个编码单元的梯度幅值k gi和/或方差数值k σi计算需要用到每个像素值,对YUV序列来说,像素值包括亮度分量Y、色度分量U和色度分量V,取其一计算,或者,取三者加权平均进行计算。In an exemplary embodiment, the calculation of the gradient amplitude k gi and / or the variance value k σi of each coding unit requires each pixel value. For a YUV sequence, the pixel value includes a luma component Y, a chroma component U and a chroma component V; one of the three is used for the calculation, or a weighted average of the three is used.
在一个示例性实施例中,上述每个编码单元的空域感知信息k si是通过如下计算表达式得到的: In an exemplary embodiment, the above spatial domain sensing information k si of each coding unit is obtained by the following calculation expression:
k si=(1-τ)·k gi+τ·k σik si = (1-τ) · k gi + τ · k σi ;
其中,τ是一个常量加权系数,取值范围在[0,1]之间。Among them, τ is a constant weighting coefficient, the value range is [0,1].
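As a non-normative illustration of the weighted combination above, a minimal sketch follows (the helper name and the default value of τ are assumptions, not part of the application):

```python
def spatial_perception(k_g, k_sigma, tau=0.5):
    """k_si = (1 - tau) * k_gi + tau * k_sigma_i.

    k_g and k_sigma are the (already normalized) gradient amplitude and
    variance value of one coding unit; tau is the constant weighting
    coefficient with value range [0, 1]. tau = 0.5 is illustrative only.
    """
    if not 0.0 <= tau <= 1.0:
        raise ValueError("tau must lie in [0, 1]")
    return (1.0 - tau) * k_g + tau * k_sigma
```

Setting τ to 0 or 1 reduces k si to the gradient-only or variance-only case, matching the earlier option of determining k si from k gi and / or k σi.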
在一个示例性实施例中,上述每个编码单元的梯度幅值k gi是通过如下方式得到的,包括: In an exemplary embodiment, the gradient amplitude k gi of each coding unit is obtained as follows, including:
计算第i个编码单元中每个像素的水平方向和竖直方向的梯度幅值;Calculate the horizontal and vertical gradient amplitudes of each pixel in the i-th coding unit;
根据上述每个像素的水平方向和竖直方向的梯度幅值,计算得到第i个编码单元的平均梯度幅值;Calculate the average gradient amplitude value of the i-th coding unit according to the gradient amplitude values of each pixel in the horizontal direction and the vertical direction;
在得到上述待编码对象的编码单元的平均梯度幅值后,计算第i个编码单元的归一化的梯度幅值k giAfter obtaining the average gradient amplitude value of the coding unit of the object to be encoded, the normalized gradient amplitude value k gi of the i-th coding unit is calculated.
在一个示例性实施例中,上述第i个编码单元的归一化的梯度幅值k gi是通过如下计算表达式得到的: In an exemplary embodiment, the normalized gradient amplitude k gi of the i-th coding unit is obtained by the following calculation expression:
Figure PCTCN2019118526-appb-000001
其中,G(i)表示第i个编码单元的平均梯度幅值,N block表示上述待编码对象中的编码单元的总数,其中,j为大于等于1的整数。Where G (i) denotes the average gradient amplitude of the i-th coding unit, and N block denotes the total number of coding units in the above object to be encoded, where j is an integer greater than or equal to 1.
在一个示例性实施例中,上述每个编码单元的方差数值k σi是通过如下方式得到的,包括: In an exemplary embodiment, the variance value k σi of each coding unit is obtained in the following manner, including:
获取第i个的编码单元的像素值与参考图像的参考编码单元的像素值之间的方差数值;Acquiring the variance value between the pixel value of the i-th coding unit and the pixel value of the reference coding unit of the reference image;
在得到上述待编码对象的编码单元的方差数值后,计算第i个编码单元的归一化的方差数值k σiAfter obtaining the variance value of the coding unit of the object to be coded, the normalized variance value k σi of the i-th coding unit is calculated.
在一个示例性实施例中,上述第i个编码单元的归一化的方差数值k σi是通过如下计算表达式得到的: In an exemplary embodiment, the normalized variance value k σi of the i-th coding unit is obtained by the following calculation expression:
Figure PCTCN2019118526-appb-000002
其中,Figure PCTCN2019118526-appb-000003 表示第i个编码单元的方差,N block表示上述待编码对象中的编码单元的总数,c 2是常量系数,其中,j为大于等于1的整数。Where the quantity in Figure PCTCN2019118526-appb-000003 denotes the variance of the i-th coding unit, N block denotes the total number of coding units in the above object to be encoded, c 2 is a constant coefficient, and j is an integer greater than or equal to 1.
在一个示例性实施例中,上述每个编码单元的时域感知信息k ti是编码单元内的运动矢量以及运动补偿计算得到的,其中上述运动补偿为上述待编码对象与预设的参考帧之间的矢量距离。In an exemplary embodiment, the temporal perception information k ti of each coding unit is calculated from the motion vector within the coding unit and from motion compensation, where the motion compensation is the vector distance between the object to be encoded and a preset reference frame.
在一个示例性实施例中,上述每个编码单元的时域感知信息k ti计算需要用到每个像素值,对YUV序列来说,像素值包括亮度分量Y、色度分量U和色度分量V,取其一计算,或者,取三者加权平均进行计算。In an exemplary embodiment, the calculation of the temporal perception information k ti of each coding unit requires each pixel value. For a YUV sequence, the pixel value includes a luma component Y, a chroma component U and a chroma component V; one of the three is used for the calculation, or a weighted average of the three is used.
在一个示例性实施例中,上述每个编码单元的时域感知信息k ti是通过如下计算表达式得到的: In an exemplary embodiment, the time domain perception information k ti of each coding unit is obtained by the following calculation expression:
Figure PCTCN2019118526-appb-000004
其中,(v x,v y)表示编码单元内编码块的运动矢量,d(o,p)表示当前编码单元对应的帧到上述当前编码单元对应参考单元的帧的距离,同一帧中不同编码单元对应的参考单元的帧不同或者相同,o,p表示上述第i个编码单元的坐标信息,o和p均为实数。Where (v x , v y ) denotes the motion vector of the coding block within the coding unit, and d (o, p) denotes the distance from the frame of the current coding unit to the frame of the reference unit of the current coding unit; the reference-unit frames of different coding units in the same frame may be different or the same; o and p denote the coordinate information of the i-th coding unit, and both o and p are real numbers.
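The expression for k ti itself appears only as a formula image in the original. A hedged sketch consistent with the quantities described above (motion vector (v x , v y ) and frame distance d (o, p)) is the motion-vector magnitude scaled by the frame distance; the exact combination is an assumption:

```python
import math

def temporal_perception(v_x, v_y, frame_distance):
    """Hypothetical k_ti: motion intensity per unit frame distance.

    (v_x, v_y): motion vector of the coding block in the coding unit.
    frame_distance: d(o, p), distance from the current frame to the
    frame of this coding unit's reference unit. Dividing the motion
    magnitude by the frame distance is an assumed reading of the
    image formula, not the patent's literal expression.
    """
    if frame_distance <= 0:
        raise ValueError("frame distance must be positive")
    return math.hypot(v_x, v_y) / frame_distance
```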
在一个示例性实施例中,上述每个编码单元的时空域联合感知信息k p(i)是通过如下计算表达式得到的: In an exemplary embodiment, the joint spatio-temporal sensing information k p (i) of each coding unit is obtained by the following calculation expression:
Figure PCTCN2019118526-appb-000005
其中,c是一个常数,与k ti具有相同的数量级,A s为空域感知信息k si的调整参数。 Among them, c is a constant, which has the same order of magnitude as k ti , and A s is an adjustment parameter of the spatial domain sensing information k si .
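The combining formula for k pi is likewise an image in the original. Since the detailed description notes that k pi becomes smaller when k si and k ti grow (texture-complex, high-motion regions), one inverse-form combination using the stated constants c and A s can be sketched as follows; the exact functional form is an assumption:

```python
def joint_perception(k_s, k_t, A_s=1.0, c=1.0):
    """Hypothetical spatio-temporal joint perception k_pi.

    A_s scales the spatial term k_s and c is a constant of the same
    order of magnitude as k_ti. The inverse form below only mirrors
    the described behaviour (larger k_s / k_t -> smaller k_pi); it is
    not the patent's literal image formula.
    """
    return c / (A_s * k_s + k_t + c)
```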
在一个示例性实施例中,上述空域感知信息k si的调整参数A s是通过计算空域感知信息k si的均方误差MSE得到的;或者,通过计算空域感知信息k si的绝对误差和SAD得到的;或者,通过计算空域感知信息k si的hadamard变换算法SATD得到的。In an exemplary embodiment, the adjustment parameter A s of the spatial perception information k si is obtained by calculating the mean square error (MSE) of the spatial perception information k si ; or by calculating the sum of absolute differences (SAD) of the spatial perception information k si ; or by calculating the Hadamard-transform-based SATD of the spatial perception information k si .
在一个示例性实施例中,上述每个编码单元对应的调整系数η i是通过如下方式计算表达式得到的: In an exemplary embodiment, the adjustment coefficient η i corresponding to each coding unit described above is obtained by calculating an expression as follows:
Figure PCTCN2019118526-appb-000006
其中,Figure PCTCN2019118526-appb-000007 是k pi的线性变换结果,N block表示上述待编码对象中的编码单元的总数,j为大于等于1的整数。Where the quantity in Figure PCTCN2019118526-appb-000007 is the result of a linear transformation of k pi , N block denotes the total number of coding units in the above object to be encoded, and j is an integer greater than or equal to 1.
在一个示例性实施例中,上述每个编码单元对应的调整系数η i的取值是按照如下方式计算的: In an exemplary embodiment, the value of the adjustment coefficient η i corresponding to each coding unit is calculated as follows:
Figure PCTCN2019118526-appb-000008
在一个示例性实施例中,上述 Figure PCTCN2019118526-appb-000009 是通过如下计算表达式得到的:In an exemplary embodiment, the above quantity in Figure PCTCN2019118526-appb-000009 is obtained by the following calculation expression:
Figure PCTCN2019118526-appb-000010
其中,a和b均为常量参数,与k pi具有相同的数量级。 Among them, a and b are constant parameters, with the same order of magnitude as k pi .
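The expressions for η i , its clamped value range, and the linear transform above are all presented as formula images. The sketch below therefore only mirrors the structure the text describes — a linear transform a·k pi + b, normalization by the mean over all N block units, and clamping to a bounded range — with every constant (a, b and the bounds) an illustrative assumption:

```python
def adjustment_coefficients(k_p, a=1.0, b=0.0, lo=0.5, hi=2.0):
    """Hypothetical per-unit adjustment coefficients eta_i.

    k_p: joint perception values k_pi for all N_block coding units.
    a, b: constants of the linear transform (same order of magnitude
    as k_pi in the text). lo, hi: assumed clamping bounds standing in
    for the patent's image-formula value-range limit.
    """
    k_tilde = [a * k + b for k in k_p]
    mean = sum(k_tilde) / len(k_tilde)  # average over N_block units
    return [min(hi, max(lo, k / mean)) for k in k_tilde]
```

Normalizing by the mean keeps η i near 1 on average, and the clamp prevents the extreme multiplier outliers that the detailed description warns about.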
在一个示例性实施例中,上述根据上述调整系数η i和拉格朗日乘子,对上述待编码对象中的每个编码单元进行编码,包括: In an exemplary embodiment, the foregoing encoding each encoding unit in the object to be encoded according to the foregoing adjustment coefficient η i and Lagrange multiplier includes:
利用如下计算表达式,得到第i个编码单元的拉格朗日乘子 Figure PCTCN2019118526-appb-000011 ,包括:The Lagrange multiplier Figure PCTCN2019118526-appb-000011 of the i-th coding unit is obtained using the following calculation expression:
Figure PCTCN2019118526-appb-000012
其中,Figure PCTCN2019118526-appb-000013 表示以和方差SSE作为失真指标的拉格朗日乘子;Where the quantity in Figure PCTCN2019118526-appb-000013 denotes the Lagrange multiplier that takes the sum of squared errors (SSE) as the distortion metric;
利用上述第i个编码单元的拉格朗日乘子 Figure PCTCN2019118526-appb-000014 对第i个编码单元进行编码处理。The i-th coding unit is then encoded using the above Lagrange multiplier Figure PCTCN2019118526-appb-000014 of the i-th coding unit.
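Tying the claim above together: if the per-unit multiplier is taken as the SSE-based multiplier scaled by η i (the simple product form is an assumption; the literal expression is an image), mode decision then minimizes the usual cost J = D + λ i ·R. A sketch:

```python
def rd_cost(distortion, rate, lambda_sse, eta_i):
    """Rate-distortion cost of one candidate mode for coding unit i.

    lambda_sse: Lagrange multiplier with SSE as the distortion metric.
    eta_i: adjustment coefficient of this coding unit. The product
    form lambda_i = eta_i * lambda_sse is an assumed reading of the
    image formula.
    """
    lambda_i = eta_i * lambda_sse
    return distortion + lambda_i * rate

def best_mode(candidates, lambda_sse, eta_i):
    """Pick the (distortion, rate, mode) candidate with minimal cost."""
    return min(candidates,
               key=lambda c: rd_cost(c[0], c[1], lambda_sse, eta_i))
```

A larger η i raises λ i , so a perceptually masked coding unit trades distortion the eye tolerates for bit savings.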
为了达到上述发明目的,本申请提供一种计算机存储介质,用于存储计算机程序,其中上述计算机程序通过处理器执行以实现上文任一上述的方法。In order to achieve the above object of the invention, the present application provides a computer storage medium for storing a computer program, where the above computer program is executed by a processor to implement any of the above methods.
与相关技术相比,本申请包括通过在执行对待编码对象进行编码前,获取待编码对象内的每个编码单元的空域感知信息k si和时域感知信息k ti,再根据每个编码单元的空域感知信息k si和每个编码单元的时域感知信息k ti,计算得到每个编码单元的时空域联合感知信息k pi,利用上述每个编码单元的时空域联合感知信息,计算每个编码单元对应的拉格朗日乘子的调整系数η i,最后根据上述调整系数η i和拉格朗日乘子,对上述待编码对象中的每个编码单元进行编码,用于自适应动态调整率失真优化过程中的拉格朗日乘子,从而在保持主观质量基本不变的情况下,有效降低编码消耗的码率。Compared with the related art, in this application, before the object to be encoded is encoded, the spatial perception information k si and the temporal perception information k ti of each coding unit in the object to be encoded are obtained; the spatio-temporal joint perception information k pi of each coding unit is then calculated from the spatial perception information k si and the temporal perception information k ti of each coding unit; the adjustment coefficient η i of the Lagrange multiplier corresponding to each coding unit is calculated using the spatio-temporal joint perception information of each coding unit; and finally each coding unit in the object to be encoded is encoded according to the adjustment coefficient η i and the Lagrange multiplier. This adaptively and dynamically adjusts the Lagrange multiplier in the rate-distortion optimization process, thereby effectively reducing the bit rate consumed by encoding while keeping the subjective quality basically unchanged.
本申请的其它特征和优点将在随后的说明书中阐述,并且,部分地从说明书中变得显而易见,或者通过实施本申请而了解。本申请的目的和其他优点可通过在说明书、权利要求书以及附图中所特别指出的结构来实现和获得。Other features and advantages of the present application will be explained in the subsequent description, and partly become obvious from the description, or be understood by implementing the present application. The purpose and other advantages of the present application can be realized and obtained by the structures particularly pointed out in the description, claims and drawings.
附图说明BRIEF DESCRIPTION
附图用来提供对本申请技术方案的可选地理解,并且构成说明书的一部分,与本申请的实施例一起用于解释本申请的技术方案,并不构成对本申请技术方案的限制。The drawings are used to provide an optional understanding of the technical solutions of the present application, and form a part of the specification. They are used to explain the technical solutions of the present application together with the embodiments of the present application, and do not constitute a limitation on the technical solutions of the present application.
图1为本申请提供的视频数据的编码处理方法的流程图;1 is a flowchart of a video data encoding processing method provided by this application;
图2为本申请提供的基于时空域视觉掩蔽效应的率失真编码优化方法的流程图。FIG. 2 is a flowchart of a rate-distortion coding optimization method based on the visual masking effect in the space-time domain provided by the present application.
具体实施方式detailed description
为使本申请的目的、技术方案和优点更加清楚明白,下文中将结合附图对本申请的实施例进行详细说明。需要说明的是,在不冲突的情况下,本申请中的实施例及实施例中的特征可以相互任意组合。To make the objectives, technical solutions, and advantages of the present application clearer, the embodiments of the present application will be described in detail below with reference to the drawings. It should be noted that the embodiments in the present application and the features in the embodiments can be arbitrarily combined with each other without conflict.
在附图的流程图示出的步骤可以在诸如一组计算机可执行指令的计算机系统中执行。并且,虽然在流程图中示出了逻辑顺序,但是在某些情况下,可以以不同于此处的顺序执行所示出或描述的步骤。The steps shown in the flowcharts of the figures can be performed in a computer system such as a set of computer-executable instructions. And, although a logical order is shown in the flowchart, in some cases, the steps shown or described may be performed in an order different from here.
图1为本申请提供的视频数据的编码处理方法的流程图。图1所示方法,包括:FIG. 1 is a flowchart of a video data encoding processing method provided by this application. The method shown in Figure 1 includes:
步骤101、在执行对待编码对象进行编码前,获取待编码对象内的每个编码单元的空域感知信息k si和时域感知信息k ti,其中i为大于等于1的整数; Step 101: Before performing encoding on the object to be coded, obtain spatial domain sensing information k si and time domain sensing information k ti of each coding unit in the object to be encoded, where i is an integer greater than or equal to 1;
在本步骤中,待编码对象可以是某一视频帧,或者是视频帧中的某个区域;待编码对象包括一个或至少两个编码单元,计算每个编码单元的空域感知信息k si和时域感知信息k tiIn this step, the object to be encoded may be a certain video frame or a certain area in the video frame; the object to be encoded includes one or at least two encoding units, and the spatial domain sensing information k si and time of each encoding unit are calculated Domain awareness information k ti ;
在一个示例性实施例中,每个编码单元的空域感知信息k si是根据每个编码单元的梯度幅值k gi和/或方差数值k σi来确定的; In an exemplary embodiment, the spatial domain sensing information k si of each coding unit is determined according to the gradient amplitude k gi and / or the variance value k σi of each coding unit;
步骤102、根据每个编码单元的空域感知信息k si和每个编码单元的时域感知信息k ti,计算得到每个编码单元的时空域联合感知信息k piStep 102: According to the spatial domain sensing information k si of each coding unit and the temporal domain sensing information k ti of each coding unit, calculate the temporal and spatial domain joint sensing information k pi of each coding unit;
在一个示例性实施例中,每个编码单元的时空域联合感知信息k p(i)是通过如下计算表达式得到的: In an exemplary embodiment, the spatiotemporal joint sensing information k p (i) of each coding unit is obtained by the following calculation expression:
Figure PCTCN2019118526-appb-000015
其中,c是一个常数,与k ti具有相同的数量级,A s为空域感知信息k si的调整参数。 Among them, c is a constant, which has the same order of magnitude as k ti , and A s is an adjustment parameter of the spatial domain sensing information k si .
步骤103、利用每个编码单元的时空域联合感知信息,计算每个编码单元对应的拉格朗日乘子的调整系数η iStep 103: Calculate the adjustment coefficient η i of the Lagrangian multiplier corresponding to each coding unit using the joint sensing information of each coding unit in time and space domains;
其中,每个编码单元对应的调整系数η i是通过如下方式计算表达式得到的: The adjustment coefficient η i corresponding to each coding unit is obtained by calculating the expression as follows:
Figure PCTCN2019118526-appb-000016
其中,Figure PCTCN2019118526-appb-000017 是k pi的线性变换结果,N block表示待编码对象中的编码单元的总数,j为大于等于1的整数。Where the quantity in Figure PCTCN2019118526-appb-000017 is the result of a linear transformation of k pi , N block denotes the total number of coding units in the object to be encoded, and j is an integer greater than or equal to 1.
步骤104、在对待编码对象执行编码操作过程中,根据调整系数η i和拉格朗日乘子,对待编码对象中的每个编码单元进行编码。 Step 104: During the encoding operation of the object to be encoded, encode each encoding unit in the object to be encoded according to the adjustment coefficient η i and the Lagrange multiplier.
在一个示例性的实施例中,利用如下计算表达式,得到第i个编码单元的拉格朗日乘子 Figure PCTCN2019118526-appb-000018 ,包括:In an exemplary embodiment, the Lagrange multiplier Figure PCTCN2019118526-appb-000018 of the i-th coding unit is obtained using the following calculation expression:
Figure PCTCN2019118526-appb-000019
其中,Figure PCTCN2019118526-appb-000020 表示以和方差SSE作为失真指标的拉格朗日乘子;Where the quantity in Figure PCTCN2019118526-appb-000020 denotes the Lagrange multiplier that takes the sum of squared errors (SSE) as the distortion metric;
利用第i个编码单元的拉格朗日乘子 Figure PCTCN2019118526-appb-000021 对第i个编码单元进行编码处理。The i-th coding unit is then encoded using the Lagrange multiplier Figure PCTCN2019118526-appb-000021 of the i-th coding unit.
本申请提供的方法实施例,通过在执行对待编码对象进行编码前,获取待编码对象内的每个编码单元的空域感知信息k si和时域感知信息k ti,再根据每个编码单元的空域感知信息k si和每个编码单元的时域感知信息k ti,计算得到每个编码单元的时空域联合感知信息k pi,利用每个编码单元的时空域联合感知信息,计算每个编码单元对应的拉格朗日乘子的调整系数η i,最后根据调整系数η i和拉格朗日乘子,对待编码对象中的每个编码单元进行编码,用于自适应动态调整率失真优化过程中的拉格朗日乘子,从而在保持主观质量基本不变的情况下,有效降低编码消耗的码率。In the method embodiment provided by this application, before the object to be encoded is encoded, the spatial perception information k si and the temporal perception information k ti of each coding unit in the object to be encoded are obtained; the spatio-temporal joint perception information k pi of each coding unit is then calculated from the spatial perception information k si and the temporal perception information k ti of each coding unit; the adjustment coefficient η i of the Lagrange multiplier corresponding to each coding unit is calculated using the spatio-temporal joint perception information of each coding unit; and finally each coding unit in the object to be encoded is encoded according to the adjustment coefficient η i and the Lagrange multiplier. This adaptively and dynamically adjusts the Lagrange multiplier in the rate-distortion optimization process, thereby effectively reducing the bit rate consumed by encoding while keeping the subjective quality basically unchanged.
下面对本申请提供的方法实施例作进一步说明:The method embodiments provided in this application are further described below:
在实现本申请过程中,发明人发现:采用客观质量评估指标进行编码的方法,由于视频帧之间存在大量的时域冗余信息,而SSIM只考虑了空间上的结构特性,因此在视频质量评估方面的表现并不像图像质量评估那样有效。如果采用利用视觉失真敏感度的编码处理方式,没有考虑时域和空域的内容和视觉感知特性,也存在编码码率过高的问题。In the process of implementing this application, the inventors found that, for methods that encode using objective quality assessment metrics, a large amount of temporal redundancy exists between video frames while SSIM only considers spatial structural characteristics, so its performance in video quality assessment is not as effective as in image quality assessment. If an encoding approach based on visual distortion sensitivity is adopted, the content and visual perception characteristics of the temporal and spatial domains are not considered, and the problem of an excessively high coding bit rate also exists.
鉴于发明人分析得到的原因,本申请提出通过时空域联合感知信息,计算每一个编码单元的拉格朗日乘子调整系数,并在编码过程中对拉格朗日乘子进行自适应调整,再进行调整后的拉格朗日乘子进行编码。In view of the reasons obtained by the inventors' analysis, the present application proposes to calculate the Lagrangian multiplier adjustment coefficient of each coding unit through joint sensing information in the space-time domain, and adaptively adjust the Lagrangian multiplier during the encoding process. Then the adjusted Lagrange multiplier is used for encoding.
在一个示例性实施例中,每个编码单元的梯度幅值k gi和/或方差数值k σi计算需要用到每个像素值,对YUV序列来说,像素值包括亮度分量Y、色度分量U和色度分量V,取其一计算,或者,取三者加权平均进行计算。In an exemplary embodiment, the calculation of the gradient amplitude k gi and / or the variance value k σi of each coding unit requires each pixel value. For a YUV sequence, the pixel value includes a luma component Y, a chroma component U and a chroma component V; one of the three is used for the calculation, or a weighted average of the three is used.
在本示例性实施例中,对YUV序列来说,像素值信息可以采用YUV三个数值的一个,或者,YUV三个数值中选择两个进行加权平均得到,或者,YUV三个数值求加权平均值得到。In this exemplary embodiment, for a YUV sequence, the pixel value information may be one of the three YUV values, a weighted average of two of the three YUV values, or a weighted average of all three YUV values.
在一个示例性的实施例中,每个编码单元的空域感知信息k si是通过如下计算表达式得到的: In an exemplary embodiment, the spatial domain sensing information k si of each coding unit is obtained by the following calculation expression:
k si=(1-τ)·k gi+τ·k σik si = (1-τ) · k gi + τ · k σi ;
其中,τ是一个常量加权系数,取值范围在[0,1]之间。Among them, τ is a constant weighting coefficient, the value range is [0,1].
在本示例性实施例中,可以选择编码单元的梯度幅值k gi和方差数值 k σi共同来确定,以更加精确地确定编码单元的空域感知信息;在由两个数值共同确认时,可以通过为两个数值设置不同的权重完成对空域感知信息的计算。 In this exemplary embodiment, the gradient amplitude k gi of the coding unit and the variance value k σi can be selected together to determine the spatial sensing information of the coding unit more accurately; when the two values are jointly confirmed, it can be passed Set different weights for the two values to complete the calculation of the airspace perception information.
在一个示例性实施例中,每个编码单元的梯度幅值k gi是通过如下方式得到的,包括: In an exemplary embodiment, the gradient amplitude k gi of each coding unit is obtained as follows, including:
计算第i个编码单元中每个像素的水平方向和竖直方向的梯度幅值;Calculate the horizontal and vertical gradient amplitudes of each pixel in the i-th coding unit;
根据每个像素的水平方向和竖直方向的梯度幅值,计算得到第i个编码单元的平均梯度幅值;According to the horizontal and vertical gradient amplitudes of each pixel, the average gradient amplitude of the i-th coding unit is calculated;
在得到待编码对象的编码单元的平均梯度幅值后,计算第i个编码单元的归一化的梯度幅值k giAfter the average gradient amplitude of the coding unit of the object to be encoded is obtained, the normalized gradient amplitude k gi of the i-th coding unit is calculated.
在一个示例性实施例中,编码单元的平均梯度幅值可以通过如下计算表达式来获得,包括:In an exemplary embodiment, the average gradient amplitude of the coding unit can be obtained by the following calculation expression, including:
Figure PCTCN2019118526-appb-000022
其中,G h和G v分别表示每个像素水平方向和竖直方向的梯度,N pixel表示当前编码单元的像素数,r和s为像素的坐标位置,其中,r和s为实数。 Wherein, G h and G v respectively represent the gradient of each pixel in the horizontal direction and the vertical direction, N pixel represents the number of pixels of the current coding unit, r and s are the coordinate positions of the pixels, where r and s are real numbers.
在一个示例性实施例中,第i个编码单元的归一化的梯度幅值k gi是通过如下计算表达式得到的: In an exemplary embodiment, the normalized gradient amplitude k gi of the i-th coding unit is obtained by calculating the expression as follows:
Figure PCTCN2019118526-appb-000023
其中,G(i)表示第i个编码单元的平均梯度幅值,N block表示待编码对象中的编码单元的总数,其中,j为大于等于1的整数。 Where G (i) represents the average gradient amplitude of the i-th coding unit, and N block represents the total number of coding units in the object to be coded, where j is an integer greater than or equal to 1.
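For the gradient steps described here, a minimal sketch follows; the forward-difference gradient operator and the mean normalization across coding units are assumptions standing in for the image formulas:

```python
import math

def block_average_gradient(block):
    """Average per-pixel gradient magnitude G(i) of one coding unit.

    block: 2-D list of pixel values (e.g. luma). Forward differences
    are used for G_h and G_v here; the patent does not fix a gradient
    operator, so this choice is an assumption.
    """
    h, w = len(block), len(block[0])
    total = 0.0
    for r in range(h):
        for s in range(w):
            g_h = block[r][s + 1] - block[r][s] if s + 1 < w else 0
            g_v = block[r + 1][s] - block[r][s] if r + 1 < h else 0
            total += math.hypot(g_h, g_v)
    return total / (h * w)  # divide by N_pixel

def normalized_gradients(blocks):
    """k_gi: each unit's average gradient divided by the mean over all
    N_block units. The mean-normalization form is an assumption; the
    exact expression is an image in the original."""
    g = [block_average_gradient(b) for b in blocks]
    mean = sum(g) / len(g)
    return [x / mean for x in g] if mean else [1.0] * len(g)
```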
在一个示例性实施例中,每个编码单元的方差数值k σi是通过如下方式得到的,包括: In an exemplary embodiment, the variance value k σi of each coding unit is obtained as follows, including:
获取第i个的编码单元的像素值与参考图像的参考编码单元的像素值 之间的方差数值;Acquiring the variance value between the pixel value of the i-th coding unit and the pixel value of the reference coding unit of the reference image;
在得到待编码对象的编码单元的方差数值后,计算第i个编码单元的归一化的方差数值k σiAfter obtaining the variance value of the coding unit of the object to be coded, the normalized variance value k σi of the i-th coding unit is calculated.
在一个示例性实施例中,第i个编码单元的归一化的方差数值k σi是通过如下计算表达式得到的: In an exemplary embodiment, the normalized variance value k σi of the i-th coding unit is obtained by the following calculation expression:
Figure PCTCN2019118526-appb-000024
其中,Figure PCTCN2019118526-appb-000025 表示第i个编码单元的方差,N block表示待编码对象中的编码单元的总数,c 2是常量系数,其中,j为大于等于1的整数。Where the quantity in Figure PCTCN2019118526-appb-000025 denotes the variance of the i-th coding unit, N block denotes the total number of coding units in the object to be encoded, c 2 is a constant coefficient, and j is an integer greater than or equal to 1.
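A hedged sketch of the variance-based factor described here — taking the variance of the pixel differences between each coding unit and its reference unit, then normalizing across all units with the constant c 2 ; both the difference-variance reading and the placement of c 2 are assumptions:

```python
def normalized_variances(cur_blocks, ref_blocks, c2=1e-4):
    """Hypothetical normalized variance values k_sigma_i.

    cur_blocks / ref_blocks: per-unit flat lists of pixel values for
    the coding units and their reference units. The exact expression
    in the patent is an image; this sketch normalizes each unit's
    difference variance (offset by c2) by the mean over N_block units.
    """
    variances = []
    for cur, ref in zip(cur_blocks, ref_blocks):
        diffs = [c - r for c, r in zip(cur, ref)]
        mean_d = sum(diffs) / len(diffs)
        variances.append(sum((d - mean_d) ** 2 for d in diffs) / len(diffs))
    avg = sum(v + c2 for v in variances) / len(variances)
    return [(v + c2) / avg for v in variances]
```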
在一个示例性实施例中,每个编码单元的时域感知信息k ti是编码单元内的运动矢量计算得到的,其中运动矢量是运动搜索最小方差值得到的。In an exemplary embodiment, the temporal perception information k ti of each coding unit is calculated from the motion vector within the coding unit, where the motion vector is obtained by a motion search that minimizes the variance value.
在一个示例性实施例中,每个编码单元的时域感知信息k ti计算需要用到每个像素值,对YUV序列来说,像素值包括亮度分量Y、色度分量U和色度分量V,取其一计算,或者,取三者加权平均进行计算。In an exemplary embodiment, the calculation of the temporal perception information k ti of each coding unit requires each pixel value. For a YUV sequence, the pixel value includes a luma component Y, a chroma component U and a chroma component V; one of the three is used for the calculation, or a weighted average of the three is used.
在本示例性实施例中,对YUV序列来说,像素值信息可以采用YUV三个数值的一个,或者,YUV三个数值中选择两个进行加权平均得到,或者,YUV三个数值求加权平均值得到。In this exemplary embodiment, for a YUV sequence, the pixel value information may be one of the three YUV values, a weighted average of two of the three YUV values, or a weighted average of all three YUV values.
在一个示例性实施例中,每个编码单元的时域感知信息k ti是通过如下计算表达式得到的: In an exemplary embodiment, the time domain perception information k ti of each coding unit is obtained by the following calculation expression:
Figure PCTCN2019118526-appb-000026
其中,(v x,v y)表示编码单元内编码块的运动矢量,d(o,p)表示当前编码单元对应的帧到当前编码单元对应参考单元的帧的距离,同一帧中不同编码单元对应的参考单元的帧不同或者相同,o,p表示第i个编码单元的坐标信息,o和p均为实数。 Where (v x , v y ) represents the motion vector of the coding block in the coding unit, d (o, p) represents the distance between the frame corresponding to the current coding unit and the frame corresponding to the reference unit of the current coding unit, and different coding units in the same frame The frames of the corresponding reference units are different or the same, o, p represent the coordinate information of the i-th coding unit, and o and p are real numbers.
在一个示例性实施例中,空域感知信息k si的调整参数A s是通过计算空 域感知信息k si的均方误差MSE得到的;或者,通过计算空域感知信息k si的绝对误差和SAD得到的;或者,通过计算空域感知信息k si的hadamard变换算法SATD得到的。 In one exemplary embodiment, the spatial perception information k si adjustment parameter A s by calculating the spatial perceptual information k si mean square error MSE obtained; or by calculating the spatial perceptual information k si of absolute difference SAD obtained Or, it is obtained by calculating the SATD of the Hadamard transform algorithm of the spatial domain sensing information k si .
在一个示例性实施例中,每个编码单元对应的调整系数η i是通过如下方式计算表达式得到的: In an exemplary embodiment, the adjustment coefficient η i corresponding to each coding unit is obtained by calculating the expression as follows:
Figure PCTCN2019118526-appb-000027
其中,Figure PCTCN2019118526-appb-000028 是k pi的线性变换结果,N block表示待编码对象中的编码单元的总数,j为大于等于1的整数。Where the quantity in Figure PCTCN2019118526-appb-000028 is the result of a linear transformation of k pi , N block denotes the total number of coding units in the object to be encoded, and j is an integer greater than or equal to 1.
在一个示例性实施例中,每个编码单元对应的调整系数η i的取值是按照如下方式计算的:In an exemplary embodiment, the value of the adjustment coefficient η i corresponding to each coding unit is calculated as follows:
Figure PCTCN2019118526-appb-000029
在上述计算表达式中,对调整系数η i的取值范围进行了限制,有效控制调整系数η i数值过大或过小,造成拉格朗日乘子出现极端异常值,保证数据的正常计算。In the above calculation expression, the value range of the adjustment coefficient η i is limited, which effectively prevents η i from becoming too large or too small and thereby causing extreme outliers in the Lagrange multiplier, ensuring normal calculation of the data.
在一个示例性实施例中,上述 Figure PCTCN2019118526-appb-000030 是通过如下计算表达式得到的:In an exemplary embodiment, the above quantity in Figure PCTCN2019118526-appb-000030 is obtained by the following calculation expression:
Figure PCTCN2019118526-appb-000031
其中,a和b均为常量参数,与k pi具有相同的数量级。 Among them, a and b are constant parameters, with the same order of magnitude as k pi .
时空域联合感知信息k pi同时考虑了空域质地复杂度和时域运动强度等视频内容特性。对于质地复杂和运动剧烈区域,空域感知信息k si和时域感知信息k ti会相对比较大,从而导致时空域联合感知信息k pi变小,通过对时空域联合感知信息k pi进行线性转换,可以消除上述变化,以更好地将其应用在率失真优化中。The spatio-temporal joint perception information k pi simultaneously considers video content characteristics such as spatial texture complexity and temporal motion intensity. For regions with complex texture and intense motion, the spatial perception information k si and the temporal perception information k ti are relatively large, which causes the spatio-temporal joint perception information k pi to become smaller; by linearly transforming k pi , this variation can be eliminated so that it is better applied in rate-distortion optimization.
This application mainly uses human visual characteristics such as the spatio-temporal visual masking effect as the starting point for visual perceptual coding optimization. Optionally, regarding the spatial masking effect, distortion in regions with complex texture is much harder for the human eye to perceive than distortion in flat regions; that is, the human eye is not sensitive to distortion in regions with complex texture. Such regions can therefore accommodate or hide more visual distortion than flat regions. Similarly, regarding the temporal masking effect, the details and distortion of objects in regions with intense motion are harder for the human eye to perceive than in static or slowly moving regions. As motion speeds up, object clarity decreases further, so the human eye is not sensitive to distortion in regions of intense motion. Consequently, when the same distortion is introduced, regions with complex texture or intense motion yield higher subjective visual quality than flat or static regions. Based on the above spatial and temporal masking effects, the implementation first calculates the spatial and temporal perception factors of each coding unit, and then adaptively adjusts the Lagrange multiplier in the rate-distortion optimization process during encoding according to the synthesized spatio-temporal joint perception factor.
The embodiments provided by this application are further described below:
Figure 2 is a flowchart of the rate-distortion coding optimization method based on the spatio-temporal visual masking effect provided by this application. The method shown in Figure 2 includes:
Step 201: Before encoding a video frame, calculate the gradient amplitudes of all coding units in the object to be encoded, and normalize the gradient value of each coding unit by the mean gradient of all coding units of the current frame, obtaining the normalized gradient amplitude k g of each coding unit.
In this exemplary embodiment, the horizontal and vertical gradient information can be calculated using the Sobel gradient operator.
In an exemplary embodiment, the average gradient amplitude of a coding unit can be obtained by the following expression:
Figure PCTCN2019118526-appb-000032
where G h and G v denote the horizontal and vertical gradients of each pixel respectively, N pixel denotes the number of pixels in the current coding unit, and r and s are the coordinate positions of the pixels, where r and s are real numbers.
After the gradient amplitude of each coding unit is obtained, the normalized gradient amplitude k gi of each coding unit is calculated based on the average gradient amplitude of the frame, as shown in equation (2).
Figure PCTCN2019118526-appb-000033
where G(i) denotes the gradient amplitude of the i-th coding unit calculated according to equation (1), and N block denotes the number of coding units in the object to be encoded.
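The gradient computation of step 201 (equations (1) and (2)) can be sketched as follows. Since the published equations appear here only as images, the 3x3 Sobel kernels and the |G h| + |G v| magnitude below are common choices, not values quoted from the patent:

```python
# Sketch of step 201: average Sobel gradient per coding unit, normalized by
# the frame mean. Kernel choice and L1 magnitude are illustrative assumptions.
SOBEL_H = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_V = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def avg_gradient(block):
    """G(i): mean gradient magnitude over the N_pixel pixels of one unit."""
    h, w = len(block), len(block[0])
    total = 0.0
    for r in range(1, h - 1):          # skip the 1-pixel border so the
        for s in range(1, w - 1):      # 3x3 kernel stays inside the block
            gh = sum(SOBEL_H[u][v] * block[r - 1 + u][s - 1 + v]
                     for u in range(3) for v in range(3))
            gv = sum(SOBEL_V[u][v] * block[r - 1 + u][s - 1 + v]
                     for u in range(3) for v in range(3))
            total += abs(gh) + abs(gv)
    return total / (h * w)

def normalized_gradients(blocks):
    """k_gi = G(i) / mean_j G(j) over the N_block units of the frame."""
    g = [avg_gradient(b) for b in blocks]
    mean = sum(g) / len(g)
    return [x / mean for x in g]
```

By construction, the k gi values average to 1 over the frame, so they act as relative texture-complexity weights.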
Step 202: Before encoding a frame, calculate the variance of all coding units in the frame, and normalize the variance of each coding unit by the mean variance of all coding units of the current frame.
The normalized variance value of each coding unit is shown in equation (3).
Figure PCTCN2019118526-appb-000034
where
Figure PCTCN2019118526-appb-000035
denotes the variance of the i-th coding unit, N block denotes the number of coding units in the current frame, and c 2 is a constant coefficient of the SSIM model used to ensure numerical stability.
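Step 202 can be sketched as follows. Because the published equation (3) appears only as an image, both the (variance + c 2)/(mean variance + c 2) form and the SSIM constant c 2 = (0.03·255)² for 8-bit video are illustrative assumptions:

```python
# Sketch of step 202: per-unit variance normalized by the frame-average
# variance, with the SSIM stability constant c2 (assumed value for 8-bit
# video) keeping the ratio well-defined for flat regions.
def variance(block):
    vals = [p for row in block for p in row]
    mean = sum(vals) / len(vals)
    return sum((p - mean) ** 2 for p in vals) / len(vals)

def normalized_variances(blocks, c2=(0.03 * 255) ** 2):
    v = [variance(b) for b in blocks]
    mean_v = sum(v) / len(v)
    return [(x + c2) / (mean_v + c2) for x in v]
```

With this form, textured units score above 1 and flat units below 1, while the frame average of the normalized values remains 1.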
Step 203: According to the results of steps 201 and 202, weight the gradient value and the variance value of each coding unit to obtain the spatial domain perception factor of each coding unit.
Combining the results of equations (2) and (3), the spatial domain perception factor k si can be obtained as a weighted combination of k gi and k σi, as shown in equation (4), where τ is a constant weighting coefficient with a value range of [0, 1].
k si = (1-τ)·k gi + τ·k σi        (4)
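Equation (4) can be applied directly; a minimal sketch, with τ = 0.5 as an arbitrary illustrative choice:

```python
def spatial_factor(k_g, k_sigma, tau=0.5):
    """Equation (4): k_si = (1 - tau)*k_gi + tau*k_sigma_i, tau in [0, 1]."""
    if not 0.0 <= tau <= 1.0:
        raise ValueError("tau must lie in [0, 1]")
    return (1.0 - tau) * k_g + tau * k_sigma
```

Setting τ = 0 uses only the gradient term, and τ = 1 uses only the variance term.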
Step 204: Before encoding a video frame, perform motion estimation with the previous frame as the reference frame, calculate the motion vectors and residuals of all coding units in the current frame, and normalize the motion vector intensity of each coding unit by the mean motion vector intensity of all coding units of the current frame, obtaining the temporal domain perception factor k ti.
Step 205: First, perform motion vector estimation on all 16x16 coding blocks of the current coding unit, and then synthesize the motion intensity of the current coding unit according to equation (5).
Figure PCTCN2019118526-appb-000036
where (v x, v y) denotes the motion vector of a coding block in the current coding unit, and d(i, j) denotes the distance from the current frame to its reference frame, which can be the difference between the POC (picture order count) of the current frame and that of its reference frame.
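Steps 204 and 205 can be sketched as follows. The published equation (5) appears only as an image, so averaging the per-block motion-vector magnitudes and dividing by the POC distance is a plausible reading, not the verbatim formula:

```python
import math

# Sketch of steps 204-205 (assumed synthesis): per-unit motion intensity as
# the mean magnitude of its 16x16 block motion vectors scaled by the POC
# distance d, then normalized by the frame average to obtain k_ti.
def motion_intensity(block_mvs, poc_dist):
    mags = [math.hypot(vx, vy) for vx, vy in block_mvs]
    return sum(mags) / (len(mags) * poc_dist)

def temporal_factors(unit_mvs, poc_dist=1):
    mi = [motion_intensity(mvs, poc_dist) for mvs in unit_mvs]
    mean_mi = sum(mi) / len(mi)
    return [x / mean_mi for x in mi]
```

As with k gi, the k ti values average to 1 across the frame, so fast-moving units stand out relative to static ones.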
Step 206: Based on the quality prediction model MOSp, synthesize the spatial and temporal domain perception factors obtained in steps 203 and 205 into a spatio-temporal joint perception factor.
MOSp is a common video quality prediction model, as shown in equation (6), where k is a preset coefficient.
MOSp=1-k·MSE         (6)MOSp = 1-k · MSE (6)
Based on the mathematical model of MOSp in equation (6), after the spatial domain perception factor k si and the temporal domain perception factor k ti are obtained through steps 203 and 205, the spatio-temporal joint perception factor k pi of each coding unit is as shown in equation (7).
Figure PCTCN2019118526-appb-000037
where c is a constant of the same order of magnitude as k ti.
Step 207: Calculate the Lagrange multiplier adjustment coefficient of each coding unit, and adaptively and dynamically adjust the Lagrange multiplier during encoding.
The spatio-temporal joint perception factor k pi, derived from MOSp, takes into account video content characteristics such as spatial texture complexity and temporal motion intensity. For regions with complex texture and intense motion, the spatial domain perception factor k si and the temporal domain perception factor k ti are relatively large, which makes the spatio-temporal joint perception factor k pi smaller. To better apply it in rate-distortion optimization, a new distortion metric D p related to MSE is first defined, as shown in equation (8).
Figure PCTCN2019118526-appb-000038
where
Figure PCTCN2019118526-appb-000039
is the linear transformation result of k p, as shown in equation (9); a and b are both constant parameters of the same order of magnitude as k p. According to equation (8), under the same distortion, regions with complex texture and intense motion have a larger
Figure PCTCN2019118526-appb-000040
factor and can therefore hide more visual distortion, which is consistent with the visual masking effects in the spatial and temporal domains.
Figure PCTCN2019118526-appb-000041
Then, replacing the distortion D of the original rate-distortion equation with the newly defined distortion metric D p gives the following relationship:
Figure PCTCN2019118526-appb-000042
which can be further simplified to:
Figure PCTCN2019118526-appb-000043
It can be seen from equation (11) that the change to the distortion D has been transferred to the Lagrange multiplier. In addition, the bit rate consumed by a coding unit and the distortion it produces generally follow the relationship model below:
Figure PCTCN2019118526-appb-000044
where r(d) denotes the bit rate consumed by the coding unit, d denotes the distortion (SSE) of the coding unit, σ 2 denotes the variance of the coding distortion of the coding unit, α is a constant coefficient, and N pixel denotes the number of pixels in the current coding unit. According to the above rate-distortion model, the Lagrange multiplier corresponding to the new distortion metric D p can be obtained, as shown in equation (13), where N block denotes the number of coding units and η i is the adaptive adjustment coefficient.
Figure PCTCN2019118526-appb-000045
According to the above analysis, for regions with complex texture and intense motion, the computed
Figure PCTCN2019118526-appb-000046
is relatively large. According to the visual masking effect, these regions can hide more visual distortion, so rate-distortion optimization should tend to allocate fewer bits to them, which is equivalent to selecting a larger Lagrange multiplier for them during encoding. Therefore, in actual encoding, the Lagrange multiplier of the i-th coding unit is adaptively adjusted according to equation (13). In addition, to prevent extreme outliers, the value range of the adaptive coefficient η i is limited, as shown in equation (14).
Figure PCTCN2019118526-appb-000047
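The adjustment of step 207 can be sketched as follows. Equations (13) and (14) appear only as images, so the rule η i = k̃ i / mean(k̃) and the clipping bounds [0.5, 2.0] below are illustrative assumptions consistent with the surrounding analysis; only the form λ i = η i · λ SSE is taken from claim 17:

```python
# Sketch of equations (13)-(14) / step 207: each unit's Lagrange multiplier
# is scaled by an adaptive coefficient eta_i derived from the linearly
# transformed joint perception factor k_tilde (a*k_pi + b). The eta rule and
# the clamp bounds are assumptions, not the published formulas.
def adjusted_lambdas(lambda_sse, k_tilde, eta_lo=0.5, eta_hi=2.0):
    mean_kt = sum(k_tilde) / len(k_tilde)
    out = []
    for kt in k_tilde:
        eta = kt / mean_kt                   # larger k_tilde -> larger eta
        eta = min(max(eta, eta_lo), eta_hi)  # equation (14): bound eta_i
        out.append(eta * lambda_sse)
    return out
```

Units with large k̃ (complex texture, intense motion) receive a larger multiplier and hence fewer bits, while the clamp keeps any single unit from drifting to an extreme value.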
The method provided by this application example comprehensively considers content characteristics such as spatial texture complexity and temporal motion intensity, and synthesizes a spatio-temporal joint perception factor based on the MOSp (perceptual Mean Opinion Score) subjective quality prediction model, which is used to adaptively and dynamically adjust the Lagrange multiplier during rate-distortion optimization, thereby effectively reducing the bit rate consumed by encoding while keeping the subjective quality basically unchanged.
Compared with the related art, the coding bit rate can be effectively reduced while the subjective quality of the video sequence remains basically unchanged. Optionally, with the subjective perceptual quality basically unchanged, for standard test sequences with global motion (taking the HEVC CTC sequences as an example), more than 10% of the bit rate can be saved compared with the HEVC standard reference model HM, with the bit rate reduced by 10.32% on average and SSIM reduced by 0.00253 on average.
This application also provides a computer storage medium for storing a computer program, where the computer program is executed by a processor to implement any of the above methods.
Those of ordinary skill in the art can understand that all or some of the steps in the methods disclosed above, and the functional modules/units in the systems and devices, may be implemented as software, firmware, hardware, and appropriate combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be executed cooperatively by several physical components. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules, or other data). Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. In addition, it is well known to those of ordinary skill in the art that communication media typically contain computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery media.
Industrial applicability
Other features and advantages of this application will be set forth in the subsequent description and will in part become apparent from the description, or be understood by implementing this application. The objectives and other advantages of this application can be realized and obtained by the structures particularly pointed out in the description, the claims, and the drawings.

Claims (18)

  1. A video data encoding processing method, comprising:
    before encoding an object to be encoded, acquiring spatial domain sensing information k si and temporal domain sensing information k ti of each coding unit in the object to be encoded, where i is an integer greater than or equal to 1;
    calculating spatio-temporal joint sensing information k pi of each coding unit according to the spatial domain sensing information k si and the temporal domain sensing information k ti of each coding unit;
    calculating an adjustment coefficient η i of a Lagrange multiplier corresponding to each coding unit by using the spatio-temporal joint sensing information of each coding unit; and
    during an encoding operation on the object to be encoded, encoding each coding unit in the object to be encoded according to the adjustment coefficient η i and the Lagrange multiplier.
  2. The method according to claim 1, wherein the spatial domain sensing information k si of each coding unit is determined according to a gradient amplitude k gi and/or a variance value k σi of the coding unit.
  3. The method according to claim 2, wherein the calculation of the gradient amplitude k gi and/or the variance value k σi of each coding unit requires each pixel value; for a YUV sequence, the pixel value comprises a luminance component Y, a chrominance component U, and a chrominance component V, and the calculation uses one of the three, or a weighted average of the three.
  4. The method according to claim 2, wherein the spatial domain sensing information k si of each coding unit is obtained by the following expression:
    k si = (1-τ)·k gi + τ·k σi;
    where τ is a constant weighting coefficient with a value range of [0, 1].
  5. The method according to any one of claims 2 to 4, wherein the gradient amplitude k gi of each coding unit is obtained in the following manner:
    calculating horizontal and vertical gradient amplitudes of each pixel in the i-th coding unit;
    calculating an average gradient amplitude of the i-th coding unit according to the horizontal and vertical gradient amplitudes of each pixel; and
    after the average gradient amplitudes of the coding units of the object to be encoded are obtained, calculating a normalized gradient amplitude k gi of the i-th coding unit.
  6. The method according to claim 5, wherein the normalized gradient amplitude k gi of the i-th coding unit is obtained by the following expression:
    Figure PCTCN2019118526-appb-100001
    where G(i) denotes the average gradient amplitude of the i-th coding unit, and N block denotes the total number of coding units in the object to be encoded, where j is an integer greater than or equal to 1.
  7. The method according to claim 2 or 3, wherein the variance value k σi of each coding unit is obtained in the following manner:
    acquiring a variance value between pixel values of the i-th coding unit and pixel values of a reference coding unit of a reference image; and
    after the variance values of the coding units of the object to be encoded are obtained, calculating a normalized variance value k σi of the i-th coding unit.
  8. The method according to claim 7, wherein the normalized variance value k σi of the i-th coding unit is obtained by the following expression:
    Figure PCTCN2019118526-appb-100002
    where
    Figure PCTCN2019118526-appb-100003
    denotes the variance of the i-th coding unit, N block denotes the total number of coding units in the object to be encoded, and c 2 is a constant coefficient, where j is an integer greater than or equal to 1.
  9. The method according to claim 1 or 2, wherein the temporal domain sensing information k ti of each coding unit is calculated from motion vectors within the coding unit and motion compensation, where the motion compensation is a vector distance between the object to be encoded and a preset reference frame.
  10. The method according to claim 9, wherein the calculation of the temporal domain sensing information k ti of each coding unit requires each pixel value; for a YUV sequence, the pixel value comprises a luminance component Y, a chrominance component U, and a chrominance component V, and the calculation uses one of the three, or a weighted average of the three.
  11. The method according to claim 9, wherein the temporal domain sensing information k ti of each coding unit is obtained by the following expression:
    Figure PCTCN2019118526-appb-100004
    where (v x, v y) denotes a motion vector of a coding block in the coding unit, and d(o, p) denotes a distance from the frame corresponding to the current coding unit to the frame of the reference unit corresponding to the current coding unit; the frames of the reference units corresponding to different coding units in a same frame are different or the same; o and p denote coordinate information of the i-th coding unit, and o and p are both real numbers.
  12. The method according to claim 1, wherein the spatio-temporal joint sensing information k p(i) of each coding unit is obtained by the following expression:
    Figure PCTCN2019118526-appb-100005
    where c is a constant of the same order of magnitude as k ti, and A s is an adjustment parameter of the spatial domain sensing information k si.
  13. The method according to claim 12, wherein the adjustment parameter A s of the spatial domain sensing information k si is obtained by calculating the mean square error (MSE) of the spatial domain sensing information k si, by calculating the sum of absolute differences (SAD) of the spatial domain sensing information k si, or by calculating the SATD of the spatial domain sensing information k si using the Hadamard transform algorithm.
  14. The method according to claim 1, 11, or 12, wherein the adjustment coefficient η i corresponding to each coding unit is obtained by the following expression:
    Figure PCTCN2019118526-appb-100006
    where
    Figure PCTCN2019118526-appb-100007
    is the linear transformation result of k pi, and N block denotes the total number of coding units in the object to be encoded, where j is an integer greater than or equal to 1.
  15. The method according to claim 14, wherein the value of the adjustment coefficient η i corresponding to each coding unit is calculated as follows:
    Figure PCTCN2019118526-appb-100008
  16. The method according to claim 14, wherein
    Figure PCTCN2019118526-appb-100009
    is obtained by the following expression:
    Figure PCTCN2019118526-appb-100010
    where a and b are both constant parameters of the same order of magnitude as k pi.
  17. The method according to claim 1, wherein the encoding each coding unit in the object to be encoded according to the adjustment coefficient η i and the Lagrange multiplier comprises:
    obtaining the Lagrange multiplier
    Figure PCTCN2019118526-appb-100011
    of the i-th coding unit by using the following expression:
    Figure PCTCN2019118526-appb-100012
    where
    Figure PCTCN2019118526-appb-100013
    denotes the Lagrange multiplier with the sum of squared errors (SSE) as the distortion metric; and
    encoding the i-th coding unit by using the Lagrange multiplier
    Figure PCTCN2019118526-appb-100014
    of the i-th coding unit.
  18. A computer storage medium storing a computer program, wherein the computer program is executed by a processor to implement the method according to any one of claims 1 to 17.
PCT/CN2019/118526 2018-11-14 2019-11-14 Video data encoding processing method and computer storage medium WO2020098751A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811353976.1A CN111193931B (en) 2018-11-14 2018-11-14 Video data coding processing method and computer storage medium
CN201811353976.1 2018-11-14

Publications (1)

Publication Number Publication Date
WO2020098751A1 true WO2020098751A1 (en) 2020-05-22

Family

ID=70710451

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/118526 WO2020098751A1 (en) 2018-11-14 2019-11-14 Video data encoding processing method and computer storage medium

Country Status (2)

Country Link
CN (1) CN111193931B (en)
WO (1) WO2020098751A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111970511B (en) * 2020-07-21 2023-05-19 上海交通大学 VMAF-based perceptual video rate distortion coding optimization method and device
US11895330B2 (en) 2021-01-25 2024-02-06 Lemon Inc. Neural network-based video compression with bit allocation
CN113099226B (en) * 2021-04-09 2023-01-20 杭州电子科技大学 Multi-level perception video coding algorithm optimization method for smart court scene
CN114554219A (en) * 2022-02-21 2022-05-27 翱捷科技股份有限公司 Rate distortion optimization method and device based on motion detection
CN114915789B (en) * 2022-04-13 2023-03-14 中南大学 Method, system, device and medium for optimizing Lagrange multiplier between frames
CN117651148B (en) * 2023-11-01 2024-07-19 广东联通通信建设有限公司 Terminal management and control method for Internet of things

Citations (7)

Publication number Priority date Publication date Assignee Title
JP2007329780A (en) * 2006-06-09 2007-12-20 Nippon Telegr & Teleph Corp <Ntt> Method, apparatus and program for encoding moving image and recording medium recording the program
JP2008283599A (en) * 2007-05-14 2008-11-20 Nippon Telegr & Teleph Corp <Ntt> Method, apparatus and program for encoding parameter selection, and recording medium for the program
CN103096076A (en) * 2012-11-29 2013-05-08 中国科学院研究生院 Video encoding method
CN103607590A (en) * 2013-11-28 2014-02-26 北京邮电大学 High efficiency video coding sensing rate-distortion optimization method based on structural similarity
CN104539962A (en) * 2015-01-20 2015-04-22 北京工业大学 Layered video coding method fused with visual perception features
CN106303547A (en) * 2015-06-08 2017-01-04 中国科学院深圳先进技术研究院 3 d video encoding method and apparatus
CN107222742A (en) * 2017-07-05 2017-09-29 中南大学 Video coding Merge mode quick selecting methods and device based on time-space domain correlation

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN101778275B (en) * 2009-01-09 2012-05-02 深圳市融创天下科技股份有限公司 Image processing method of self-adaptive time domain and spatial domain resolution ratio frame
CN104301724B (en) * 2014-10-17 2017-12-01 华为技术有限公司 Method for processing video frequency, encoding device and decoding device


Also Published As

Publication number Publication date
CN111193931B (en) 2023-04-07
CN111193931A (en) 2020-05-22

Similar Documents

Publication Publication Date Title
WO2020098751A1 (en) Video data encoding processing method and computer storage medium
JP6698077B2 (en) Perceptual optimization for model-based video coding
US10097851B2 (en) Perceptual optimization for model-based video encoding
US10091507B2 (en) Perceptual optimization for model-based video encoding
US10616594B2 (en) Picture encoding device and picture encoding method
CN103179394B (en) Region-based I-frame bit rate control method for stable video quality
JP6615346B2 (en) Method, terminal, and non-volatile computer-readable storage medium for real-time video noise reduction in encoding process
WO2016011796A1 (en) Adaptive inverse-quantization method and apparatus in video coding
CN108063944B (en) Perceptual bit rate control method based on visual saliency
WO2014139396A1 (en) Video coding method using at least evaluated visual quality and related video coding apparatus
CN109068137A (en) Region-of-interest aware video coding
CN111970511B (en) VMAF-based perceptual video rate distortion coding optimization method and device
CN110139112B (en) Video coding method based on JND model
CN103124347A (en) Method for guiding multi-view video coding quantization process by visual perception characteristics
US20200068200A1 (en) Methods and apparatuses for encoding and decoding video based on perceptual metric classification
WO2022021422A1 (en) Video coding method and system, coder, and computer storage medium
KR101007381B1 (en) Apparatus for video encoding considering region of interest
US10110893B2 (en) Method and device for calculating distortion of a video being affected by compression artifacts and channel artifacts
CN115567712A (en) Perceptual rate control method and device for screen content video coding based on human-eye just-noticeable distortion
CN112738518B (en) Perception-based rate control method for CTU (coding tree unit) level video coding
CN111757112B (en) Perceptual rate control method for HEVC (high efficiency video coding) based on just noticeable distortion
CN115967806B (en) Data frame coding control method, system and electronic equipment
KR100950196B1 (en) Method for video encoding
Cai et al. AVS encoding optimization with perceptual just noticeable distortion model
Hoffmann et al. Modelling image completion distortions in texture analysis-synthesis coding

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application
Ref document number: 19885178
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: PCT application non-entry in European phase
Ref document number: 19885178
Country of ref document: EP
Kind code of ref document: A1