WO2020082485A1 - Predictive quantization coding method and video compression system - Google Patents

Predictive quantization coding method and video compression system

Info

Publication number
WO2020082485A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
residual
quantization
processed
component
Prior art date
Application number
PCT/CN2018/117216
Other languages
English (en)
French (fr)
Inventor
岳庆冬
冉文方
李雯
Original Assignee
西安科锐盛创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 西安科锐盛创新科技有限公司
Publication of WO2020082485A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10: using adaptive coding
    • H04N 19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/124: Quantisation
    • H04N 19/126: Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • H04N 19/103: Selection of coding mode or of prediction mode
    • H04N 19/11: Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N 19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/119: Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N 19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/146: Data rate or code amount at the encoder output
    • H04N 19/147: Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N 19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/182: the unit being a pixel
    • H04N 19/186: the unit being a colour or a chrominance component
    • H04N 19/50: using predictive coding
    • H04N 19/503: involving temporal prediction
    • H04N 19/51: Motion estimation or motion compensation

Definitions

  • the invention belongs to the technical field of compression coding, and in particular relates to a predictive quantization coding method and a video compression system.
  • the predictive quantization coding method is a common method of compression coding.
  • the existing predictive quantization coding method mainly has the following problems: the predicted pixel component is prone to misjudgment, which affects the prediction result; the correlation between pixel textures is not fully exploited; the theoretical limit entropy and the computational complexity cannot be further reduced; and the data compression ratio and distortion loss after predictive quantization compression cannot be further reduced.
  • the present invention proposes a predictive quantization coding method and a video compression system, which can effectively reduce the code stream transmission bandwidth, make full use of texture correlation for predictive coding, and adaptively perform quantization coding to further reduce the theoretical limit entropy and complexity.
  • a predictive quantization coding method proposed by an embodiment of the present invention includes the steps of:
  • step (f): repeat steps (b) to (e), taking each of the several pixel components as the pixel component to be processed to obtain the corresponding prediction residual, so as to form a prediction residual code stream;
  • dividing the pixel to be processed into a plurality of pixel components includes dividing the pixel to be processed into an R pixel component, a G pixel component, and a B pixel component.
  • step (d) includes the following steps:
  • step (d2) includes the following steps:
  • the positional relationship between the pixel component to be processed and the remaining pixel components includes: a pixel component that is closer to the pixel component to be processed has a larger positional relationship weight, and vice versa.
  • step (h) includes:
  • step (h2) includes:
  • step (h3) includes:
  • the fluctuation coefficient k satisfies:
  • lossres i is the value of the i-th bit of the first residual loss
  • pixnum none0 is the number of non-zeros in the first residual loss
  • another embodiment of the present invention provides a video compression system, including a memory and at least one processor coupled to the memory, the at least one processor being configured to perform the predictive quantization coding method described in any of the above embodiments.
  • the predictive quantization coding method and video compression system of the present invention can effectively reduce the transmission bandwidth of the code stream, make full use of texture correlation for predictive coding, and adaptively perform quantization coding, further reducing theoretical limit entropy and complexity.
  • FIG. 1 is a schematic flowchart of a predictive quantization coding method according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of the principle of a predictive quantization coding method provided by an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a pixel R component in a predictive quantization coding method provided by an embodiment of the present invention
  • FIG. 4 is a schematic diagram of a calculation principle of a texture direction gradient of a pixel component to be processed in a predictive quantization coding method according to an embodiment of the present invention
  • FIG. 5 is a schematic diagram of a calculation principle of a reference direction in a predictive quantization coding method according to an embodiment of the present invention
  • FIG. 6 is a schematic structural diagram of a video compression system according to an embodiment of the present invention.
  • FIG. 1 is a schematic diagram of a predictive quantization coding method according to an embodiment of the present invention.
  • the method may include steps:
  • the predictive quantization method of the present invention effectively reduces the transmission bandwidth of the code stream, makes full use of texture correlation for predictive coding, and adaptively performs quantization coding to further reduce the theoretical limit entropy and complexity.
  • FIG. 2 is a schematic diagram of the principle of a predictive quantization coding method according to an embodiment of the present invention.
  • on the basis of the foregoing embodiment, this embodiment includes all of its content and focuses on describing the predictive quantization coding method in detail.
  • the predictive quantization coding method includes the following steps:
  • S01 Obtain any pixel of the image to be processed as the pixel to be processed; specifically, the pixels of the image to be processed may be sequentially acquired from the left to the right as pixels to be processed.
  • the pixel to be processed may also be divided into four pixel components of RGBY, or four pixel components of RGBW, etc., and the component splitting method is not specifically limited here.
  • the texture direction gradient is a vector value, including two features of the vector direction of the texture direction gradient and the size of the texture direction gradient.
  • the texture direction gradient is determined by the pixel components around the pixel component to be processed, and for the surrounding components of the pixel component to be processed, N texture direction gradients G1 to GN of the pixel component to be processed are determined;
  • FIG. 3 is a schematic diagram of R pixel components in a predictive quantization coding method provided by an embodiment of the present invention
  • FIG. 4 is a schematic diagram of the calculation principle of the texture direction gradient of the pixel component to be processed in a predictive quantization coding method according to an embodiment of the present invention.
  • one embodiment is to obtain the pixel components at a pixel distance of 0 around the O pixel component, namely the N, H, I, and J pixel components; vector lines are drawn from the O pixel component to the J, I, H, and N pixel components respectively, the direction of the vector line from the O pixel component to the J pixel component is taken as the vector direction of the first texture gradient, and the absolute value of the difference between the J pixel component and the O pixel component is taken as the magnitude of the first texture gradient, giving the first texture gradient (45°); similarly, the second texture direction gradient (90°), the third texture direction gradient (135°), and the fourth texture direction gradient (180°) can be obtained from the I pixel component, the H pixel component, and the N pixel component, respectively.
  • another implementation manner is: the pixel components at a pixel distance of 1 around the O pixel component are acquired, namely the M, G, A, B, C, D, E, and F pixel components.
  • similarly, the corresponding 8 texture direction gradients can also be obtained.
  • N texture direction gradients corresponding to the G component and the B component of the pixel to be processed can be obtained respectively.
  • the N texture direction gradients G1 to GN of the texture reference component of the pixel component to be processed are vector-weighted to obtain the first weighted gradient BG.
  • the weighting formula is as follows:
  • BG = w1 × G1 + w2 × G2 + ... + wN × GN
  • w1, w2 ... wN are weighting coefficients, which may be the same or different;
  • w1, w2 ... wN may be fixed values set in advance. Furthermore, when configuring the relative sizes of w1, w2 ... wN, empirical values can be considered. For example, from past experience, the direction of the texture direction gradient G1 may be more suitable for the actual situation of the prediction of this image. Then, w1 can be configured with a value that is more suitable for the actual situation of the image for prediction (for example, w1 can be configured to be small) to increase the weight in the direction of the texture direction gradient G1.
  • multiple sets of values of w1, w2 ... wN are selected to obtain multiple first weighted gradients, and the first weighted gradient with the minimum vector magnitude among them is the first weighted gradient optimal value BGbstR.
  • the first weighted gradient optimal values BGbstG and BGbstB of the G component and the B component of the pixel to be processed can be obtained respectively.
  • S052 Obtain a second weighted gradient optimal value according to the first weighted gradient optimal value and the positional relationship between the pixel component to be processed and the remaining pixel components;
  • the second weighted gradient optimal value of the R component of the pixel to be processed can be obtained by vector addition of the first weighted gradient optimal values of the R component, G component, and B component obtained in step S051, satisfying the following formula:
  • BG R = t1 R × BGbst R + t2 R × BGbst G + t3 R × BGbst B
  • BG R is the second weighted gradient optimal value of the R component of the pixel to be processed
  • t1 R, t2 R, and t3 R are the weighting coefficients of the first weighted gradient optimal values of the R component, G component, and B component, which may be the same or different;
  • the distance to the R component of the pixel to be processed is determined according to the order in which the pixel is divided into pixel components; for example, if the division order is R component, G component, B component, then the distance from the R component to the G component is smaller than the distance from the R component to the B component.
  • the second weighted gradient optimal value BG G of the G component of the pixel to be processed and the second weighted gradient optimal value BG B of the B component of the pixel to be processed can be obtained.
  • the second weighted gradient optimal values BG R , BG G , and BG B respectively satisfy:
  • BG R = 0.5 × BGbst R + 0.3 × BGbst G + 0.2 × BGbst B
  • BG G = 0.3 × BGbst R + 0.4 × BGbst G + 0.3 × BGbst B
  • BG B = 0.2 × BGbst R + 0.3 × BGbst G + 0.5 × BGbst B
  • the vector direction of the second weighted gradient optimal value BG R of the to-be-processed pixel R component obtained in step S052 is taken as the reference direction.
  • the reference pixel value is scalar weighted to obtain the reference value Ref.
  • the weighting formula is as follows:
  • Ref R = r1 × cpt1 + r2 × cpt2 + ... + rN × cptN
  • r1, r2 ... rN are reference pixel weighting coefficients, which may be the same or different;
  • cpt1 ⁇ cptN are N available pixel component values in the reference direction of the R component;
  • FIG. 5 is a schematic diagram of a calculation principle of a reference direction in a predictive quantization coding method according to an embodiment of the present invention.
  • BG, BGbst R , and BG R are all vectors that use the texture reference component O as the origin of the vector.
  • assuming the vector direction of the second weighted gradient optimal value BG R is as shown in this figure, then when calculating the reference value Ref, the pixel to be processed CUR is used as the vector origin, the vector direction of BG R is used as the reference direction, and all available pixels in the reference direction, namely the K pixel component and the F pixel component, are used as reference pixels, giving Ref R = r1 × cpt K + r2 × cpt F.
  • cpt K is the pixel component value of the R component of the pixel K to be processed
  • cpt F is the pixel component value of the R component of the pixel F to be processed.
  • preferably, for any component, for a 45-degree reference the reference value is Ref R = 0.8 × cpt K + 0.2 × cpt F; for a 135-degree reference the reference value is 0.8 × G + 0.2 × A; for a 180-degree reference the reference value is 0.8 × K + 0.2 × J. The closer a pixel component is to the current pixel, the larger its configured coefficient.
  • the prediction residuals Dif G and Dif B of the G component and the B component can be obtained.
  • the prediction residuals of the R component, the G component, and the B component in the above embodiments may be obtained in parallel or serially, as required by the application scenario; this embodiment imposes no particular restriction.
  • the quantization unit size can be set to 8 ⁇ 1.
  • the quantization parameter QP is obtained, and all quantization units use the same quantization parameter.
  • the quantization parameter QP is 2.
  • QPRES i is the quantization residual of the i-th pixel of the quantization unit
  • PRES i is the prediction residual of the i-th pixel of the quantization unit
  • QP is the quantization parameter
  • the ">>" notation means that an expression a >> m shifts the integer a to the right by m binary bits; the low-order bits are shifted out and the high-order bits are filled with 0.
  • S092 Perform a first inverse quantization process and a first compensation process on the quantized residual in order to obtain the first inverse quantized residual and the first rate-distortion optimization;
  • the first inverse quantization process restores the quantized residual obtained in step S091, and the first compensation process compensates each position of the quantized residual according to preset compensation parameters, so that the restored inverse quantized residual is closer to the original prediction residual.
  • IQPRES_1 i is the first inverse quantization residual of the i-th pixel of the quantization unit
  • CP i is the compensation parameter of the first compensation process of the i-th pixel of the quantization unit.
  • the first compensation parameter satisfies:
  • the first residual loss is obtained according to the first inverse quantized residual and the predicted residual, satisfying:
  • LOSS_1 i is the first residual loss of the i-th pixel of the quantization unit.
  • RDO 1 is the first rate distortion optimization
  • pixnum is the length of the quantization unit
  • a1 and a2 are the weight parameters.
  • S093 Perform a second compensation process on the first inverse quantization residual to obtain a second inverse quantization residual and second rate-distortion optimization.
  • the fluctuation coefficient k satisfies:
  • LOSS_1 i is the first residual loss of the i-th pixel of the quantization unit
  • pixnum none0 is the number of non-zeros in the first residual loss LOSS_1
  • round represents the rounding operator.
  • S0932 Perform a second compensation process on the first inverse quantization residual according to the fluctuation coefficient and the fluctuation state to obtain a second inverse quantization residual;
  • the second compensation process performs a second compensation on each position of the first inverse quantization residual according to the fluctuation coefficient and the fluctuation state, so that the compensated inverse quantization residual is closer to the prediction residual.
  • the fixed fluctuation state can be set as:
  • CT = {1, 0, -1, 0, 1, 0, -1, 0}
  • the second inverse quantization residual meets:
  • IQPRES_2 i = IQPRES_1 i + k × c i
  • IQPRES_2 i is the second inverse quantization residual of the i-th pixel of the quantization unit
  • k ⁇ c i is the compensation coefficient of the second compensation process
  • the second residual loss is obtained according to the second inverse quantization residual and the prediction residual of the quantization unit, satisfying:
  • LOSS_2 i is the second residual loss of the i-th pixel of the quantization unit.
  • RDO 2 is the second rate distortion optimization.
  • if the first rate distortion optimization is less than the second rate distortion optimization, it means that the loss after inverse quantization is smaller, and the result better, without the second compensation process, so the compensation flag needs to be set to no compensation; otherwise, it means that performing the second compensation process gives a smaller loss and a better result, so the compensation flag needs to be set to compensation;
  • if the result of step S094 is compensation, the compensation flag and the quantization residual are written into the quantization residual code stream;
  • if the result of step S094 is no compensation, the compensation flag, the fluctuation coefficient, and the quantization residual are written into the quantization residual code stream.
  • FIG. 6 is a schematic structural diagram of a video compression system according to an embodiment of the present invention. It should be noted that each step in the above embodiment may be implemented by one or more processors 20 executing instructions stored in one or more memories 10.
  • the predictive quantization method and the video compression system of the present invention can effectively reduce the transmission bandwidth of the code stream, make full use of texture correlation for predictive coding, and adaptively perform quantization coding, further reducing the theoretical limit entropy and complexity.
  • the predictive quantization coding method of the present invention effectively reduces the transmission bandwidth of the code stream, makes full use of texture correlation for predictive coding, and adaptively performs quantization coding to further reduce the theoretical limit entropy and complexity.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a predictive quantization coding method and a video compression system. The method includes: dividing a pixel to be processed into several pixel components; obtaining a pixel component to be processed and its texture direction gradient; obtaining a reference pixel and a prediction residual of the pixel component to be processed; forming a prediction residual code stream; dividing the prediction residual code stream into multiple quantization units; and obtaining a quantization residual code stream. The present invention can reduce the transmission bandwidth and lower the theoretical limit entropy and complexity.

Description

Predictive quantization coding method and video compression system
Technical Field
The present invention belongs to the technical field of compression coding, and specifically relates to a predictive quantization coding method and a video compression system.
Background Art
Image data contains a great deal of redundancy, which is generally removed by compression coding: by reducing the correlation between sequences, the video content is represented with fewer bits, lowering the redundancy in the video content and thereby compressing the video or image.
In the compression coding process, allowing a certain amount of distortion in image coding is also an important reason why video can be compressed. In many applications the compressed image, once reconstructed, is not required to be identical to the original; a certain amount of distortion is allowed, because such distortion can exploit the characteristics of human vision to reduce the number of gray levels of the quantized signal without the change in the image being noticeable, thereby increasing the data compression ratio.
Predictive quantization coding is a common means of compression coding. Existing predictive quantization coding methods mainly have the following problems: the predicted pixel component is prone to misjudgment, which affects the prediction result; the correlation between pixel textures is not fully exploited; the theoretical limit entropy and the computational complexity cannot be further reduced; and the data compression ratio and distortion loss after predictive quantization compression cannot be further reduced.
Therefore, how to provide a predictive quantization coding method with a high data compression ratio and a small distortion loss is a hot research topic.
Summary of the Invention
Therefore, the present invention proposes a predictive quantization coding method and a video compression system, which can effectively reduce the code stream transmission bandwidth, make full use of texture correlation for predictive coding, adaptively perform quantization coding, and further reduce the theoretical limit entropy and complexity.
Specifically, a predictive quantization coding method proposed by an embodiment of the present invention includes the following steps:
(a) dividing a pixel to be processed into several pixel components;
(b) obtaining a pixel component to be processed from the several pixel components;
(c) obtaining a texture direction gradient of the pixel component to be processed;
(d) obtaining a reference pixel according to the texture direction gradient and the positional relationship between the pixel component to be processed and the remaining pixel components;
(e) obtaining a prediction residual of the pixel component to be processed according to the reference pixel;
(f) repeating steps (b) to (e), taking each of the several pixel components as the pixel component to be processed to obtain the corresponding prediction residual so as to form a prediction residual code stream;
(g) dividing the prediction residual code stream into multiple quantization units;
(h) obtaining a first rate-distortion optimization and a second rate-distortion optimization corresponding to the multiple quantization units, so as to obtain a quantization residual code stream.
In an embodiment of the present invention, dividing the pixel to be processed into multiple pixel components includes dividing the pixel to be processed into an R pixel component, a G pixel component, and a B pixel component.
In an embodiment of the present invention, step (d) includes the following sub-steps:
(d1) obtaining a first weighted gradient optimal value according to the texture direction gradient;
(d2) obtaining a second weighted gradient optimal value according to the first weighted gradient optimal value and the positional relationship between the pixel component to be processed and the remaining pixel components among the several pixel components;
(d3) obtaining the reference value according to the second weighted gradient optimal value.
In an embodiment of the present invention, sub-step (d2) includes the following sub-steps:
(d21) obtaining a positional relationship weight according to the positional relationship between the pixel component to be processed and the remaining pixel components;
(d22) obtaining the second weighted gradient optimal value according to the positional relationship weight and the first weighted gradient optimal value.
In an embodiment of the present invention, the positional relationship between the pixel component to be processed and the remaining pixel components includes: the closer a pixel component is to the pixel component to be processed, the larger its positional relationship weight, and vice versa.
In an embodiment of the present invention, step (h) includes:
(h1) quantizing the prediction residual of each quantization unit to obtain a quantized residual;
(h2) sequentially performing a first inverse quantization process and a first compensation process on the quantized residual to obtain a first inverse quantized residual and a first rate-distortion optimization;
(h3) performing a second compensation process on the first inverse quantized residual to obtain a second inverse quantized residual and a second rate-distortion optimization;
(h4) comparing the first rate-distortion optimization with the second rate-distortion optimization: if the first rate-distortion optimization is smaller than the second rate-distortion optimization, setting a compensation flag to compensation; otherwise, setting the compensation flag to no compensation;
(h5) writing the compensation flag and the quantized residual into the quantization residual code stream.
In an embodiment of the present invention, step (h2) includes:
(h21) sequentially performing the first inverse quantization process and the first compensation process on the quantized residual to obtain the first inverse quantized residual;
(h22) obtaining the first rate-distortion optimization according to the first inverse quantized residual, the prediction residual, and the quantized residual.
In an embodiment of the present invention, step (h3) includes:
(h31) obtaining a fluctuation coefficient according to the first residual loss;
(h32) performing the second compensation process on the first inverse quantized residual according to the fluctuation coefficient and a fluctuation state to obtain the second inverse quantized residual;
(h33) obtaining the second rate-distortion optimization according to the second inverse quantized residual, the prediction residual, and the quantized residual.
In an embodiment of the present invention, the fluctuation coefficient k satisfies:
(formula given as an image in the original publication)
where lossres_i is the value of the i-th position of the first residual loss, and pixnum_none0 is the number of non-zero values in the first residual loss.
Another embodiment of the present invention provides a video compression system, including a memory and at least one processor coupled to the memory, the at least one processor being configured to perform the predictive quantization coding method described in any of the above embodiments.
Compared with the prior art, the present invention has the following beneficial effects:
The predictive quantization coding method and video compression system of the present invention can effectively reduce the code stream transmission bandwidth, make full use of texture correlation for predictive coding, adaptively perform quantization coding, and further reduce the theoretical limit entropy and complexity.
The above description is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the content of the specification, and in order to make the above and other objects, features, and advantages of the present invention more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a predictive quantization coding method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the principle of a predictive quantization coding method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the pixel R component in a predictive quantization coding method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the calculation principle of the texture direction gradient of the pixel component to be processed in a predictive quantization coding method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the calculation principle of the reference direction in a predictive quantization coding method according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a video compression system according to an embodiment of the present invention.
Detailed Description
To further explain the technical means and effects adopted by the present invention to achieve the intended purpose, the specific implementation, methods, steps, and effects of the predictive quantization coding method proposed according to the embodiments of the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments.
The foregoing and other technical contents, features, and effects of the present invention will be clearly presented in the following detailed description of preferred embodiments with reference to the drawings. Through the description of the specific embodiments, the technical means and effects adopted by the present invention to achieve the intended purpose can be understood more deeply and specifically; however, the accompanying drawings are provided for reference and illustration only and are not intended to limit the present invention.
Referring to FIG. 1, FIG. 1 is a schematic diagram of a predictive quantization coding method according to an embodiment of the present invention. The method may include the following steps:
(a) dividing a pixel to be processed into several pixel components;
(b) obtaining a pixel component to be processed from the several pixel components;
(c) obtaining a texture direction gradient of the pixel component to be processed;
(d) obtaining a reference pixel according to the texture direction gradient and the positional relationship between the pixel component to be processed and the remaining pixel components;
(e) obtaining a prediction residual of the pixel component to be processed according to the reference pixel;
(f) repeating steps (b) to (e), taking each pixel component in turn as the pixel component to be processed to obtain the corresponding prediction residual so as to form a prediction residual code stream;
(g) dividing the prediction residual code stream into multiple quantization units;
(h) obtaining the first rate-distortion optimization and the second rate-distortion optimization corresponding to the quantization units, so as to obtain a quantization residual code stream.
It should be noted that each of the above steps may be implemented by one or more processors 20 executing instructions stored in one or more memories 10 (see FIG. 6).
The predictive quantization method of the present invention effectively reduces the code stream transmission bandwidth, makes full use of texture correlation for predictive coding, and adaptively performs quantization coding, further reducing the theoretical limit entropy and complexity.
The following describes specific embodiments of the predictive quantization coding method in detail.
[Embodiment 1]
Referring to FIG. 2, FIG. 2 is a schematic diagram of the principle of a predictive quantization coding method according to an embodiment of the present invention. On the basis of the foregoing embodiment, this embodiment includes all of its content and focuses on describing the predictive quantization coding method in detail. Specifically, the predictive quantization coding method includes the following steps:
S01: obtain any pixel of the image to be processed as the pixel to be processed. Specifically, the pixels may be acquired in turn as pixels to be processed in left-to-right order of the pixel matrix of the image to be processed.
S02: divide the pixel to be processed into its R pixel component, G pixel component, and B pixel component; correspondingly, any pixel in the pixel matrix of the image to be processed can be divided into the corresponding R pixel component, G pixel component, and B pixel component.
The pixel to be processed may alternatively be divided into four pixel components such as RGBY or RGBW; the way the components are split is not specifically limited here.
S03: obtain the pixel component to be processed;
Any pixel component of the pixel to be processed is taken as the pixel component to be processed.
S04: obtain the texture direction gradient of the pixel component to be processed;
The texture direction gradient is a vector value, comprising two features: the vector direction of the texture direction gradient and the magnitude of the texture direction gradient.
The texture direction gradient is determined from the pixel components around the pixel component to be processed; from these surrounding components, N texture direction gradients G1 to GN of the pixel component to be processed are determined.
Referring to FIG. 3 and FIG. 4, FIG. 3 is a schematic diagram of the R pixel component in a predictive quantization coding method according to an embodiment of the present invention, and FIG. 4 is a schematic diagram of the calculation principle of the texture direction gradient of the pixel component to be processed in a predictive quantization coding method according to an embodiment of the present invention.
The R component of the pixel to be processed is taken as the pixel component to be processed, CUR; here CUR is the R component of the pixel to be processed, and A to O are the R components of pixels that have already been predictively coded before the pixel to be processed.
First, the O pixel component immediately adjacent to the pixel component to be processed CUR is found and taken as the texture reference component.
In one implementation, the pixel components at a pixel distance of 0 around the O pixel component, namely the N, H, I, and J pixel components, are obtained. Vector lines are drawn from the O pixel component to the J, I, H, and N pixel components respectively; the direction of the vector line from the O pixel component to the J pixel component is taken as the vector direction of the first texture gradient, and the absolute value of the difference between the J pixel component and the O pixel component is taken as the magnitude of the first texture gradient, giving the first texture gradient (45°). Similarly, the second texture direction gradient (90°), the third texture direction gradient (135°), and the fourth texture direction gradient (180°) can be obtained from the I, H, and N pixel components respectively.
In another implementation, the pixel components at a pixel distance of 1 around the O pixel component are obtained, namely the M, G, A, B, C, D, E, and F pixel components. Similarly, the corresponding eight texture direction gradients can be obtained.
Similarly, the N texture direction gradients corresponding to the G component and the B component of the pixel to be processed can be obtained respectively.
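As a concrete illustration of step S04, the sketch below (Python, for illustration only) computes texture direction gradients from the causal neighbours of the texture reference component O. The mapping of neighbours to angles follows the 45°/90°/135°/180° example above; the function and variable names, and the sample component values, are assumptions made for this sketch rather than part of the patent.

```python
def texture_direction_gradients(o_value, neighbours):
    """Texture direction gradients around reference component O (step S04).

    `neighbours` maps the direction of the vector from O to a neighbouring
    component (in degrees) to that neighbour's reconstructed component value,
    e.g. {45: J, 90: I, 135: H, 180: N} for the pixel-distance-0 case.
    Each gradient is returned as (direction, magnitude), the magnitude being
    the absolute difference between the neighbour and O.
    """
    return [(angle, abs(value - o_value)) for angle, value in neighbours.items()]

# Assumed reconstructed R-component values around O, pixel distance 0:
grads_r = texture_direction_gradients(120, {45: 118, 90: 125, 135: 121, 180: 119})
```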
S05: obtain the reference pixel according to the texture direction gradient and the positional relationship between the pixel component to be processed and the remaining pixel components;
S051: obtain the first weighted gradient optimal value according to the texture direction gradient;
Taking the R component as an example, the N texture direction gradients G1 to GN of the texture reference component of the pixel component to be processed are vector-weighted to obtain the first weighted gradient BG, according to the following weighting formula:
BG = w1 × G1 + w2 × G2 + ... + wN × GN
where w1, w2 ... wN are weighting coefficients, which may be the same or different.
w1, w2 ... wN may be fixed values set in advance. Furthermore, when configuring the relative sizes of w1, w2 ... wN, empirical values may be considered. For example, if past experience indicates that the direction of the texture direction gradient G1 is more suitable for the actual prediction of this image, then w1 can be configured with a value better suited to that situation (for example, w1 can be configured to be very small) so as to increase the weight of the direction of the texture direction gradient G1. Of course, w1, w2 ... wN may also be adaptive, i.e., their relative sizes may be adjusted flexibly according to the actual results of earlier prediction processing; specifically, w1 + w2 + ... + wN = 1.
Multiple sets of values of w1, w2 ... wN are selected to obtain multiple first weighted gradients, and the first weighted gradient with the minimum vector magnitude among them is taken as the first weighted gradient optimal value BGbstR of the R component of the pixel to be processed.
Similarly, the first weighted gradient optimal values BGbstG and BGbstB of the G component and the B component of the pixel to be processed can be obtained respectively.
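A minimal sketch of step S051 follows, continuing the previous one. The candidate weight sets and the 2-D vector representation of the gradients are assumptions for illustration; the patent only requires that several weight sets be tried and the weighted gradient with the smallest vector magnitude be kept.

```python
import math

def weighted_gradient(gradients, weights):
    """Vector-weight (direction, magnitude) gradients: BG = w1*G1 + ... + wN*GN."""
    x = sum(w * m * math.cos(math.radians(a)) for (a, m), w in zip(gradients, weights))
    y = sum(w * m * math.sin(math.radians(a)) for (a, m), w in zip(gradients, weights))
    return math.hypot(x, y), math.degrees(math.atan2(y, x))  # (magnitude, direction)

def first_weighted_gradient_optimum(gradients, weight_sets):
    """BGbst: the candidate first weighted gradient with the smallest magnitude."""
    return min((weighted_gradient(gradients, w) for w in weight_sets),
               key=lambda bg: bg[0])

# Assumed candidate weight sets, each summing to 1 as in the description.
weight_sets = [(0.25, 0.25, 0.25, 0.25), (0.4, 0.2, 0.2, 0.2), (0.1, 0.3, 0.3, 0.3)]
bgbst_r = first_weighted_gradient_optimum(grads_r, weight_sets)
```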
S052: obtain the second weighted gradient optimal value according to the first weighted gradient optimal value and the positional relationship between the pixel component to be processed and the remaining pixel components;
The second weighted gradient optimal value of the R component of the pixel to be processed can be obtained by vector addition of the first weighted gradient optimal values of the R component, G component, and B component obtained in step S051, satisfying the following formula:
BG_R = t1_R × BGbst_R + t2_R × BGbst_G + t3_R × BGbst_B
where BG_R is the second weighted gradient optimal value of the R component of the pixel to be processed, and t1_R, t2_R, and t3_R are the weighting coefficients of the first weighted gradient optimal values of the R component, G component, and B component respectively, which may be the same or different.
Preferably, the weighting coefficient of the first weighted gradient optimal value under the R component of the pixel to be processed is the largest, the weighting coefficients under the other components decrease gradually as their distance from the R component of the pixel to be processed increases, and the sum of the weighting coefficients is 1, i.e., t1_R + t2_R + t3_R = 1.
The distance to the R component of the pixel to be processed is determined according to the order in which the pixel is divided into pixel components. For example, if the division order is R component, G component, B component, then the distance from the R component to the G component is smaller than the distance from the R component to the B component.
Similarly, the second weighted gradient optimal value BG_G of the G component of the pixel to be processed and the second weighted gradient optimal value BG_B of the B component of the pixel to be processed can be obtained.
Referring again to FIG. 2, the second weighted gradient optimal values BG_R, BG_G, and BG_B respectively satisfy:
BG_R = 0.5 × BGbst_R + 0.3 × BGbst_G + 0.2 × BGbst_B
BG_G = 0.3 × BGbst_R + 0.4 × BGbst_G + 0.3 × BGbst_B
BG_B = 0.2 × BGbst_R + 0.3 × BGbst_G + 0.5 × BGbst_B
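The sketch below applies step S052 with the example coefficient sets above. Representing each first weighted gradient optimal value as a 2-D vector, so that the combination is a plain vector addition, is an assumption of this sketch, as are the sample vector values.

```python
def second_weighted_gradient(bgbst, t):
    """BG = t1*BGbst_R + t2*BGbst_G + t3*BGbst_B as a 2-D vector sum.

    `bgbst` holds one 2-D vector per component and `t` the positional
    relationship weights (t1, t2, t3), which sum to 1.
    """
    return tuple(t[0] * r + t[1] * g + t[2] * b
                 for r, g, b in zip(bgbst["R"], bgbst["G"], bgbst["B"]))

# Example coefficients from the description (each row sums to 1):
bgbst = {"R": (1.0, 0.5), "G": (0.2, 0.1), "B": (0.4, -0.3)}   # assumed vectors
bg_r = second_weighted_gradient(bgbst, (0.5, 0.3, 0.2))
bg_g = second_weighted_gradient(bgbst, (0.3, 0.4, 0.3))
bg_b = second_weighted_gradient(bgbst, (0.2, 0.3, 0.5))
```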
S053: obtain the reference value according to the second weighted gradient optimal value.
The vector direction of the second weighted gradient optimal value BG_R of the R component of the pixel to be processed, obtained in step S052, is taken as the reference direction.
With the R component of the pixel to be processed as the vector origin, all available pixel components in the reference direction are the reference pixels. The reference pixel values are scalar-weighted to obtain the reference value Ref, according to the following weighting formula:
Ref_R = r1 × cpt1 + r2 × cpt2 + ... + rN × cptN
where r1, r2 ... rN are reference pixel weighting coefficients, which may be the same or different, and cpt1 to cptN are the N available pixel component values in the reference direction of the R component.
Referring to FIG. 5, FIG. 5 is a schematic diagram of the calculation principle of the reference direction in a predictive quantization coding method according to an embodiment of the present invention.
BG, BGbst_R, and BG_R are all vectors whose origin is the texture reference component O. Assuming the vector direction of the second weighted gradient optimal value BG_R is as shown in the figure, then when calculating the reference value Ref, the pixel to be processed CUR is taken as the vector origin, the vector direction of BG_R is taken as the reference direction, and all available pixels in the reference direction, namely the K pixel component and the F pixel component, are taken as reference pixels, giving by weighting:
Ref_R = r1 × cpt_K + r2 × cpt_F
where cpt_K is the R-component value of pixel K and cpt_F is the R-component value of pixel F.
Preferably, for any component, for a 45-degree reference the reference value is
Ref_R = 0.8 × cpt_K + 0.2 × cpt_F;
for a 135-degree reference the reference value is 0.8 × G + 0.2 × A, and for a 180-degree reference the reference value is 0.8 × K + 0.2 × J. The closer a pixel component is to the current pixel, the larger its configured coefficient.
S06: obtain the prediction residual of the pixel component to be processed according to the reference value;
The prediction residual Dif_R of the R component of the pixel to be processed is obtained by subtracting the reference value from the pixel value Cur_R of the R component of the pixel to be processed:
Dif_R = Cur_R - Ref_R
Similarly, the prediction residuals Dif_G and Dif_B of the G component and the B component can be obtained.
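Steps S053 and S06 reduce to a scalar weighting followed by a subtraction. The sketch below uses the 45-degree preset coefficients from the description; the component values are assumed for illustration.

```python
def reference_value(ref_components, coeffs):
    """Ref = r1*cpt1 + ... + rN*cptN over the available pixels in the reference direction."""
    return sum(r * c for r, c in zip(coeffs, ref_components))

cpt_k, cpt_f = 117, 123                               # assumed reconstructed R-component values
ref_r = reference_value((cpt_k, cpt_f), (0.8, 0.2))   # 45-degree reference preset
cur_r = 120                                           # assumed current R-component value
dif_r = cur_r - ref_r                                 # prediction residual Dif_R = Cur_R - Ref_R
```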
S07: repeat steps S03 to S06, taking each pixel component in turn as the pixel component to be processed to obtain the corresponding prediction residual, so as to form a prediction residual code stream;
In the above implementation, the prediction residuals of the R component, G component, and B component may be obtained in parallel or serially, as required by the application scenario; this embodiment imposes no particular restriction.
S08: divide the prediction residual code stream into multiple quantization units;
Preferably, the quantization unit size may be set to 8×1.
S09: obtain the first rate-distortion optimization and the second rate-distortion optimization corresponding to each quantization unit, so as to obtain the quantization residual code stream.
S091: quantize the prediction residual of each quantization unit to obtain the quantized residual;
First the quantization parameter QP is obtained; all quantization units use the same quantization parameter. Preferably, the quantization parameter QP is 2.
The quantization unit is quantized using the quantization parameter QP to obtain the first quantized residual, satisfying:
QPRES_i = [PRES_i >> QP]
where QPRES_i is the quantized residual of the i-th pixel of the quantization unit, PRES_i is the prediction residual of the i-th pixel of the quantization unit, and QP is the quantization parameter.
Here ">>" denotes a bit shift: an expression a >> m means that the integer a is shifted right by m binary bits; the low-order bits are shifted out and the high-order bits are filled with 0.
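A sketch of step S091 with the preferred settings (QP = 2, an 8×1 quantization unit). How negative residuals are to be shifted is not spelled out in the text; using Python's arithmetic right shift here is an assumption of this sketch.

```python
def quantize_unit(pres, qp=2):
    """QPRES_i = PRES_i >> QP for every prediction residual in the quantization unit."""
    return [p >> qp for p in pres]

pres_unit = [13, -7, 4, 0, 22, -1, 9, 3]     # assumed 8x1 unit of prediction residuals
qpres_unit = quantize_unit(pres_unit)        # -> [3, -2, 1, 0, 5, -1, 2, 0]
```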
S092: sequentially perform the first inverse quantization process and the first compensation process on the quantized residual to obtain the first inverse quantized residual and the first rate-distortion optimization;
S0921: first, the first inverse quantization process and the first compensation process are performed on the quantized residual in order, to obtain the first inverse quantized residual;
The first inverse quantization process restores the quantized residual obtained in step S091, and the first compensation process compensates each position of the quantized residual according to preset compensation parameters, so that the restored inverse quantized residual is closer to the original prediction residual.
This satisfies:
IQPRES_1_i = (QPRES_i << QP_i) + CP_i
where IQPRES_1_i is the first inverse quantized residual of the i-th pixel of the quantization unit, and CP_i is the compensation parameter of the first compensation process for the i-th pixel of the quantization unit.
Preferably, the first compensation parameter satisfies:
CP_i = (1 << QP_i) / 2
S0922: obtain the first rate-distortion optimization according to the first inverse quantized residual, the prediction residual, and the quantized residual.
The first residual loss is obtained from the first inverse quantized residual and the prediction residual, satisfying:
LOSS_1_i = IQPRES_1_i - PRES_i
where LOSS_1_i is the first residual loss of the i-th pixel of the quantization unit.
The first rate-distortion optimization is then calculated, satisfying:
(formula given as an image in the original publication)
where RDO_1 is the first rate-distortion optimization, pixnum is the length of the quantization unit, and a1 and a2 are weight parameters.
Preferably, a1 = a2 = 1.
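Continuing the quantization sketch above, the code below strings steps S0921 and S0922 together. The inverse quantization and first compensation follow the formulas in the text; the RDO_1 formula itself is published only as an image, so the form used here, a weighted sum of the total absolute residual loss (distortion) and the total absolute quantized residual (a crude rate proxy), is an assumption of this sketch, not the patent's formula.

```python
def first_pass(qpres, pres, qp=2, a1=1, a2=1):
    """First inverse quantization, first compensation, and an assumed RDO_1 (step S092)."""
    cp = (1 << qp) // 2                                  # CP_i = (1 << QP) / 2
    iqpres_1 = [(q << qp) + cp for q in qpres]           # IQPRES_1_i = (QPRES_i << QP) + CP_i
    loss_1 = [iq - p for iq, p in zip(iqpres_1, pres)]   # LOSS_1_i = IQPRES_1_i - PRES_i
    rdo_1 = a1 * sum(abs(l) for l in loss_1) + a2 * sum(abs(q) for q in qpres)  # assumed form
    return iqpres_1, loss_1, rdo_1

iqpres_1, loss_1, rdo_1 = first_pass(qpres_unit, pres_unit)
```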
S093: perform the second compensation process on the first inverse quantized residual to obtain the second inverse quantized residual and the second rate-distortion optimization.
S0931: obtain the fluctuation coefficient according to the first residual loss;
The fluctuation coefficient k satisfies:
(formula given as an image in the original publication)
where LOSS_1_i is the first residual loss of the i-th pixel of the quantization unit, pixnum_none0 is the number of non-zero values in the first residual loss LOSS_1, and round denotes the rounding operator.
S0932: perform the second compensation process on the first inverse quantized residual according to the fluctuation coefficient and the fluctuation state, to obtain the second inverse quantized residual;
The second compensation process compensates each position of the first inverse quantized residual a second time according to the fluctuation coefficient and the fluctuation state, so that the compensated inverse quantized residual is closer to the prediction residual.
The fluctuation state is obtained; the fluctuation state is a sequence stored at both the decoding end and the encoding end, satisfying:
CT = {c_0, c_1, ..., c_i, ..., c_m}, where c_i = 0, 1, or -1, and m is the quantization unit length.
Preferably, the fixed fluctuation state may be set as:
CT = {1, 0, -1, 0, 1, 0, -1, 0}
The second compensation process is applied to the first inverse quantized residual according to the fluctuation state and the fluctuation coefficient to calculate the second inverse quantized residual, which satisfies:
IQPRES_2_i = IQPRES_1_i + k × c_i
where IQPRES_2_i is the second inverse quantized residual of the i-th pixel of the quantization unit, and k × c_i is the compensation coefficient of the second compensation process.
S0933: obtain the second rate-distortion optimization according to the second inverse quantized residual, the prediction residual, and the quantized residual.
The second residual loss is obtained from the second inverse quantized residual and the prediction residual of the quantization unit, satisfying:
LOSS_2_i = IQPRES_2_i - PRES_i
where LOSS_2_i is the second residual loss of the i-th pixel of the quantization unit.
The second rate-distortion optimization is then calculated, satisfying:
(formula given as an image in the original publication)
where RDO_2 is the second rate-distortion optimization.
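A sketch of step S093, continuing the previous one. The published formula for k is an image; from the variable definitions above it is assumed here to be the rounded mean absolute first residual loss over its non-zero entries, and RDO_2 reuses the same assumed rate-distortion form as RDO_1.

```python
def second_pass(iqpres_1, loss_1, pres, qpres, a1=1, a2=1):
    """Fluctuation coefficient, second compensation, and an assumed RDO_2 (step S093)."""
    ct = [1, 0, -1, 0, 1, 0, -1, 0]                         # fixed fluctuation state CT
    pixnum_none0 = sum(1 for l in loss_1 if l != 0)
    k = round(sum(abs(l) for l in loss_1) / pixnum_none0) if pixnum_none0 else 0  # assumed
    iqpres_2 = [iq + k * c for iq, c in zip(iqpres_1, ct)]  # IQPRES_2_i = IQPRES_1_i + k*c_i
    loss_2 = [iq - p for iq, p in zip(iqpres_2, pres)]      # LOSS_2_i = IQPRES_2_i - PRES_i
    rdo_2 = a1 * sum(abs(l) for l in loss_2) + a2 * sum(abs(q) for q in qpres)
    return iqpres_2, k, rdo_2

iqpres_2, k, rdo_2 = second_pass(iqpres_1, loss_1, pres_unit, qpres_unit)
```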
S094: compare the first rate-distortion optimization with the second rate-distortion optimization; if the first rate-distortion optimization is smaller than the second rate-distortion optimization, set the compensation flag to no compensation; otherwise set the compensation flag to compensation;
If the first rate-distortion optimization is smaller than the second rate-distortion optimization, it means that the loss after inverse quantization is smaller, and the result better, without the second compensation process, so the compensation flag needs to be set to no compensation; otherwise, it means that performing the second compensation process gives a smaller loss and a better result, so the compensation flag needs to be set to compensation.
S095: write the compensation flag and the quantized residual into the quantization residual code stream.
If the result of step S094 is compensation, the compensation flag and the quantized residual are written into the quantization residual code stream;
If the result of step S094 is no compensation, the compensation flag, the fluctuation coefficient, and the quantized residual are written into the quantization residual code stream. Here it is also possible to write only the compensation flag and the quantized residual into the quantization residual code stream, and to have the decoding end calculate the fluctuation coefficient according to the formula given in the embodiment and then perform the second compensation process.
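The sketch below, continuing the same example, follows the literal wording of steps S094 and S095 above for the flag decision and the fields written for one quantization unit. The dictionary-based code-stream syntax is purely illustrative, since the patent does not define a serialization format.

```python
def write_quantization_unit(rdo_1, rdo_2, k, qpres):
    """Decide the compensation flag (S094) and emit one quantization unit (S095)."""
    no_compensation = rdo_1 < rdo_2
    unit = {"compensation_flag": 0 if no_compensation else 1, "qpres": qpres}
    if no_compensation:
        unit["fluctuation_coefficient"] = k   # per S095: written in the no-compensation case
    return unit

bitstream_unit = write_quantization_unit(rdo_1, rdo_2, k, qpres_unit)
```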
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of a video compression system according to an embodiment of the present invention. It should be noted that each step of the above embodiments may be implemented by one or more processors 20 executing instructions stored in one or more memories 10.
The predictive quantization method and the video compression system of the present invention can effectively reduce the code stream transmission bandwidth, make full use of texture correlation for predictive coding, adaptively perform quantization coding, and further reduce the theoretical limit entropy and complexity.
The above are only preferred embodiments of the present invention and do not limit the present invention in any form. Although the present invention has been disclosed above in terms of preferred embodiments, they are not intended to limit the present invention. Any person skilled in the art may, without departing from the scope of the technical solution of the present invention, use the technical content disclosed above to make some changes or modify it into equivalent embodiments of equivalent variation; any simple modification, equivalent change, or modification made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solution of the present invention, still falls within the scope of the technical solution of the present invention.
Industrial Applicability
The predictive quantization coding method of the present invention effectively reduces the code stream transmission bandwidth, makes full use of texture correlation for predictive coding, and adaptively performs quantization coding, further reducing the theoretical limit entropy and complexity.

Claims (10)

  1. A predictive quantization coding method, comprising the steps of:
    (a) dividing a pixel to be processed into several pixel components;
    (b) obtaining a pixel component to be processed from the several pixel components;
    (c) obtaining a texture direction gradient of the pixel component to be processed;
    (d) obtaining a reference pixel according to the texture direction gradient and the positional relationship between the pixel component to be processed and the remaining pixel components;
    (e) obtaining a prediction residual of the pixel component to be processed according to the reference pixel;
    (f) repeating steps (b) to (e), taking each of the several pixel components as the pixel component to be processed to obtain the corresponding prediction residual so as to form a prediction residual code stream;
    (g) dividing the prediction residual code stream into multiple quantization units;
    (h) obtaining a first rate-distortion optimization and a second rate-distortion optimization corresponding to the multiple quantization units, so as to obtain a quantization residual code stream.
  2. The predictive quantization coding method according to claim 1, wherein dividing the pixel to be processed into multiple pixel components comprises dividing the pixel to be processed into an R pixel component, a G pixel component, and a B pixel component.
  3. The predictive quantization coding method according to claim 1, wherein step (d) comprises the following sub-steps:
    (d1) obtaining a first weighted gradient optimal value according to the texture direction gradient;
    (d2) obtaining a second weighted gradient optimal value according to the first weighted gradient optimal value and the positional relationship between the pixel component to be processed and the remaining pixel components among the several pixel components;
    (d3) obtaining the reference value according to the second weighted gradient optimal value.
  4. The predictive quantization coding method according to claim 1, wherein sub-step (d2) comprises the following sub-steps:
    (d21) obtaining a positional relationship weight according to the positional relationship between the pixel component to be processed and the remaining pixel components among the several pixel components;
    (d22) obtaining the second weighted gradient optimal value according to the positional relationship weight and the first weighted gradient optimal value.
  5. The predictive quantization coding method according to claim 1, wherein the positional relationship between the pixel component to be processed and the remaining pixel components comprises: the closer a pixel component is to the pixel component to be processed, the larger its positional relationship weight, and vice versa.
  6. The predictive quantization coding method according to claim 1, wherein step (h) comprises the sub-steps of:
    (h1) quantizing the prediction residual of each quantization unit to obtain a quantized residual;
    (h2) sequentially performing a first inverse quantization process and a first compensation process on the quantized residual to obtain a first inverse quantized residual and a first rate-distortion optimization;
    (h3) performing a second compensation process on the first inverse quantized residual to obtain a second inverse quantized residual and a second rate-distortion optimization;
    (h4) comparing the first rate-distortion optimization with the second rate-distortion optimization: if the first rate-distortion optimization is smaller than the second rate-distortion optimization, setting a compensation flag to compensation; otherwise, setting the compensation flag to no compensation;
    (h5) writing the compensation flag and the quantized residual into the quantization residual code stream.
  7. The predictive quantization coding method according to claim 1, wherein sub-step (h2) comprises:
    (h21) sequentially performing the first inverse quantization process and the first compensation process on the quantized residual to obtain the first inverse quantized residual;
    (h22) obtaining the first rate-distortion optimization according to the first inverse quantized residual, the prediction residual, and the quantized residual.
  8. The predictive quantization coding method according to claim 7, wherein sub-step (h3) comprises:
    (h31) obtaining a fluctuation coefficient according to the first residual loss;
    (h32) performing a second compensation process on the first inverse quantized residual according to the fluctuation coefficient and a fluctuation state to obtain a second inverse quantized residual;
    (h33) obtaining the second rate-distortion optimization according to the second inverse quantized residual, the prediction residual, and the quantized residual.
  9. The predictive quantization coding method according to claim 8, wherein the fluctuation coefficient k satisfies:
    (formula given as an image in the original publication)
    where lossres_i is the value of the i-th position of the first residual loss, and pixnum_none0 is the number of non-zero values in the first residual loss.
  10. A video compression system, comprising: a memory and at least one processor 20 coupled to the memory, the at least one processor being configured to perform the predictive quantization coding method according to any one of claims 1 to 9.
PCT/CN2018/117216 2018-10-26 2018-11-23 Predictive quantization coding method and video compression system WO2020082485A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811260531.9A CN109361922B (zh) 2018-10-26 2018-10-26 Predictive quantization coding method
CN201811260531.9 2018-10-26

Publications (1)

Publication Number Publication Date
WO2020082485A1 true WO2020082485A1 (zh) 2020-04-30

Family

ID=65347110

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/117216 WO2020082485A1 (zh) 2018-10-26 2018-11-23 Predictive quantization coding method and video compression system

Country Status (3)

Country Link
US (1) US10645387B1 (zh)
CN (1) CN109361922B (zh)
WO (1) WO2020082485A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024007253A1 (zh) * 2022-07-07 2024-01-11 Oppo广东移动通信有限公司 Point cloud rate-distortion optimization method, attribute compression method and apparatus, and storage medium
CN116489373A (zh) * 2022-07-26 2023-07-25 杭州海康威视数字技术股份有限公司 Image decoding method, encoding method, and apparatus

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101160970A (zh) * 2005-04-18 2008-04-09 三星电子株式会社 Moving picture encoding and decoding method and apparatus
CN103517069A (zh) * 2013-09-25 2014-01-15 北京航空航天大学 Fast mode selection method for HEVC intra prediction based on texture analysis
CN108063947A (zh) * 2017-12-14 2018-05-22 西北工业大学 Lossless reference frame compression method based on pixel texture

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101590511B1 (ko) * 2009-01-23 2016-02-02 에스케이텔레콤 주식회사 Apparatus and method for motion vector encoding/decoding, and apparatus and method for image encoding/decoding using the same
TWI487381B (zh) * 2011-05-19 2015-06-01 Nat Univ Chung Cheng Predictive Coding Method for Multimedia Image Texture
WO2016043637A1 (en) * 2014-09-19 2016-03-24 Telefonaktiebolaget L M Ericsson (Publ) Methods, encoders and decoders for coding of video sequences
CN105208387B (zh) * 2015-10-16 2018-03-13 浙江工业大学 Fast selection method for HEVC intra prediction modes

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101160970A (zh) * 2005-04-18 2008-04-09 三星电子株式会社 Moving picture encoding and decoding method and apparatus
CN103517069A (zh) * 2013-09-25 2014-01-15 北京航空航天大学 Fast mode selection method for HEVC intra prediction based on texture analysis
CN108063947A (zh) * 2017-12-14 2018-05-22 西北工业大学 Lossless reference frame compression method based on pixel texture

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MATSUO, S. ET AL.: "Intra Angular Prediction with Weight Function and Modification Filter", 2013 PICTURE CODING SYMPOSIUM, 8 December 2013 (2013-12-08), pages 77 - 80, XP032567022 *
MATSUO, S. ET AL.: "Intra Prediction with Spatial Gradients and Multiple Reference Lines", 2009 PICTURE CODING SYMPOSIUM, 6 May 2009 (2009-05-06), pages 1 - 4, XP031491705 *

Also Published As

Publication number Publication date
CN109361922A (zh) 2019-02-19
CN109361922B (zh) 2020-10-30
US10645387B1 (en) 2020-05-05
US20200137392A1 (en) 2020-04-30

Similar Documents

Publication Publication Date Title
JP6858277B2 (ja) Directional intra-prediction coding
US10992939B2 (en) Directional intra-prediction coding
US9407915B2 (en) Lossless video coding with sub-frame level optimal quantization values
US10887365B2 (en) System and methods for bit rate control
US20140098855A1 (en) Lossless intra-prediction video coding
EP3571841B1 (en) Dc coefficient sign coding scheme
US20180302643A1 (en) Video coding with degradation of residuals
CN110753225A (zh) Video compression method and apparatus, and terminal device
US20210021821A1 (en) Video encoding and decoding method and apparatus
US20120033886A1 (en) Image processing systems employing image compression
WO2020082485A1 (zh) Predictive quantization coding method and video compression system
WO2023279961A1 (zh) Video image encoding and decoding method and apparatus
DE202016008191U1 (de) Adaptive overlapped block prediction in variable-block-size video coding
CN107079156B (zh) 用于交替块约束决策模式代码化的方法
WO2017213699A1 (en) Adaptive overlapped block prediction in variable block size video coding
US10455253B1 (en) Single direction long interpolation filter
JP7125559B2 (ja) Adaptive filtering of video streams for bit rate reduction
WO2012118569A1 (en) Visually optimized quantization
CN109255770B (zh) Image transform-domain downsampling method
CN110234011B (zh) Video compression method and system
CN114127746A (zh) Compression of convolutional neural networks
US8971407B2 (en) Detection of skip mode
US20230119747A1 (en) Adaptive wavelet denoising
US20220321879A1 (en) Processing image data
Dobrovolný et al. Asymmetric image compression for embedded devices based on singular value decomposition

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18937836

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18937836

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 18937836

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14/04/2022)