US20200137392A1 - Predictive quantization coding method and video compression system - Google Patents


Info

Publication number
US20200137392A1
Authority
US
United States
Prior art keywords
pixel
residual
quantization
processed
obtaining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US16/236,236
Other versions
US10645387B1 (en)
Inventor
Qingdong Yue
Wenfang Ran
Wen Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ip3 2023 Series 923 Of Allied Security Trust I
Original Assignee
Xian Keruisheng Innovative Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Keruisheng Innovative Technology Co Ltd filed Critical Xian Keruisheng Innovative Technology Co Ltd
Assigned to Xi'an Creation Keji Co., Ltd. reassignment Xi'an Creation Keji Co., Ltd. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LI, WEN, RAN, WENFANG, YUE, Qingdong
Publication of US20200137392A1 publication Critical patent/US20200137392A1/en
Application granted granted Critical
Publication of US10645387B1 publication Critical patent/US10645387B1/en
Assigned to IP3 2023, SERIES 923 OF ALLIED SECURITY TRUST I reassignment IP3 2023, SERIES 923 OF ALLIED SECURITY TRUST I ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Xi'an Creation Keji Co., Ltd.
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals (under H Electricity; H04 Electric communication technique; H04N Pictorial communication, e.g. television)
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/11 Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/119 Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/124 Quantisation
    • H04N19/126 Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • H04N19/147 Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/182 Adaptive coding characterised by the coding unit, the unit being a pixel
    • H04N19/186 Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/50 Predictive coding
    • H04N19/51 Motion estimation or motion compensation

Definitions

  • a predictive quantization coding method includes steps of: (a) dividing a pixel to be processed into a plurality of pixel components; (b) obtaining one pixel component to be processed from the plurality of pixel components; (c) obtaining texture direction gradients of the pixel component to be processed; (d) obtaining reference pixels according to the texture direction gradients and positional relationships between the pixel component to be processed and the remaining of the plurality of pixel components; (e) obtaining a prediction residual of the pixel component to be processed according to the reference pixels; (f) repeating steps (b) to (e), and taking each pixel component of the plurality of pixel components and obtaining the prediction residual corresponding thereto, to thereby form a prediction residual code stream; (g) dividing the prediction residual code stream into a plurality of quantization units; and (h) obtaining first rate distortion optimizations and second rate distortion optimizations corresponding to the plurality of quantization units to obtain a quantization residual code stream.
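As a rough illustration only, the claimed steps (a) through (h) can be sketched as the following control flow. Every helper passed in here is a hypothetical placeholder, not the patent's implementation:

```python
# Hypothetical control-flow sketch of steps (a)-(h); the helper
# functions passed in are placeholders, not the patent's actual code.
def predictive_quantization_encode(pixels, split, gradients,
                                   reference, residual, quantize):
    prediction_residual_stream = []
    for pixel in pixels:
        components = split(pixel)                      # step (a)
        for i, comp in enumerate(components):          # steps (b)-(f)
            g = gradients(comp)                        # step (c)
            refs = reference(g, i, components)         # step (d)
            prediction_residual_stream.append(residual(comp, refs))  # step (e)
    # steps (g)-(h): divide into quantization units and quantize
    return quantize(prediction_residual_stream)
```

With trivial stand-in callbacks, the function simply threads each component through prediction and quantization in order.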
  • the step (a) of dividing the pixel to be processed into a plurality of pixel components includes: dividing the pixel to be processed into a R pixel component, a G pixel component, and a B pixel component.
  • the step (d) includes following substeps of: (d1) obtaining a first weighting gradient optimal value according to the texture direction gradients; (d2) obtaining a second weighting gradient optimal value according to the first weighting gradient optimal value and the positional relationships between the pixel component to be processed and the remaining of the plurality of pixel components; and (d3) obtaining the reference pixels according to the second weighting gradient optimal value.
  • the substep (d2) includes: (d21) obtaining positional relationship weights according to the positional relationships between the pixel component to be processed and the remaining of the plurality of pixel components; and (d22) obtaining the second weighting gradient optimal value according to the positional relationship weights and the first weighting gradient optimal value.
  • the positional relationships between the pixel component to be processed and the remaining of the plurality of pixel components satisfy that: the closer a pixel component is to the pixel component to be processed, the greater its positional relationship weight is; the farther away it is, the smaller the weight is.
  • the step (h) includes substeps of: (h1) performing quantization processing on a prediction residual of each of the quantization units to obtain a quantization residual; (h2) sequentially performing a first inverse quantization processing and a first compensation processing on the quantization residual, to obtain a first inverse quantization residual and a first rate distortion optimization; (h3) performing a second compensation processing on the first inverse quantization residual, to obtain a second inverse quantization residual and a second rate distortion optimization; (h4) comparing the first rate distortion optimization and the second rate distortion optimization, setting a compensation flag bit to be no compensation if the first rate distortion optimization is less than the second rate distortion optimization, or otherwise setting the compensation flag bit to be compensation; and (h5) writing the compensation flag bit and the quantization residual into the quantization residual code stream.
  • the substep (h2) includes: (h21) sequentially performing the first inverse quantization processing and the first compensation processing on the quantization residual, to obtain the first inverse quantization residual; and (h22) obtaining the first rate distortion optimization according to the first inverse quantization residual, the prediction residual, and the quantization residual.
  • the substep (h3) includes: (h31) obtaining a fluctuation coefficient according to a first residual loss; (h32) performing the second compensation processing on the first inverse quantization residual according to the fluctuation coefficient and a fluctuation state, to obtain the second inverse quantization residual; and (h33) obtaining the second rate distortion optimization according to the second inverse quantization residual, the prediction residual, and the quantization residual.
  • the present invention further provides a video compression system including: a memory and at least one processor coupled to the memory.
  • the at least one processor is configured to perform the predictive quantization coding method according to any one of the above embodiments.
  • the predictive quantization coding method and the video compression system of the present invention can effectively reduce the code stream transmission bandwidth, fully utilize the texture correlation for predictive coding, adaptively perform quantization coding, and further reduce the theoretical limit entropy and complexity.
  • FIG. 1 is a schematic flowchart of a predictive quantization coding method according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a principle of a predictive quantization coding method according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of an R pixel component in a predictive quantization coding method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a texture direction gradient calculation principle for a pixel component to be processed in a predictive quantization coding method according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a reference direction calculation principle in a predictive quantization coding method according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a video compression system according to an embodiment of the present invention.
  • FIG. 1 is a schematic diagram of a predictive quantization coding method according to an embodiment of the present invention.
  • the method may include the steps of: (a) dividing a pixel to be processed into a plurality of pixel components; (b) obtaining one pixel component to be processed from the plurality of pixel components; (c) obtaining texture direction gradients of the pixel component to be processed; (d) obtaining reference pixels according to the texture direction gradients and positional relationships between the pixel component to be processed and the remaining of the plurality of pixel components; (e) obtaining a prediction residual of the pixel component to be processed according to the reference pixels; (f) repeating steps (b) to (e), and taking each pixel component of the plurality of pixel components and obtaining the prediction residual corresponding thereto, to thereby form a prediction residual code stream; (g) dividing the prediction residual code stream into a plurality of quantization units; and (h) obtaining first rate distortion optimizations and second rate distortion optimizations corresponding to the plurality of quantization units, to obtain a quantization residual code stream.
  • the predictive quantization coding method of the present invention can effectively reduce the code stream transmission bandwidth, fully utilize the texture correlation for predictive coding, adaptively perform quantization coding, and further reduce the theoretical limit entropy and complexity.
  • FIG. 2 is a schematic diagram of a principle of a predictive quantization coding method according to an embodiment of the present invention.
  • the present embodiment includes all content of embodiment 1, and describes the predictive quantization coding method in detail.
  • the predictive quantization coding method includes the following steps.
  • any pixel of an image to be processed is obtained as the pixel to be processed.
  • the pixel can be obtained in sequence as the pixel to be processed according to the sequence of a pixel matrix of the image to be processed from left to right.
  • the above pixel to be processed is divided into a R pixel component, a G pixel component and a B pixel component of the pixel to be processed.
  • any pixel in the pixel matrix of the image to be processed can be divided into the corresponding R pixel component, G pixel component and B pixel component.
  • the pixel to be processed may also be divided into four pixel components (RGBY or RGBW); the component dividing manner is not specifically limited.
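Step (a) can be illustrated for a packed 24-bit RGB value; the packed-integer layout here is an assumption for illustration, and the patent equally allows RGBY or RGBW splits:

```python
def split_rgb(pixel):
    """Split a pixel to be processed into its R, G and B pixel
    components, assuming a packed 24-bit 0xRRGGBB integer layout
    (an illustrative assumption, not mandated by the patent)."""
    r = (pixel >> 16) & 0xFF   # high byte: R component
    g = (pixel >> 8) & 0xFF    # middle byte: G component
    b = pixel & 0xFF           # low byte: B component
    return r, g, b
```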
  • Any pixel component of the pixel to be processed is used as the pixel component to be processed.
  • the texture direction gradients are vectors with two characteristics: a vector direction and a magnitude.
  • the texture direction gradients are determined by the pixel components around the pixel component to be processed; from these surrounding components, N texture direction gradients G1 to GN of the pixel component to be processed are determined.
  • FIG. 3 is a schematic diagram of a R pixel component in a predictive quantization coding method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a texture direction gradient calculation principle for a pixel component to be processed in a predictive quantization coding method according to an embodiment of the present invention.
  • the R component of the pixel to be processed is obtained as the component CUR of the pixel to be processed, wherein CUR is the R component of the pixel to be processed, and A to O are the R components of the pixels that have been predicted and coded before the pixel to be processed.
  • the O pixel component next to the component CUR of the pixel to be processed is found as the texture reference component.
  • One embodiment is that an N pixel component, an H pixel component, an I pixel component, and a J pixel component with a pixel distance of 0 around the O pixel component are obtained.
  • vector lines are formed from the O pixel component to the J, I, H and N pixel components respectively; the direction of the vector line from the O pixel component to the J pixel component is taken as the vector direction of the first texture direction gradient, and the absolute value of the difference between the J pixel component and the O pixel component is its magnitude, thereby obtaining the first texture direction gradient (45°).
  • a second texture direction gradient (90°), a third texture direction gradient (135°) and a fourth texture direction gradient (180°) can be obtained according to the I pixel component, the H pixel component, and the N pixel component respectively.
  • the pixel components having a pixel distance of 1 around the O pixel component are an M pixel component, a G pixel component, an A pixel component, a B pixel component, a C pixel component, a D pixel component, an E pixel component and an F pixel component respectively.
  • the corresponding eight texture direction gradients can also be obtained.
  • N texture direction gradients corresponding to the G component and the B component of the pixel to be processed respectively can be respectively obtained.
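The gradient computation described above can be sketched as follows. The direction labels and neighbour values are illustrative; only the rule (direction from O to a neighbour, magnitude as absolute difference) comes from the text:

```python
def texture_direction_gradients(o, neighbours):
    """Texture direction gradients around the reference component O:
    each gradient's direction is the vector from O to a neighbouring
    component (e.g. 45 deg to J, 90 deg to I, 135 deg to H, 180 deg
    to N) and its magnitude is the absolute difference of the two
    component values.  `neighbours` maps a direction in degrees to a
    neighbouring component value."""
    return {d: abs(v - o) for d, v in neighbours.items()}
```

The same call works for the eight distance-1 neighbours (M, G, A, B, C, D, E, F) by passing their directions and values.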
  • reference pixels are obtained according to the texture direction gradients and positional relationships between the pixel component to be processed and the remaining of the plurality of pixel components.
  • the R component is taken as an example: the N texture direction gradients G1 to GN of the texture reference component of the pixel component to be processed are subjected to vector weighting, to obtain a first weighting gradient BG of the N texture direction gradients after weighting.
  • the weighting formula is as follows:
  • BG = w1·G1 + w2·G2 + … + wN·GN
  • w1, w2, …, wN are weighting coefficients, which may be the same or different, and may be preset fixed values. Further, when the relative sizes of w1, w2, …, wN are configured, empirical values may be considered. For example, it may be known from past experience that the direction of the texture direction gradient G1 is more suitable for the actual situation in which the image is predicted; then a value more suitable for that situation may be configured for w1 (for example, w1 may be configured to be very small), to increase the weighting in the direction of the texture direction gradient G1.
  • the values of a plurality of groups w1, w2, . . . , wN are selected to obtain a plurality of first weighting gradients.
  • the first weighting gradient whose vector magnitude is the minimum among the plurality of first weighting gradients is taken as the first weighting gradient optimal value BGbstR of the R component of the pixel to be processed.
  • the first weighting gradient optimal values BGbstG, BGbstB of the G component and the B component respectively of the pixel to be processed can be obtained.
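Step (d1) can be sketched as below, simplified to scalar gradient magnitudes. The candidate weight sets are illustrative presets, not values from the patent:

```python
def first_weighting_gradient_optimal(grad_mags, weight_sets):
    """For each preset weight set w1..wN, form the weighted gradient
    BG = w1*G1 + ... + wN*GN, and keep the candidate with the
    smallest magnitude as the optimal value BGbst.  Gradients are
    treated as scalar magnitudes here for simplicity."""
    return min(sum(w * g for w, g in zip(ws, grad_mags))
               for ws in weight_sets)
```

Repeating the call per component yields BGbstR, BGbstG and BGbstB.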
  • a second weighting gradient optimal value is obtained according to the first weighting gradient optimal values and the positional relationships between the pixel component to be processed and the remaining pixel components.
  • BGR = t1R·BGbstR + t2R·BGbstG + t3R·BGbstB,
  • BGR is the second weighting gradient optimal value of the R component of the pixel to be processed.
  • t1R, t2R and t3R are respectively the weighting coefficients of the first weighting gradient optimal values of the R component, G component and B component, and may be the same or different.
  • the weighting coefficient of the first weighting gradient optimal value of the R component of the pixel to be processed is the largest, and the weighting coefficients of the first weighting gradient optimal values of the other components decrease gradually as their distances from the R component of the pixel to be processed increase.
  • the distance from the R component of the pixel to be processed is determined according to the dividing order of the pixel component of the pixel to be processed.
  • the dividing order of the pixel components of the pixel to be processed is the R component, the G component, and the B component, and then the distance from the R component to the G component is less than the distance between the R component and the B component.
  • the second weighting gradient optimal value BGG of the G component of the pixel to be processed and the second weighting gradient optimal value BGB of the B component of the pixel to be processed can be obtained.
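Step (d2) for the R component reduces to a weighted combination of the three first optimal values; the example weights below are illustrative, chosen only to satisfy the stated rule that the component being processed gets the largest weight and more distant components (in the dividing order R, G, B) get smaller ones:

```python
def second_weighting_gradient(bgbst_r, bgbst_g, bgbst_b, t1, t2, t3):
    """Compute BGR = t1R*BGbstR + t2R*BGbstG + t3R*BGbstB.  The
    weights t1 >= t2 >= t3 encode the positional relationships:
    closer components receive larger weights."""
    return t1 * bgbst_r + t2 * bgbst_g + t3 * bgbst_b
```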
  • the vector direction of the second weighting gradient optimal value BGR of the R component of the pixel to be processed, obtained in step S052, is taken as a reference direction.
  • RefR = r1·cpt1 + r2·cpt2 + … + rN·cptN,
  • r1, r2, …, rN are weighting coefficients of the reference pixels, and may be the same or different.
  • cpt1 to cptN are N available pixel component values in the reference direction of the R component.
  • FIG. 5 is a schematic diagram of a reference direction calculation principle in a predictive quantization coding method according to an embodiment of the present invention.
  • the reference value is RefR = 0.8·cptK + 0.2·cptF.
  • the reference value is 0.8·G + 0.2·A. If the reference direction is 180 degrees, then the reference value is 0.8·K + 0.2·J. The closer a pixel component value is to the current pixel, the larger the configured coefficient is.
  • the prediction residuals DifG and DifB of the G component and the B component can be obtained.
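Steps (d3) and (e) together can be sketched as the following helper. The 0.8/0.2 weights match the example above; treating the prediction residual as current component minus reference value follows the usual predictive-coding convention and is an assumption here:

```python
def predict_component(cur, components_on_direction, weights):
    """Reference value as a weighted sum of the available component
    values cpt1..cptN on the reference direction (closer components
    get larger weights), then the prediction residual Dif as the
    current component minus that reference value."""
    ref = sum(r * c for r, c in zip(weights, components_on_direction))
    return cur - ref
```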
  • steps (S03) to (S06) are repeated, and corresponding prediction residuals are obtained after taking each pixel component of the pixel to be processed as the pixel component to be processed, to thereby form a prediction residual code stream.
  • the obtaining process for the prediction residuals of the R component, the G component, and the B component in the above embodiment may be processed in parallel or in a serial manner, and may be set according to scenario needs, which is not excessively limited by the present embodiment.
  • the size of the quantization unit may be set to 8 ⁇ 1.
  • the quantization parameter QP is firstly obtained, and all quantization units use the same quantization parameter.
  • the quantization parameter QP is 2.
  • the symbol “>>” denotes a right shift: for an expression a>>m, the integer a is shifted to the right by m binary bits; the low bits are shifted out, and the vacated high bits are filled with 0.
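The quantization of one unit with QP = 2 can be sketched as below. Non-negative residual magnitudes are assumed, since the text only specifies zero-filled right shifts and does not describe sign handling:

```python
def quantize_unit(prediction_residuals, qp):
    """Quantize each prediction residual of a unit by a right shift
    of QP bits: a >> m discards the m low bits (for non-negative
    integers, the high bits are filled with 0)."""
    return [r >> qp for r in prediction_residuals]
```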
  • the first inverse quantization processing is a process of performing inverse restoration on the quantization residual obtained in step S091.
  • the first compensation processing is to compensate each bit of the quantization residual according to a preset compensation parameter, so as to cause the inverse quantization residual subjected to inverse restoration to be closer to the original prediction residual.
  • IQPRES_1i = QPRESi·QPi + CPi is satisfied, wherein IQPRES_1i is the first inverse quantization residual of the ith pixel of the quantization unit, and CPi is the compensation parameter of the first compensation processing of the ith pixel of the quantization unit.
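A sketch of the first inverse quantization plus first compensation follows. Because the forward quantization is a right shift by QP bits, the inverse is taken here as a left shift; reading the restoring operation as a shift rather than a literal product QPRESi·QPi is an assumption:

```python
def first_inverse_quantization(qpres, qp, cp):
    """Restore each quantized residual of a unit and add its preset
    compensation parameter: IQPRES_1_i = (QPRES_i << QP) + CP_i.
    The left shift as the inverse of the a >> m quantization is an
    assumption made for this sketch."""
    return [(q << qp) + c for q, c in zip(qpres, cp)]
```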
  • LOSS_1i is the first residual loss of the ith pixel of the quantization unit.
  • pixnum_none0 is the number of non-zero values in the first residual loss LOSS_1.
  • round represents a rounding operator.
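The exact formula for the fluctuation coefficient of step (h31) is not reproduced in this text; one plausible reading, consistent with the symbols defined above (LOSS_1, pixnum_none0, round), is the rounded mean of the non-zero first residual losses. This sketch is an assumption:

```python
def fluctuation_coefficient(loss_1):
    """Assumed form of step (h31): average the non-zero first
    residual losses LOSS_1_i over pixnum_none0 (the count of
    non-zero entries) and round the result."""
    nonzero = [abs(x) for x in loss_1 if x != 0]
    return round(sum(nonzero) / len(nonzero)) if nonzero else 0
```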
  • the second compensation processing is to perform second compensation on each bit of the first inverse quantization residual according to the fluctuation coefficient and the fluctuation state, so that the compensated inverse quantization residual is closer to the prediction residual.
  • the second compensation processing is performed on the first inverse quantization residual according to the fluctuation state and the fluctuation coefficient to calculate the second inverse quantization residual, which satisfies:
  • If the first rate distortion optimization is less than the second rate distortion optimization, it indicates that the loss after inverse quantization is smaller and the effect is better when the second compensation processing is not performed, and the compensation flag bit is set to no compensation. Otherwise, it indicates that the loss with the second compensation processing is smaller and the effect is better, and the compensation flag bit is set to compensation.
  • If the result of step S094 is no compensation, then the compensation flag bit and the quantization residual are written into the quantization residual code stream.
  • If the result of step S094 is compensation, then the compensation flag bit, the fluctuation coefficient, and the quantization residual are written into the quantization residual code stream.
  • Alternatively, only the compensation flag bit and the quantization residual may be written into the quantization residual code stream; the fluctuation coefficient is then calculated at the decoding end according to the calculation formula in the embodiment, and the second compensation processing is performed there.
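Step (h5) can be sketched as below. The concrete stream layout (flag first, then the optional fluctuation coefficient, then the residuals) is illustrative only:

```python
def write_unit(compensate, fluct_coeff, qpres):
    """Write one quantization unit into the quantization residual
    code stream: the compensation flag bit, the fluctuation
    coefficient when the flag says compensation (it may instead be
    recomputed at the decoder), and the quantized residuals."""
    stream = [1 if compensate else 0]   # compensation flag bit
    if compensate:
        stream.append(fluct_coeff)      # optional fluctuation coefficient
    stream.extend(qpres)                # quantized residuals of the unit
    return stream
```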
  • FIG. 6 is a schematic structural diagram of a video compression system according to an embodiment of the present invention. It should be noted that the above various steps may be implemented by executing instructions stored in one or more memories 10 through one or more processors 20 .
  • the predictive quantization method and the video compression system according to the present invention can effectively reduce the code stream transmission bandwidth, fully utilize the texture correlation for predictive coding, adaptively perform quantization coding, and further reduce the theoretical limit entropy and complexity.


Abstract

The present invention relates to a predictive quantization coding method and a video compression system. The method includes: dividing a pixel to be processed into a plurality of pixel components; obtaining one pixel component to be processed and texture direction gradients thereof; obtaining reference pixels and a prediction residual of the pixel component to be processed; forming a prediction residual code stream; dividing the prediction residual code stream into multiple quantization units; and obtaining a quantization residual code stream. The present invention can reduce the transmission bandwidth, and reduce the theoretical limit entropy and complexity.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention belongs to the technical field of compression coding technologies, and in particular to a predictive quantization coding method and a video compression system.
  • 2. Description of Related Art
  • There is great redundancy in image data. Decorrelation is generally performed by compression coding; that is, by exploiting the correlation between sequences, the video content is represented with a smaller number of bits, so as to reduce the redundancy in the video content and thereby achieve compression of videos or images.
  • In the compression coding process, the image coding is allowed to have certain distortions, which is also an important reason why videos can be compressed. In many applications, it is not required that the compressed image be completely identical to the original image after restoration. Certain distortions are allowed, since they can exploit human visual characteristics to reduce the gray level of a quantized signal without the image change being perceived, so as to increase the data compression ratio.
  • The predictive quantization coding method is a common compression coding method. Existing predictive quantization coding methods mainly have the following problems: the prediction pixel components are easily misjudged, which affects the prediction result; the correlation between pixel textures is not fully utilized; the theoretical limit entropy and computational complexity cannot be further reduced; and the data compression ratio cannot be further increased nor the distortion loss after predictive quantization and compression further reduced.
  • Therefore, how to provide a predictive quantization coding method with high data compression ratio and small distortion loss is a hot topic of research.
  • SUMMARY OF THE INVENTION
  • Therefore, the present invention provides a predictive quantization coding method and a video compression system, which can effectively reduce the code stream transmission bandwidth, fully utilize the texture correlation for predictive coding, adaptively perform quantization coding, and further reduce the theoretical limit entropy and complexity.
  • A predictive quantization coding method includes steps of: (a) dividing a pixel to be processed into a plurality of pixel components; (b) obtaining one pixel component to be processed from the plurality of pixel components; (c) obtaining texture direction gradients of the pixel component to be processed; (d) obtaining reference pixels according to the texture direction gradients and positional relationships between the pixel component to be processed and the remaining of the plurality of pixel components; (e) obtaining a prediction residual of the pixel component to be processed according to the reference pixels; (f) repeating steps (b) to (e), and taking each pixel component of the plurality of pixel components and obtaining the prediction residual corresponding thereto, to thereby form a prediction residual code stream; (g) dividing the prediction residual code stream into a plurality of quantization units; and (h) obtaining first rate distortion optimizations and second rate distortion optimizations corresponding to the plurality of quantization units to obtain a quantization residual code stream.
  • In an embodiment of the present invention, the step (a) of dividing the pixel to be processed into a plurality of pixel components includes: dividing the pixel to be processed into an R pixel component, a G pixel component, and a B pixel component.
  • In an embodiment of the present invention, the step (d) includes following substeps of: (d1) obtaining a first weighting gradient optimal value according to the texture direction gradients; (d2) obtaining a second weighting gradient optimal value according to the first weighting gradient optimal value and the positional relationships between the pixel component to be processed and the remaining of the plurality of pixel components; and (d3) obtaining the reference pixels according to the second weighting gradient optimal value.
  • In an embodiment of the present invention, the substep (d2) includes: (d21) obtaining positional relationship weights according to the positional relationships between the pixel component to be processed and the remaining of the plurality of pixel components; and (d22) obtaining the second weighting gradient optimal value according to the positional relationship weights and the first weighting gradient optimal value.
  • In an embodiment of the present invention, the positional relationships between the pixel component to be processed and the remaining of the plurality of pixel components satisfy that: the closer a pixel component is to the pixel component to be processed, the greater its positional relationship weight; the farther, the smaller.
  • In an embodiment of the present invention, the step (h) includes substeps of: (h1) performing quantization processing on a prediction residual of each of the quantization units to obtain a quantization residual; (h2) sequentially performing a first inverse quantization processing and a first compensation processing on the quantization residual, to obtain a first inverse quantization residual and a first rate distortion optimization; (h3) performing a second compensation processing on the first inverse quantization residual, to obtain a second inverse quantization residual and a second rate distortion optimization; (h4) comparing the first rate distortion optimization and the second rate distortion optimization, and setting a compensation flag bit to be no compensation if the first rate distortion optimization is less than the second rate distortion optimization, otherwise setting the compensation flag bit to be compensation; and (h5) writing the compensation flag bit and the quantization residual into the quantization residual code stream.
  • In an embodiment of the present invention, the substep (h2) includes: (h21) sequentially performing the first inverse quantization processing and the first compensation processing on the quantization residual, to obtain the first inverse quantization residual; and (h22) obtaining the first rate distortion optimization according to the first inverse quantization residual, the prediction residual, and the quantization residual.
  • In an embodiment of the present invention, the substep (h3) includes: (h31) obtaining a fluctuation coefficient according to a first residual loss; (h32) performing the second compensation processing on the first inverse quantization residual according to the fluctuation coefficient and a fluctuation state, to obtain the second inverse quantization residual; and (h33) obtaining the second rate distortion optimization according to the second inverse quantization residual, the prediction residual, and the quantization residual.
  • In an embodiment of the present invention, the fluctuation coefficient k satisfies: k = round(Σ abs(lossres_i) / pixnum_none0), where the sum is taken over the non-zero values of the first residual loss, lossres_i is the value of the ith bit of the first residual loss, and pixnum_none0 is the number of non-zeros in the first residual loss.
  • The present invention further provides a video compression system including: a memory and at least one processor coupled to the memory. The at least one processor is configured to perform the predictive quantization coding method according to any one of the above embodiments.
  • The predictive quantization coding method and the video compression system of the present invention can effectively reduce the code stream transmission bandwidth, fully utilize the texture correlation for predictive coding, adaptively perform quantization coding, and further reduce the theoretical limit entropy and complexity.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic flowchart of a predictive quantization coding method according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a principle of a predictive quantization coding method according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of an R pixel component in a predictive quantization coding method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a texture direction gradient calculation principle for a pixel component to be processed in a predictive quantization coding method according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a reference direction calculation principle in a predictive quantization coding method according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of a video compression system according to an embodiment of the present invention.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • In the following, with reference to the accompanying drawings of embodiments of the invention, technical solutions in the embodiments of the invention will be clearly and completely described. Apparently, the embodiments of the invention described below are only a part of the embodiments of the invention, not all of them. Based on the described embodiments of the invention, all other embodiments obtained by those of ordinary skill in the art without creative effort belong to the scope of protection of the invention.
  • Referring to FIG. 1, FIG. 1 is a schematic diagram of a predictive quantization coding method according to an embodiment of the present invention. The method may include the steps of: (a) dividing a pixel to be processed into a plurality of pixel components; (b) obtaining one pixel component to be processed from the plurality of pixel components; (c) obtaining texture direction gradients of the pixel component to be processed; (d) obtaining reference pixels according to the texture direction gradients and positional relationships between the pixel component to be processed and the remaining of the plurality of pixel components; (e) obtaining a prediction residual of the pixel component to be processed according to the reference pixels; (f) repeating steps (b) to (e), and taking each pixel component of the plurality of pixel components and obtaining the prediction residual corresponding thereto, to thereby form a prediction residual code stream; (g) dividing the prediction residual code stream into a plurality of quantization units; and (h) obtaining first rate distortion optimizations and second rate distortion optimizations corresponding to the quantization units to obtain a quantization residual code stream.
  • It should be noted that the above various steps may be implemented by executing instructions stored in one or more memories 10 through one or more processors 20 (referring to FIG. 6).
  • The predictive quantization coding method of the present invention can effectively reduce the code stream transmission bandwidth, fully utilize the texture correlation for predictive coding, adaptively perform quantization coding, and further reduce the theoretical limit entropy and complexity.
  • Specific embodiments of the predictive quantization coding method will be described in detail below.
  • Embodiment 1
  • Referring to FIG. 2, FIG. 2 is a schematic diagram of a principle of a predictive quantization coding method according to an embodiment of the present invention. The present embodiment is based on the above embodiment and includes all of its content; the predictive quantization coding method will be described in detail below. Specifically, the predictive quantization coding method includes the following steps.
  • S01: any pixel of an image to be processed is obtained as the pixel to be processed. Specifically, the pixel can be obtained in sequence as the pixel to be processed according to the sequence of a pixel matrix of the image to be processed from left to right.
  • S02: the above pixel to be processed is divided into an R pixel component, a G pixel component and a B pixel component of the pixel to be processed. Correspondingly, any pixel in the pixel matrix of the image to be processed can be divided into the corresponding R pixel component, G pixel component and B pixel component.
  • The pixel to be processed may also be divided into four pixel components R, G, B and Y, or four pixel components R, G, B and W; the component dividing manner is not specifically limited.
  • S03: the pixel component to be processed is obtained.
  • Any pixel component of the pixel to be processed is used as the pixel component to be processed.
  • S04: texture direction gradients of the pixel component to be processed are obtained.
  • The texture direction gradients are vectors including two characteristics: the vector direction of the texture direction gradient and the size of the texture direction gradient.
  • The texture direction gradients are determined by the pixel components around the pixel component to be processed; from these surrounding components, N texture direction gradients G1 to GN of the pixel component to be processed are determined.
  • Referring to FIG. 3 and FIG. 4, FIG. 3 is a schematic diagram of a R pixel component in a predictive quantization coding method according to an embodiment of the present invention. FIG. 4 is a schematic diagram of a texture direction gradient calculation principle for a pixel component to be processed in a predictive quantization coding method according to an embodiment of the present invention.
  • The R component of the pixel to be processed is obtained as the component CUR of the pixel to be processed, wherein CUR is the R component of the pixel to be processed, and A to O are the R components of the pixels that have been predicted and coded before the pixel to be processed.
  • Firstly, the O pixel component next to the component CUR of the pixel to be processed is found as the texture reference component.
  • In one embodiment, an N pixel component, an H pixel component, an I pixel component, and a J pixel component with a pixel distance of 0 around the O pixel component are obtained. Vector lines from the O pixel component to the J, I, H and N pixel components are formed respectively. The direction of the vector line from the O pixel component to the J pixel component is taken as the vector direction of the first texture direction gradient, and the absolute value of the difference between the J pixel component and the O pixel component is its size, thereby obtaining the first texture direction gradient (45°). Similarly, a second texture direction gradient (90°), a third texture direction gradient (135°) and a fourth texture direction gradient (180°) can be obtained from the I pixel component, the H pixel component, and the N pixel component respectively.
  • In another embodiment, the pixel components having a pixel distance of 1 around the O pixel component are an M pixel component, a G pixel component, an A pixel component, a B pixel component, a C pixel component, a D pixel component, an E pixel component and an F pixel component respectively; similarly, the corresponding eight texture direction gradients can also be obtained.
  • Similarly, N texture direction gradients corresponding to the G component and the B component of the pixel to be processed respectively can be respectively obtained.
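The gradient construction above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the neighbor component values and the 45°/90°/135°/180° direction assignments are hypothetical examples.

```python
def texture_gradients(o, neighbors):
    """Texture direction gradients of reference component O.

    neighbors: list of (component_value, direction_degrees) pairs, e.g. the
    J/I/H/N components at 45/90/135/180 degrees as described in the text.
    Returns (direction_degrees, magnitude) pairs: the vector direction is the
    line from O to the neighbor, and the size is abs(neighbor - O).
    """
    return [(direction, abs(value - o)) for value, direction in neighbors]

# Hypothetical R-component values around O = 100
grads = texture_gradients(100, [(104, 45), (98, 90), (95, 135), (101, 180)])
# grads == [(45, 4), (90, 2), (135, 5), (180, 1)]
```

Each returned pair carries both characteristics the text names: the vector direction and the gradient size.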
  • S05: reference pixels are obtained according to the texture direction gradients and positional relationships between the pixel component to be processed and the remaining of the plurality of pixel components.
  • S051: obtaining a first weighting gradient optimal value according to the texture direction gradients.
  • Taking the R component as an example, the N texture direction gradients G1 to GN of the texture reference component of the pixel component to be processed are subjected to vector weighting to obtain a first weighting gradient BG of the N weighted texture direction gradients. The weighting formula is as follows:

  • BG = w1×G1 + w2×G2 + … + wN×GN,
  • wherein w1, w2, …, wN are weighting coefficients, which may be the same or different, and may be preset fixed values. Further, when the relative sizes of w1, w2, …, wN are configured, empirical values may be considered. For example, if it is known from past experience that the direction of the texture direction gradient G1 better matches the actual situation in which the image is predicted, then a value more suitable for that situation may be configured for w1 (for example, w1 may be configured to be very small), to increase the weighting in the direction of the texture direction gradient G1. Of course, w1, w2, …, wN may also be adaptive; that is, their relative sizes can be flexibly adjusted according to the actual situation of the early prediction processing, with w1 + w2 + … + wN = 1.
  • The values of a plurality of groups w1, w2, . . . , wN are selected to obtain a plurality of first weighting gradients. The first weighting gradient corresponding to the minimum value of the vector sizes in the plurality of first weighting gradients is taken as the first weighting gradient optimal value BGbstR of the R component of the pixel to be processed.
  • Similarly, the first weighting gradient optimal values BGbstG, BGbstB of the G component and the B component respectively of the pixel to be processed can be obtained.
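Step S051 can be sketched as below, under two assumptions not fixed by the text: gradients are represented as (direction, magnitude) vectors, and the candidate weight groups are made-up examples (the text only requires each group to sum to 1).

```python
import math

def weighted_gradient(grads, weights):
    """Vector-weight (direction_deg, magnitude) gradients into one 2-D vector."""
    dx = sum(w * m * math.cos(math.radians(d)) for w, (d, m) in zip(weights, grads))
    dy = sum(w * m * math.sin(math.radians(d)) for w, (d, m) in zip(weights, grads))
    return (dx, dy)

def first_optimal_gradient(grads, weight_groups):
    """BGbst: among the candidate first weighting gradients, the one with
    the minimum vector size."""
    candidates = [weighted_gradient(grads, ws) for ws in weight_groups]
    return min(candidates, key=lambda v: math.hypot(v[0], v[1]))

# Hypothetical gradients and weight groups (each group sums to 1)
grads = [(45, 4), (90, 2), (135, 5), (180, 1)]
groups = [(0.25, 0.25, 0.25, 0.25), (0.4, 0.3, 0.2, 0.1), (0.1, 0.2, 0.3, 0.4)]
bgbst_r = first_optimal_gradient(grads, groups)
```

The same routine would be run per component to obtain BGbstR, BGbstG and BGbstB.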
  • S052: a second weighting gradient optimal value is obtained according to the first weighting gradient optimal values and the positional relationships between the pixel component to be processed and the remaining pixel components.
  • Vector addition is performed according to the first weighting gradient optimal values of the R component, the G component, and the B component obtained in step S051 to obtain the second weighting gradient optimal value of the R component of the pixel to be processed, and the following formula is satisfied,

  • BGR = t1R×BGbstR + t2R×BGbstG + t3R×BGbstB,
  • wherein BGR is the second weighting gradient optimal value of the R component of the pixel to be processed, and t1R, t2R and t3R are respectively the weighting coefficients of the first weighting gradient optimal values of the R component, G component and B component, which may be the same or different.
  • Preferably, the weighting coefficient of the first weighting gradient optimal value of the R component of the pixel to be processed is the largest, and the weighting coefficient values of the first weighting gradient optimal values of other components of which the distances from the R component of the pixel to be processed are gradually increased are gradually decreased. The sum of the weighting coefficient values of the first weighting gradient optimal values is 1, specifically t1R+t2R+t3R=1.
  • The distance from the R component of the pixel to be processed is determined according to the dividing order of the pixel component of the pixel to be processed. For example, the dividing order of the pixel components of the pixel to be processed is the R component, the G component, and the B component, and then the distance from the R component to the G component is less than the distance between the R component and the B component.
  • Similarly, the second weighting gradient optimal value BGG of the G component of the pixel to be processed and the second weighting gradient optimal value BGB of the B component of the pixel to be processed can be obtained.
  • Referring to FIG. 2 again, the second weighting gradient optimal values BGR, BGG and BGB respectively satisfy: BGR = 0.5×BGbstR + 0.3×BGbstG + 0.2×BGbstB, BGG = 0.3×BGbstR + 0.4×BGbstG + 0.3×BGbstB, BGB = 0.2×BGbstR + 0.3×BGbstG + 0.5×BGbstB.
  • S053: a reference value is obtained according to the second weighting gradient optimal values.
  • The vector direction of the second weighting gradient optimal value BGR of the R component of the pixel to be processed obtained in step S052 is obtained as a reference direction.
  • With the R component of the pixel to be processed as a vector origin point, all available pixel components in the reference direction are the reference pixels. The reference pixel values are subjected to scalar weighting to obtain the reference value Ref, and the weighting formula is as follows:

  • RefR = r1×cpt1 + r2×cpt2 + … + rN×cptN,
  • wherein r1, r2, …, rN are weighting coefficients of the reference pixels, which may be the same or different, and cpt1 to cptN are the N available pixel component values in the reference direction of the R component.
  • Referring to FIG. 5, FIG. 5 is a schematic diagram of a reference direction calculation principle in a predictive quantization coding method according to an embodiment of the present invention.
  • BG, BGbstR and BGR are vectors with the texture reference component O as the vector origin point. Assuming the vector direction of the second weighting gradient optimal value BGR is as shown in the figure, when the reference value Ref is calculated, the component CUR of the pixel to be processed is used as the vector origin point and the vector direction of BGR is taken as the reference direction. All available pixels in the reference direction, that is, the K pixel component and the F pixel component, are obtained as the reference pixels, and weighting calculation is performed to obtain: RefR = r1×cptK + r2×cptF, wherein cptK is the R component value of the pixel K and cptF is the R component value of the pixel F.
  • Preferably, for any component, if the reference direction is 45 degrees, then the reference value RefR = 0.8×cptK + 0.2×cptF.
  • If the reference direction is 135 degrees, then the reference value is 0.8×cptG + 0.2×cptA; if the reference direction is 180 degrees, then the reference value is 0.8×cptK + 0.2×cptJ. The closer a pixel component is to the current pixel, the larger its configured coefficient.
  • S06: a prediction residual of the pixel component to be processed is obtained according to the reference pixels.
  • By subtracting the reference value from the pixel value CurR of the R component of the pixel to be processed, the prediction residual DifR of the R component of the pixel to be processed can be obtained, and the calculation is as follows: DifR = CurR − RefR.
  • Similarly, the prediction residuals DifG and DifB of the G component and the B component can be obtained.
  • S07: steps (S03) to (S06) are repeated, and corresponding prediction residuals are obtained after taking each pixel component of the pixel to be processed as the pixel component to be processed, to thereby form a prediction residual code stream.
  • The obtaining process for the prediction residuals of the R component, the G component, and the B component in the above embodiment may be processed in parallel or in a serial manner, and may be set according to scenario needs, which is not excessively limited by the present embodiment.
  • S08: the prediction residual code stream is divided into a plurality of quantization units.
  • Preferably, the size of the quantization unit may be set to 8×1.
  • S09: a first rate distortion optimization and a second rate distortion optimization corresponding to each quantization unit are obtained to obtain a quantization residual code stream.
  • S091: a quantization processing is performed on a prediction residual of each of the quantization units to obtain a quantization residual.
  • The quantization parameter QP is firstly obtained, and all quantization units use the same quantization parameter. Preferably, the quantization parameter QP is 2.
  • The quantization unit is quantized by using the quantization parameter QP to obtain the quantization residual, which satisfies: QPRESi = PRESi >> QP, wherein QPRESi is the quantization residual of the ith pixel of the quantization unit, PRESi is the prediction residual of the ith pixel of the quantization unit, and QP is the quantization parameter.
  • The symbol “>>” denotes a right shift: an expression a >> m means that the integer a is shifted to the right by m binary bits; the low bits shifted out are discarded, and the high bits are filled with 0.
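Step S091 can be sketched as below. The text does not specify how negative prediction residuals are shifted; this sketch assumes the magnitude is shifted and the sign restored, and the unit values are hypothetical.

```python
QP = 2  # preferred quantization parameter from the text

def quantize(pres):
    """QPRES = PRES >> QP; sign handling for negative residuals is an
    assumption (shift the magnitude, restore the sign)."""
    sign = -1 if pres < 0 else 1
    return sign * (abs(pres) >> QP)

unit = [13, -7, 4, 0, 22, -1, 9, 3]   # one hypothetical 8x1 quantization unit
qpres = [quantize(p) for p in unit]   # [3, -1, 1, 0, 5, 0, 2, 0]
```

A right shift by QP = 2 divides each magnitude by 4 and discards the remainder, which is what makes the quantization lossy.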
  • S092: a first inverse quantization processing and a first compensation processing are sequentially performed on the quantization residual to obtain a first inverse quantization residual and a first rate distortion optimization.
  • S0921: firstly, the first inverse quantization processing and the first compensation processing are sequentially performed on the quantization residual to obtain the first inverse quantization residual.
  • The first inverse quantization processing is a process of performing inverse restoration on the quantization residual obtained in step S091. The first compensation processing compensates each bit of the quantization residual according to a preset compensation parameter, so as to cause the inverse quantization residual subjected to inverse restoration to be closer to the original prediction residual.
  • IQPRES_1i = (QPRESi << QP) + CPi is satisfied, wherein IQPRES_1i is the first inverse quantization residual of the ith pixel of the quantization unit, and CPi is the compensation parameter of the first compensation processing of the ith pixel of the quantization unit.
  • Preferably, the compensation parameter of the first compensation processing satisfies: CPi = (1 << QP)/2.
  • S0922: the first rate distortion optimization is obtained according to the first inverse quantization residual, the prediction residual, and the quantization residual.
  • A first residual loss is obtained based on the first inverse quantization residual and the prediction residual, LOSS_1i=IQPRES_1i-PRESi is satisfied, wherein LOSS_1i is the first residual loss of the ith pixel of the quantization unit.
  • The first rate distortion optimization is calculated to satisfy: RDO1 = a1 × Σ_{i=0..pixnum−1} abs(QPRESi) + a2 × Σ_{i=0..pixnum−1} LOSS_1i, wherein RDO1 is the first rate distortion optimization, pixnum is the length of the quantization unit, and a1 and a2 are the weighting parameters.
  • Preferably, a1=a2=1.
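Step S092 can be sketched end-to-end on a hypothetical unit. The sign handling for negative residuals mirrors the assumption made for quantization, and the sketch follows the text in summing LOSS_1i directly in RDO1 (rather than its absolute value).

```python
QP = 2
CP = (1 << QP) // 2                   # preferred compensation parameter = 2

def inverse_quantize_1(q):
    """IQPRES_1 = (QPRES << QP) + CP; negative-value handling is an
    assumption, mirroring the quantization sketch."""
    return (q << QP) + CP if q >= 0 else -(((-q) << QP) + CP)

pres  = [13, -7, 4, 0, 22, -1, 9, 3]  # hypothetical prediction residuals
qpres = [3, -1, 1, 0, 5, 0, 2, 0]     # their quantized values (QP = 2)
iq1   = [inverse_quantize_1(q) for q in qpres]
loss1 = [a - b for a, b in zip(iq1, pres)]          # first residual loss
rdo1  = sum(abs(q) for q in qpres) + sum(loss1)     # a1 = a2 = 1
```

The compensation term CP = 2 pushes each restored value toward the middle of its quantization bin, shrinking the average restoration loss.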
  • S093: a second compensation processing is performed on the first inverse quantization residual to obtain a second inverse quantization residual and a second rate distortion optimization.
  • S0931: a fluctuation coefficient is obtained according to the first residual loss, wherein the fluctuation coefficient k satisfies: k = round(Σ abs(LOSS_1i) / pixnum_none0), with the sum taken over the non-zero values of the first residual loss;
  • where LOSS_1i is the first residual loss of the ith pixel of the quantization unit, pixnum_none0 is the number of non-zeros in the first residual loss LOSS_1, and round represents a rounding operator.
  • S0932: the second compensation processing is performed on the first inverse quantization residual according to the fluctuation coefficient and a fluctuation state to obtain a second inverse quantization residual.
  • The second compensation processing is to perform second compensation on each bit of the first inverse quantization residual according to the fluctuation coefficient and the fluctuation state, so that the compensated inverse quantization residual is closer to the prediction residual.
  • The fluctuation state is obtained, wherein the fluctuation state is a sequence stored at both the decoding end and the coding end, satisfying: CT = {c0, c1, …, cm}, wherein ci = 0, 1 or −1, and m is the quantization unit length.
  • Preferably, the fixed fluctuation state can be set as: CT=(1,0,−1,0,1,0,−1,0).
  • The second compensation processing is performed on the first inverse quantization residual according to the fluctuation state and the fluctuation coefficient, to calculate the second inverse quantization residual, which satisfies: IQPRES_2i = IQPRES_1i + k×ci, wherein IQPRES_2i is the second inverse quantization residual of the ith pixel of the quantization unit, and k×ci is the compensation coefficient of the second compensation processing.
  • S0933: the second rate distortion optimization is obtained according to the second inverse quantization residual, the prediction residual, and the quantization residual.
  • A second residual loss is obtained according to the second inverse quantization residual and the prediction residual of the quantization unit, satisfying: LOSS_2i=IQPRES_2i-PRESi, wherein LOSS_2i is the second residual loss of the ith pixel of the quantization unit.
  • The second rate distortion optimization is calculated to satisfy: RDO2 = a1 × Σ_{i=0..pixnum−1} abs(QPRESi) + a2 × Σ_{i=0..pixnum−1} LOSS_2i, wherein RDO2 is the second rate distortion optimization.
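Steps S0931 to S0933 can be sketched on the same hypothetical unit values used above; the fixed fluctuation state is the one given in the text, and everything else is an illustrative assumption.

```python
pres  = [13, -7, 4, 0, 22, -1, 9, 3]          # hypothetical prediction residuals
qpres = [3, -1, 1, 0, 5, 0, 2, 0]             # quantization residuals (QP = 2)
iq1   = [14, -6, 6, 2, 22, 2, 10, 2]          # first inverse quantization residuals
loss1 = [a - b for a, b in zip(iq1, pres)]    # first residual loss

# S0931: k = rounded mean of the non-zero first residual loss magnitudes
nonzero = [abs(l) for l in loss1 if l != 0]
k = round(sum(nonzero) / len(nonzero))

# S0932: second compensation IQPRES_2 = IQPRES_1 + k*c with the fixed state
CT  = (1, 0, -1, 0, 1, 0, -1, 0)
iq2 = [v + k * c for v, c in zip(iq1, CT)]

# S0933: second residual loss and RDO2 (a1 = a2 = 1, as preferred)
loss2 = [a - b for a, b in zip(iq2, pres)]
rdo2  = sum(abs(q) for q in qpres) + sum(loss2)
```

Because CT is stored at both ends and k can be recomputed at the decoder, the second compensation needs no extra per-pixel side information.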
  • S094: the first rate distortion optimization and the second rate distortion optimization are compared; if the first rate distortion optimization is less than the second rate distortion optimization, a compensation flag bit is set to no compensation; otherwise, the compensation flag bit is set to compensation.
  • If the first rate distortion optimization is less than the second rate distortion optimization, it indicates that the loss after inverse quantization is smaller, and the effect better, without the second compensation processing, so the compensation flag bit is set to no compensation. Otherwise, it indicates that the loss with the second compensation processing is smaller and the effect better, so the compensation flag bit is set to compensation.
  • S095: the compensation flag bit and the quantization residual are written into the quantization residual code stream.
  • If the result of step S094 is no compensation, then the compensation flag bit and the quantization residual are written into the quantization residual code stream.
  • If the result of step S094 is compensation, then the compensation flag, the fluctuation coefficient, and the quantization residual are written into the quantization residual code stream. Herein, only the compensation flag bit and the quantization residual may be written into the quantization residual code stream, the fluctuation coefficient is calculated at the decoding end according to the calculation formula in the embodiment, and then the second compensation processing is performed.
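The flag decision and stream writing of steps S094 and S095 might be sketched as below; the dictionary layout is a simplification (a real code stream would pack and entropy-code these fields), and the input values are the hypothetical unit results used in the earlier sketches.

```python
def write_unit(rdo1, rdo2, qpres, k):
    """Compare the two rate distortion optimizations and emit one unit.

    flag 0 = no compensation (RDO1 < RDO2): write flag + quantization residuals.
    flag 1 = compensation: also write the fluctuation coefficient k; per the
    text this is optional, since the decoder can recompute k itself.
    """
    if rdo1 < rdo2:
        return {"flag": 0, "qpres": list(qpres)}
    return {"flag": 1, "k": k, "qpres": list(qpres)}

unit_stream = write_unit(21, 21, [3, -1, 1, 0, 5, 0, 2, 0], 2)
# RDO1 is not less than RDO2 here, so the compensation flag is set
```

Ties fall to the compensation branch, matching the "less than" test of step S094.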
  • Referring to FIG. 6, FIG. 6 is a schematic structural diagram of a video compression system according to an embodiment of the present invention. It should be noted that the above various steps may be implemented by executing instructions stored in one or more memories 10 through one or more processors 20.
  • The predictive quantization method and the video compression system according to the present invention can effectively reduce the code stream transmission bandwidth, fully utilize the texture correlation for predictive coding, adaptively perform quantization coding, and further reduce the theoretical limit entropy and complexity.
  • While the invention has been described in terms of what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention need not be limited to the disclosed embodiment. On the contrary, it is intended to cover various modifications and similar arrangements included within the spirit and scope of the appended claims, which are to be accorded the broadest interpretation so as to encompass all such modifications and similar structures.
  • INDUSTRIAL APPLICABILITY
  • The predictive quantization coding method according to the present invention can effectively reduce the code stream transmission bandwidth, fully utilize the texture correlation for predictive coding, adaptively perform quantization coding, and further reduce the theoretical limit entropy and complexity.

Claims (10)

1. A predictive quantization coding method, comprising steps of:
(a) dividing a pixel to be processed into a plurality of pixel components, wherein pixels of an image are sequentially taken as the pixel to be processed;
(b) obtaining one pixel component to be processed from the plurality of pixel components;
(c) obtaining texture direction gradients of the pixel component to be processed;
(d) obtaining reference pixels according to the texture direction gradients and positional relationships between the pixel component to be processed and a remaining of the plurality of pixel components;
(e) obtaining a prediction residual of the pixel component to be processed according to the reference pixels;
(f) repeating steps (b) to (e) to take each pixel component of the plurality of pixel components and obtain the prediction residual corresponding thereto, and forming a prediction residual code stream including the prediction residuals of the pixels to be processed of the image;
(g) dividing the prediction residual code stream into a plurality of quantization units each including a predetermined number of prediction residuals divided from the prediction residual code stream; and
(h) obtaining first rate distortion optimizations and second rate distortion optimizations corresponding to the plurality of quantization units to obtain a quantization residual code stream.
2. The predictive quantization coding method according to claim 1, wherein the step (a) of dividing the pixel to be processed into a plurality of pixel components comprises: dividing the pixel to be processed into an R pixel component, a G pixel component, and a B pixel component.
3. The predictive quantization coding method according to claim 1, wherein the step (d) comprises following sub-steps of:
(d1) obtaining a first weighting gradient optimal value according to the texture direction gradients;
(d2) obtaining a second weighting gradient optimal value according to the first weighting gradient optimal value and the positional relationships between the pixel component to be processed and the remaining of the plurality of pixel components; and
(d3) obtaining the reference pixels according to the second weighting gradient optimal value.
4. The predictive quantization coding method according to claim 3, wherein the sub-step (d2) comprises:
(d21) obtaining positional relationship weights according to the positional relationships between the pixel component to be processed and the remaining of the plurality of pixel components; and
(d22) obtaining the second weighting gradient optimal value according to the positional relationship weights and the first weighting gradient optimal value.
5. The predictive quantization coding method according to claim 1, wherein the positional relationships between the pixel component to be processed and the remaining of the plurality of pixel components satisfy that: the closer a pixel component is to the pixel component to be processed, the greater its positional relationship weight is, and the farther it is, the smaller the weight is.
6. The predictive quantization coding method according to claim 1, wherein the step (h) comprises sub-steps of:
(h1) performing quantization processing on a prediction residual of each of the quantization units to obtain a quantization residual;
(h2) sequentially performing a first inverse quantization processing and a first compensation processing on the quantization residual, to obtain a first inverse quantization residual and a first rate distortion optimization;
(h3) performing a second compensation processing on the first inverse quantization residual, to obtain a second inverse quantization residual and a second rate distortion optimization;
(h4) comparing the first rate distortion optimization and the second rate distortion optimization, setting a compensation flag bit to be no compensation if the first rate distortion optimization is less than the second rate distortion optimization; otherwise, setting the compensation flag bit to be compensation; and
(h5) writing the compensation flag bit and the quantization residual into the quantization residual code stream.
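Sub-steps (h1) to (h5) can be sketched as a single encode routine. This is a hedged Python sketch: the shift-based quantizer (qp >= 1 assumed), the half-step and fluctuation-coefficient compensation offsets, and the absolute-error cost are illustrative stand-ins for the operations the specification defines elsewhere:

```python
def encode_quantization_unit(pred_residuals, qp=1, k=1):
    # (h1) quantize each prediction residual (illustrative right-shift quantizer)
    quant = [r >> qp for r in pred_residuals]
    # (h2) first inverse quantization + first compensation, then first RDO cost
    inv1 = [(q << qp) + (1 << (qp - 1)) for q in quant]
    rdo1 = sum(abs(r - v) for r, v in zip(pred_residuals, inv1))
    # (h3) second compensation using the fluctuation coefficient k, then second RDO cost
    inv2 = [v + k for v in inv1]
    rdo2 = sum(abs(r - v) for r, v in zip(pred_residuals, inv2))
    # (h4) flag is "no compensation" (0) iff the first cost is lower
    flag = 0 if rdo1 < rdo2 else 1
    # (h5) write the flag and the quantization residuals
    return [flag] + quant
```

The point of the structure is that only the flag and the quantized residuals reach the stream; both candidate reconstructions are recomputable at the decoder, so the comparison costs no extra bits beyond the flag.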
7. The predictive quantization coding method according to claim 6, wherein the sub-step (h2) comprises:
(h21) sequentially performing the first inverse quantization processing and the first compensation processing on the quantization residual, to obtain the first inverse quantization residual; and
(h22) obtaining the first rate distortion optimization according to the first inverse quantization residual, the prediction residual, and the quantization residual.
8. The predictive quantization coding method according to claim 7, wherein the sub-step (h3) comprises:
(h31) obtaining a fluctuation coefficient according to a first residual loss;
(h32) performing the second compensation processing on the first inverse quantization residual according to the fluctuation coefficient and a fluctuation state, to obtain the second inverse quantization residual; and
(h33) obtaining the second rate distortion optimization according to the second inverse quantization residual, the prediction residual, and the quantization residual.
9. The predictive quantization coding method according to claim 8, wherein the fluctuation coefficient k satisfies:
k = round( Σ_{i=0}^{pixnum_none0} |lossres_i| / pixnum_none0 )
wherein lossres_i is the value of the i-th bit of the first residual loss, and pixnum_none0 is the number of non-zero values in the first residual loss.
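Read as the rounded mean absolute value of the non-zero entries of the first residual loss, the formula can be sketched in Python (fluctuation_coefficient is a hypothetical helper name, not from the specification):

```python
def fluctuation_coefficient(loss_res):
    """k = round( sum of |lossres_i| / pixnum_none0 ) over the first residual loss."""
    nonzero = [abs(v) for v in loss_res if v != 0]
    if not nonzero:                 # all losses zero: nothing to compensate
        return 0
    return round(sum(nonzero) / len(nonzero))

print(fluctuation_coefficient([2, 0, -3, 1, 0]))  # -> 2
```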
10. A video compression system, comprising: a memory and at least one processor coupled to the memory, wherein the at least one processor is configured to perform the predictive quantization coding method according to claim 1.
US16/236,236 2018-10-26 2018-12-28 Predictive quantization coding method and video compression system Active US10645387B1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201811260531 2018-10-26
CN201811260531.9A CN109361922B (en) 2018-10-26 2018-10-26 Predictive quantization coding method
CN201811260531.9 2018-10-26

Publications (2)

Publication Number Publication Date
US20200137392A1 true US20200137392A1 (en) 2020-04-30
US10645387B1 US10645387B1 (en) 2020-05-05

Family

ID=65347110


Country Status (3)

Country Link
US (1) US10645387B1 (en)
CN (1) CN109361922B (en)
WO (1) WO2020082485A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024007253A1 (en) * 2022-07-07 2024-01-11 Oppo广东移动通信有限公司 Point cloud rate-distortion optimization method, attribute compression method and apparatus, and storage medium
CN116489373A (en) * 2022-07-26 2023-07-25 杭州海康威视数字技术股份有限公司 Image decoding method, encoding method and device

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
KR101246915B1 (en) * 2005-04-18 2013-03-25 삼성전자주식회사 Method and apparatus for encoding or decoding moving picture
KR101590511B1 (en) * 2009-01-23 2016-02-02 에스케이텔레콤 주식회사 / / Motion Vector Coding Method and Apparatus
TWI487381B (en) * 2011-05-19 2015-06-01 Nat Univ Chung Cheng Predictive Coding Method for Multimedia Image Texture
CN103517069B (en) * 2013-09-25 2016-10-26 北京航空航天大学 A kind of HEVC intra-frame prediction quick mode selection method based on texture analysis
WO2016043637A1 (en) * 2014-09-19 2016-03-24 Telefonaktiebolaget L M Ericsson (Publ) Methods, encoders and decoders for coding of video sequences
CN105208387B (en) * 2015-10-16 2018-03-13 浙江工业大学 A kind of HEVC Adaptive Mode Selection Method for Intra-Prediction
CN108063947B (en) * 2017-12-14 2021-07-13 西北工业大学 Lossless reference frame compression method based on pixel texture



Legal Events

Date Code Title Description
AS Assignment

Owner name: XI'AN CREATION KEJI CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YUE, QINGDONG;RAN, WENFANG;LI, WEN;REEL/FRAME:047872/0396

Effective date: 20181214


AS Assignment

Owner name: IP3 2023, SERIES 923 OF ALLIED SECURITY TRUST I, NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:XI'AN CREATION KEJI CO., LTD.;REEL/FRAME:066121/0742

Effective date: 20231205
