WO2024114432A1 - Sample adaptive compensation method and device in video coding - Google Patents

Sample adaptive compensation method and device in video coding

Info

Publication number
WO2024114432A1
Authority
WO
WIPO (PCT)
Prior art keywords
compensation
sample
boundary
value
preset
Prior art date
Application number
PCT/CN2023/132735
Other languages
English (en)
French (fr)
Inventor
张凯明 (Zhang Kaiming)
Original Assignee
百果园技术(新加坡)有限公司 (Baiguoyuan Technology (Singapore) Co., Ltd.)
张凯明
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 百果园技术(新加坡)有限公司 (Baiguoyuan Technology (Singapore) Co., Ltd.) and 张凯明 (Zhang Kaiming)
Publication of WO2024114432A1 publication Critical patent/WO2024114432A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: using adaptive coding
    • H04N19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: the unit being an image region, e.g. an object
    • H04N19/176: the region being a block, e.g. a macroblock
    • H04N19/189: characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/42: characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/43: Hardware specially adapted for motion estimation or compensation
    • H04N19/50: using predictive coding
    • H04N19/503: involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation

Definitions

  • the embodiments of the present application relate to the technical field of video coding, and in particular to a sample adaptive compensation method and device in video coding.
  • when encoding a video, sample adaptive compensation technology is used to mitigate the ringing effect caused by quantization in video encoding, and it also brings an obvious compression-rate gain.
  • each pixel in the video image frame is traversed, classified, and compensated to correct distortion such as convex corners and concave corners along a certain edge direction. This is a pixel-level decision and compensation process, which incurs serious encoding overhead.
  • the increase in the computational complexity of the post-processing module to which sample adaptive compensation belongs leads to a decrease in the overall encoding speed. For example, with multi-threading, lag in the post-processing thread causes excessive waiting in the encoding thread, which significantly reduces the encoding speed and needs to be improved.
  • the embodiments of the present application provide a method and device for sample adaptive compensation in video coding, which solves the problem in the related art that the coding overhead is too large and the coding speed is significantly reduced when performing sample adaptive compensation, optimizes the sample adaptive compensation mechanism, and improves the overall coding efficiency and coding speed.
  • an embodiment of the present application provides a sample adaptive compensation method in video coding, the method comprising:
  • coding depth information and quantization information of a video coding unit are obtained, and a boundary information value is calculated according to the coding depth information and the quantization information; when the boundary information value is less than a preset threshold, the calculation of sample adaptive compensation is skipped; when the boundary information value is not less than the preset threshold, edge strength values of different preset directions are calculated by an edge direction estimation algorithm, and boundary compensation modes are screened based on the edge strength values;
  • the preset categories in the screened boundary compensation mode are traversed to perform sample point adaptive compensation.
  • an embodiment of the present application further provides a sample adaptive compensation device in video coding, comprising:
  • a boundary information determination module is configured to obtain coding depth information and quantization information of the video coding unit, and calculate a boundary information value according to the coding depth information and the quantization information;
  • a sample point compensation skipping module configured to skip the calculation of sample point adaptive compensation when the boundary information value is less than a preset threshold;
  • a mode screening module configured to, when the boundary information value is not less than the preset threshold, calculate edge strength values of different preset directions through an edge direction estimation algorithm, and screen the boundary compensation mode based on the edge strength values;
  • the sample point compensation module is configured to traverse the preset categories in the screened boundary compensation mode to perform sample point adaptive compensation.
  • an embodiment of the present application further provides a sample adaptive compensation device in video coding, the device comprising:
  • one or more processors;
  • a storage device for storing one or more programs;
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the sample adaptive compensation method in video encoding described in the embodiments of the present application.
  • an embodiment of the present application further provides a non-volatile storage medium storing computer executable instructions, wherein the computer executable instructions, when executed by a computer processor, are used to execute the sample adaptive compensation method in video encoding described in the embodiment of the present application.
  • an embodiment of the present application further provides a computer program product, the computer program product comprising a computer program, the computer program being stored in a computer-readable storage medium, at least one processor of a device reading and executing the computer program from the computer-readable storage medium, so that the device executes the sample adaptive compensation method in video encoding described in the embodiments of the present application.
  • a boundary information value is calculated based on the coding depth information and the quantization information.
  • when the boundary information value is less than a preset threshold, the calculation of sample adaptive compensation is skipped.
  • edge strength values of different preset directions are calculated by an edge direction estimation algorithm.
  • the boundary compensation mode is screened based on the edge strength value, and the preset categories in the screened boundary compensation mode are traversed to perform sample adaptive compensation.
  • in this processing mechanism of sample adaptive compensation, a small boundary information value means that the video coding unit is a flat block, so sample adaptive compensation is skipped for it.
  • otherwise, the edge strength values in each preset direction of the video coding unit are calculated, and the boundary compensation modes are screened based on the edge strength values.
  • before each preset category is traversed, some of the original boundary compensation modes are eliminated, thereby reducing the traversal of preset categories under those modes and improving the overall coding efficiency and coding speed.
  • FIG1 is a flow chart of a sample adaptive compensation method in video coding provided by an embodiment of the present application.
  • FIG2 is a flow chart of a method for calculating a boundary information value in sample adaptive compensation provided by an embodiment of the present application
  • FIG3 is a flow chart of a method for calculating edge strength values in sample adaptive compensation provided by an embodiment of the present application;
  • FIG4 is a schematic diagram of a convolution matrix used in calculating edge strength values provided in an embodiment of the present application.
  • FIG5 is a flow chart of a method for screening compensation modes based on edge strength values provided in an embodiment of the present application
  • FIG6 is a flowchart of another sample adaptive compensation method in video encoding provided by an embodiment of the present application.
  • FIG7 is a structural block diagram of a sample adaptive compensation device in video coding provided by an embodiment of the present application.
  • FIG8 is a structural schematic diagram of a sample adaptive compensation device in video coding provided by an embodiment of the present application.
  • "first", "second", etc. in the specification and claims of this application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present application can be implemented in an order other than those illustrated or described here; the objects distinguished by "first", "second", etc. are generally of one type, and the number of objects is not limited.
  • the first object can be one or more.
  • "and/or" in the specification and claims represents at least one of the connected objects, and the character "/" generally indicates that the associated objects are in an "or" relationship.
  • the sample adaptive compensation method in video encoding provided in the embodiment of the present application can be applied to scenarios where video encoding is required, such as sample adaptive compensation when encoding a video generated in a live broadcast scene, or sample adaptive compensation when encoding a video generated during video shooting.
  • the method can be executed by a computing device, such as a smart phone, a server, a laptop computer, a tablet computer, etc.
  • the sample adaptive compensation method of the present application can be used in the process of video encoding.
  • the sample adaptive compensation method can be integrated into an existing video encoding module based on the international High Efficiency Video Coding standard, or a separate sample adaptive processing module can be provided. After the video encoding module encodes a video frame, sample adaptive processing is performed, and the result is finally encoded into a binary code stream and uploaded to the network.
  • Step S101 Obtain coding depth information and quantization information of a video coding unit, and calculate a boundary information value according to the coding depth information and the quantization information.
  • the processing object for sample compensation is a video coding unit.
  • for example, taking the H.265 video coding standard as an example, a video coding unit can be a CTU (Coding Tree Unit) as defined in H.265.
  • a video coding unit of a fixed size may be divided into four sub-units of size 32*32, or into three sub-units of size 32*32 plus four sub-units of size 16*16 (one 32*32 sub-unit being further split).
  • the video coding unit may be a basic unit defined in other video compression standards.
  • the coding depth information is the best division method selected for the current video coding unit after recursive traversal of the video frame in the coding analysis stage;
  • the quantization information is the quantization parameter of the current video coding unit during actual coding, which reflects the compression of spatial details of the video frame during the coding process. For example, taking H.264 and H.265 video coding standards as examples, it can be a specific QP (Quantization Parameter) value.
  • the coding depth information and quantization information of the video coding unit can be obtained from the analysis stage of current mainstream encoders during the video coding process.
  • the boundary information value corresponding to the video coding unit is calculated based on the coding depth information and the quantization information.
  • the boundary information value reflects the flatness of the video coding unit.
  • FIG2 is a flow chart of a method for calculating the boundary information value in sample adaptive compensation provided in an embodiment of the present application, specifically including:
  • Step S1011 Calculate an average coding depth value according to the coding depth values of the sub-units in the video coding unit.
  • the average coding depth value calculated by the sub-units of the video coding unit is used as the coding depth value of the video coding unit.
  • An exemplary calculation formula is as follows (with N denoting the number of 4*4 blocks in the video coding unit): avgDepth = sum(cu_depth(x)) / N
  • avgDepth represents the average coding depth value of the video coding unit;
  • sum() is a summation function;
  • cu_depth(x) is recorded in units of 4*4 and corresponds to the division depth at the x-th small block;
  • the optional values are 0, 1, 2 and 3.
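The average-depth formula above can be sketched as follows; this is a minimal illustration that assumes cu_depth is available as a flat list of per-4*4-block depths (the 64*64 unit size is taken from the example later in the text).

```python
def avg_depth(cu_depth):
    """Average division depth over all 4x4 blocks of a coding unit."""
    return sum(cu_depth) / len(cu_depth)

# A 64*64 coding unit covers 16 * 16 = 256 blocks of size 4*4;
# here half of them sit at division depth 2 and half at depth 3.
depths = [2] * 128 + [3] * 128
print(avg_depth(depths))  # 2.5
```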
  • the coding depth value of the above sub-unit can be obtained from the encoding result in the encoder.
  • Step S1012 Calculate a boundary information value based on the average coding depth value, the quantization information, and the set weight.
  • the boundary information value is further calculated in combination with the quantization information and the set weight.
  • taking the quantization information as the quantization parameter QP as an example, the weight is denoted a and the boundary information value is denoted ES.
  • the calculation formula of the boundary information value ES can be:
  • the weight coefficient a can be assigned different constant values according to actual conditions, such as 20 or 25.
  • Step S102 When the boundary information value is less than a preset threshold, skip the calculation of sample adaptive compensation.
  • after the boundary information value is calculated, it is determined whether to skip the calculation of sample adaptive compensation based on the boundary information value.
  • when the boundary information value is less than a preset threshold, the calculation of sample adaptive compensation is skipped.
  • the smaller the boundary information value, the flatter the video coding unit, and the flatter the video coding unit, the lower the benefit of sample adaptive compensation. Therefore, the calculated boundary information value is compared with the preset threshold, and when it is less than the preset threshold, sample adaptive compensation is skipped.
  • the value range of the preset threshold can be 2-5, that is, according to different usage scenarios, a suitable value is selected within the value range.
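The skip decision of Steps S101 and S102 can be sketched as below. The text specifies the inputs (the average coding depth, QP, a weight a such as 20 or 25, and a threshold in the range 2 to 5) but the exact formula combining them into ES is not reproduced here, so the formula inside the function is a hypothetical placeholder.

```python
def should_skip_sao(avg_depth, qp, a=20, threshold=3):
    """Return True if sample adaptive compensation should be skipped.

    The combination of avg_depth, QP and the weight `a` into the
    boundary information value ES below is a hypothetical placeholder;
    the text names the inputs but omits the formula itself.
    """
    es = avg_depth * qp / a  # hypothetical ES formula
    return es < threshold

print(should_skip_sao(0.5, 30))  # True: flat block, compensation skipped
print(should_skip_sao(2.5, 40))  # False: enough edges, compensation runs
```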
  • Step S103 when the boundary information value is not less than the preset threshold, edge strength values of different preset directions are calculated by an edge direction estimation algorithm, and boundary compensation modes are screened based on the edge strength values.
  • the sample point adaptive compensation calculation includes two calculation processes: boundary compensation and sideband compensation.
  • boundary compensation is divided into 4 modes according to the direction of pixel traversal, exemplarily recorded as: horizontal mode (EO_0), vertical mode (EO_1), 135° direction mode (EO_2) and 45° direction mode (EO_3).
  • each boundary compensation mode includes multiple preset categories; for example, it is divided into 5 preset categories according to the relationship between the current pixel and its two adjacent pixels along the mode direction, exemplarily recorded as: obvious concave corner, ordinary concave corner, ordinary convex corner, obvious convex corner, and others.
  • the first two preset categories are concave-corner cases, where the current pixel value is lower than its two neighbours, and a positive compensation value is used; a negative compensation value is used for the two convex-corner categories to reduce the protrusion of the current pixel; no compensation is performed for the other category.
  • boundary compensation is thus divided into 20 specific traversal categories (4 boundary compensation modes, each containing 5 preset categories), and different compensation values are used for different categories.
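The five preset categories above can be sketched with the standard HEVC edge-offset classification rule (two sign comparisons of the current pixel against its neighbours along the mode direction); the category names follow the text.

```python
def sign(x):
    return (x > 0) - (x < 0)

def eo_category(left, cur, right):
    """Classify a pixel against its two neighbours along the mode direction."""
    s = sign(cur - left) + sign(cur - right)
    return {
        -2: "obvious concave",   # local minimum -> positive offset
        -1: "ordinary concave",  # one-sided valley -> positive offset
         1: "ordinary convex",   # one-sided peak -> negative offset
         2: "obvious convex",    # local maximum -> negative offset
    }.get(s, "other")            # flat or monotonic -> no offset

print(eo_category(5, 3, 5))  # obvious concave
print(eo_category(1, 2, 3))  # other (monotonic ramp)
```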
  • sideband compensation classifies pixels according to the size of their values. For example, when H.265 processes 8-bit video, it divides the pixel values from 0 to 255 into 32 sidebands, each containing 8 consecutive pixel values.
  • the encoder will count the pixel values in the specified sideband and calculate the mean, and then transmit the difference between the original pixel mean and the reconstructed pixel mean to the decoder.
  • the decoder will compensate the resolved difference to the specified sideband to narrow the gap between the reconstructed mean and the original mean.
  • Sideband compensation is divided into 32 categories, and different categories use different compensation values.
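The sideband classification above reduces to a bit shift for 8-bit video (256 values / 32 bands = 8 values per band); a short sketch:

```python
def band_index(pixel, bit_depth=8):
    """Sideband index for sideband compensation: 32 equal bands over the range.

    For 8-bit video each band spans 8 consecutive values, i.e. pixel >> 3;
    the bit_depth parameter generalises the same 32-band split.
    """
    return pixel >> (bit_depth - 5)

print(band_index(0))    # 0
print(band_index(255))  # 31
```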
  • the sample point adaptive compensation includes 52 different types of compensation methods, so in the traversal calculation process of the sample point adaptive compensation, the bottleneck effect caused in the encoder is very obvious.
  • when the boundary information value is not less than the preset threshold, the edge strength values of different preset directions are calculated by the edge direction estimation algorithm, and the boundary compensation modes are then screened in turn to eliminate unreasonable boundary compensation modes and reduce the amount of data computed by the encoder.
  • FIG3 is a flowchart of a method for calculating edge strength values in sample adaptive compensation provided by an embodiment of the present application, specifically including:
  • Step S1031 divide the video coding unit into multiple sub-units, and calculate the division depth value of each sub-unit respectively.
  • a method for calculating the division depth value of each sub-unit may be: obtaining the division depth values of the minimum coding units in each sub-unit, and determining the sum of those division depth values as the division depth value of the sub-unit.
  • the calculation formula is as follows: subCUDepth(i) = sum_j(cu_depth(i, j))
  • subCUDepth(i) represents the division depth value of the i-th sub-unit;
  • the value of i indexes the different sub-units;
  • j represents the index of the minimum coding unit within each sub-unit;
  • the size of the minimum coding unit can be 4*4.
  • the subCUDepth(i) can be stored in the form of a matrix.
  • a video coding unit of size 64*64 is taken as an example, which is equally divided into 4 sub-units of size 32*32.
  • a 2*2 matrix is used to store the division depth values of the upper left sub-unit, the upper right sub-unit, the lower left sub-unit and the lower right sub-unit, respectively.
  • Step S1032 Perform a convolution operation on the division depth values and preset matrices to obtain edge strength values in different preset directions, wherein the preset matrices comprise a plurality of matrices, each corresponding to one preset direction.
  • the edge strength values of the video encoding unit in different preset directions are calculated, wherein the preset directions may exemplarily be 4 in number, namely, the horizontal direction, vertical direction, 135° direction and 45° direction, which respectively correspond to the 4 boundary compensation modes in boundary compensation.
  • the above preset directions can be adaptively adjusted for different encoders and sample adaptive compensation algorithms: the number of preset directions can be increased or decreased, and the specific angle of each preset direction can also be adjusted.
  • a method for calculating the edge strength value can be: convolving the division depth value with the preset matrix to obtain edge strength values of different preset directions, wherein, taking the horizontal direction, vertical direction, 135° direction and 45° direction as examples, the corresponding preset matrices are shown in Figure 4, and Figure 4 is a schematic diagram of a convolution matrix used in edge strength value calculation provided in an embodiment of the present application.
  • the pixel change differences of the video encoding unit in the horizontal direction, vertical direction, 135° direction and 45° direction can be obtained respectively, which are represented by edge strength values to be used as the basis for subsequent boundary compensation mode screening.
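The per-direction convolution of Step S1032 can be sketched as below: the 2*2 matrix of sub-unit division depths is reduced with one kernel per direction. The kernel values here are illustrative assumptions, since the actual matrices are defined in Figure 4 of the application.

```python
# Illustrative direction kernels (assumptions; Figure 4 defines the real ones).
KERNELS = {
    "horizontal": [[1, 1], [-1, -1]],   # EO_0
    "vertical":   [[1, -1], [1, -1]],   # EO_1
    "diag_135":   [[1, 0], [0, -1]],    # EO_2
    "diag_45":    [[0, 1], [-1, 0]],    # EO_3
}

def edge_strengths(sub_cu_depth):
    """sub_cu_depth: 2*2 matrix of sub-unit division depth values."""
    return {
        name: abs(sum(k[r][c] * sub_cu_depth[r][c]
                      for r in range(2) for c in range(2)))
        for name, k in KERNELS.items()
    }

# Top row much deeper than the bottom row: a strong horizontal edge,
# no vertical response.
print(edge_strengths([[6, 6], [2, 2]]))
```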
  • the edge strength value may also be calculated by using an edge extraction operator such as a Marr-Hildreth operator.
  • FIG5 is a flow chart of a method for screening the compensation mode based on the edge strength value provided in an embodiment of the present application, specifically including:
  • Step S1033 sorting the calculated edge strength values in different preset directions.
  • Step S1034 When the edge strength value satisfies the elimination condition, the boundary compensation mode corresponding to the minimum edge strength value is eliminated.
  • each edge strength value corresponds to a preset direction, that is, to a boundary compensation mode, and reflects the pixel variation in that direction.
  • the edge strength values are sorted, and the boundary compensation mode corresponding to the smallest edge strength value is eliminated.
  • taking the boundary compensation modes including the horizontal mode, vertical mode, 135° direction mode and 45° direction mode as an example, assuming that the edge strength value corresponding to the 45° direction mode is the smallest, that mode is eliminated and the remaining three boundary compensation modes are retained.
  • the number of eliminations can be set according to actual needs.
  • before a boundary compensation mode is eliminated, it is determined whether the edge strength values satisfy the elimination condition.
  • the elimination condition may be that the maximum edge strength value is not 0, thereby avoiding the introduction of large errors caused by eliminating the boundary compensation mode.
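Steps S1033 and S1034, together with the elimination condition, can be sketched as:

```python
def screen_eo_modes(strengths, keep=3):
    """Drop the weakest edge-offset direction(s).

    If the maximum edge strength is 0, the elimination condition is not
    met and every mode is kept, to avoid introducing large errors.
    `strengths` maps mode name -> edge strength value.
    """
    if max(strengths.values()) == 0:
        return list(strengths)
    ranked = sorted(strengths, key=strengths.get, reverse=True)
    return ranked[:keep]

print(screen_eo_modes({"h": 8, "v": 0, "d135": 4, "d45": 4}))
```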
  • Step S104 traverse the preset categories in the screened boundary compensation mode to perform sample point adaptive compensation.
  • the preset categories in the boundary compensation mode obtained after screening are traversed to perform sample adaptive compensation.
  • assuming the boundary compensation modes obtained after screening include the horizontal mode, the vertical mode, and the 135° direction mode, the five categories set in each of these modes (obvious concave corner, ordinary concave corner, ordinary convex corner, obvious convex corner, and others) are traversed respectively to complete the boundary compensation of the video encoding unit.
  • in summary, the boundary information value is calculated based on the coding depth information and the quantization information; when the boundary information value is less than the preset threshold, the calculation of sample adaptive compensation is skipped.
  • when the boundary information value is not less than the preset threshold, the edge strength values of different preset directions are calculated by the edge direction estimation algorithm, the boundary compensation modes are screened based on the edge strength values, and the preset categories in the screened boundary compensation modes are traversed to perform sample adaptive compensation. In this processing mechanism, a small boundary information value means that the video coding unit is a flat block, so sample adaptive compensation is skipped for it.
  • otherwise, the edge strength values in each preset direction of the video coding unit are calculated and the boundary compensation modes are screened based on them; some of the original boundary compensation modes are eliminated before each preset category is traversed, thereby reducing the traversal of preset categories under those modes and improving the overall coding efficiency and coding speed.
  • FIG6 is a flow chart of another sample adaptive compensation method in video encoding provided by an embodiment of the present application, which provides a method for traversing preset categories in a screened boundary compensation mode to perform sample adaptive compensation, specifically including:
  • Step S201 Obtain coding depth information and quantization information of a video coding unit, and calculate a boundary information value according to the coding depth information and the quantization information.
  • Step S202 When the boundary information value is less than a preset threshold, skip the calculation of sample adaptive compensation.
  • Step S203 when the boundary information value is not less than the preset threshold, edge strength values of different preset directions are calculated by an edge direction estimation algorithm, and boundary compensation modes are screened based on the edge strength values.
  • Step S204 when traversing the preset categories in the screened boundary compensation mode, perform brightness sample compensation of the video encoding unit in sequence, and determine the best brightness sample compensation mode.
  • taking as an example the boundary compensation modes including four modes in total (horizontal mode, vertical mode, 135° direction mode and 45° direction mode) and the preset categories including obvious concave corner, ordinary concave corner, ordinary convex corner, obvious convex corner and others, the five categories of each boundary compensation mode obtained after screening are traversed respectively, and the brightness sample compensation of the video encoding unit is performed in turn during the traversal, wherein the brightness sample compensation can follow the encoder settings or an existing brightness sample compensation method, which is not limited here.
  • the process of performing brightness sample compensation also includes the step of determining the best brightness sample compensation mode.
  • the number of pixels belonging to each boundary compensation mode in the video coding unit and the mean value to be compensated are calculated in sequence, compensation is performed, the rate-distortion cost is calculated, and the best brightness sample compensation mode is selected by cost comparison.
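The best-mode decision described above is a cost comparison; a minimal sketch follows. The cost values are made up for illustration, and a real encoder's rate-distortion cost would weigh the distortion reduction against the bits needed to signal the offsets.

```python
def best_luma_mode(candidate_modes, rd_cost):
    """Pick the luma compensation mode with the lowest rate-distortion cost.

    `rd_cost` is a callable mapping a mode name to its cost.
    """
    return min(candidate_modes, key=rd_cost)

# Hypothetical costs for the modes that survived screening.
costs = {"horizontal": 120.0, "vertical": 95.5, "diag_135": 101.2}
print(best_luma_mode(costs, costs.get))  # vertical
```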
  • Step S205 Filter the chroma sample compensation mode based on the optimal luminance sample compensation mode, traverse the preset categories in the filtered chroma sample compensation mode, and perform chroma sample compensation of the video encoding unit in sequence.
  • the brightness and chrominance components have similarities on the edges.
  • this similarity is utilized: the candidate modes to be traversed in chrominance sample compensation are constrained by the best brightness sample compensation mode determined in brightness sample compensation. For example, assuming the best brightness sample compensation mode is the 135° direction mode, then during chrominance sample compensation only the chroma sample compensation corresponding to the 135° direction mode and that corresponding to the inherent modes set in chroma sample compensation may be performed.
  • the best luminance sample compensation mode determined in the process of luminance sample compensation is used to screen the chroma sample compensation mode, which further reduces the number of mode traversals in the process of sample compensation and improves the encoding speed.
  • the determination process of the chroma sample compensation mode also includes: screening the chroma sample compensation mode according to the video scene corresponding to the video encoding unit.
  • the proportions of the chroma sample compensation modes differ between video scenes. For example, in a live broadcast scene the proportions of the horizontal mode and vertical mode among the chroma sample compensation modes are low, so these two modes can be eliminated when screening the chroma sample compensation modes; in a building scene the proportions of the 135° direction mode and 45° direction mode are low, so those two modes can be eliminated instead. Screening the modes by video scene can thus effectively reduce the number of traversals and improve the overall efficiency of video encoding while keeping the sample compensation error accurate.
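The two screening constraints on chroma modes (the best luma mode plus scene statistics) can be sketched as below; the scene table mirrors the two examples in the text, and the `band_offset` inherent mode is an assumed placeholder name.

```python
# Scene -> modes rarely useful in that scene (from the two examples above).
SCENE_EXCLUDES = {
    "live_broadcast": {"horizontal", "vertical"},
    "building":       {"diag_135", "diag_45"},
}

def screen_chroma_modes(best_luma, scene, inherent=("band_offset",)):
    """Chroma candidates: best luma mode plus inherent chroma modes,
    minus the modes that are rare in the given video scene."""
    candidates = {best_luma, *inherent}
    return candidates - SCENE_EXCLUDES.get(scene, set())

print(screen_chroma_modes("diag_135", "live_broadcast"))
```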
  • FIG7 is a structural block diagram of a sample adaptive compensation device in video coding provided by an embodiment of the present application.
  • the device is used to execute the sample adaptive compensation method in video coding provided by the above embodiment, and has functional modules and beneficial effects corresponding to the execution method.
  • the device specifically includes: a boundary information determination module 101, a sample compensation skip module 102, a mode screening module 103 and a sample compensation module 104, wherein:
  • the boundary information determination module 101 is configured to obtain coding depth information and quantization information of the video coding unit, and calculate the boundary information value according to the coding depth information and the quantization information;
  • a sample compensation skipping module 102 configured to skip the calculation of sample adaptive compensation when the boundary information value is less than a preset threshold;
  • a mode screening module 103 is configured to, when the boundary information value is not less than the preset threshold, calculate edge strength values of different preset directions by an edge direction estimation algorithm, and screen the boundary compensation mode based on the edge strength values;
  • the sample compensation module 104 is configured to traverse the preset categories in the screened boundary compensation modes to perform sample adaptive compensation.
  • the boundary information value is calculated based on the coding depth information and the quantization information.
  • when the boundary information value is less than a preset threshold, the calculation of the sample adaptive compensation is skipped.
  • when the boundary information value is not less than the preset threshold, the edge strength values of different preset directions are calculated by the edge direction estimation algorithm, the boundary compensation modes are then screened based on the edge strength values, and the preset categories in the screened boundary compensation modes are traversed to perform sample adaptive compensation.
  • Under this sample adaptive compensation mechanism, a small boundary information value means that the video coding unit is a flat block, so the sample adaptive compensation is skipped.
  • in the non-skipped case, the edge strength values of the video coding unit in each preset direction are calculated, and the boundary compensation modes are screened based on the edge strength values.
  • each preset category in the screened boundary compensation modes is traversed, and part of the original boundary compensation modes is eliminated, thereby reducing the traversal of the preset categories under some boundary compensation modes and improving the overall coding efficiency and coding speed.
  • the boundary information determination module 101 is configured as follows:
  • An average coding depth value is calculated according to the coding depth values of the sub-units in the video coding unit;
  • a boundary information value is calculated based on the average coding depth value, the quantization information and the set weight.
  • the boundary information determination module 101 is configured as follows:
  • the video coding unit is divided into multiple sub-units, the partition depth value of each sub-unit is calculated, and the partition depth values are convolved with preset matrices to obtain edge strength values in different preset directions.
  • the preset matrix includes a plurality of matrices, and each preset matrix corresponds to a preset direction.
  • the boundary information determination module 101 is configured as follows:
  • the partition depth values of the minimum coding units in each of the sub-units are obtained, and the sum of the partition depth values of the minimum coding units is determined as the partition depth value of the sub-unit.
  • the mode screening module 103 is configured as follows:
  • the boundary compensation mode corresponding to the minimum edge strength value is eliminated.
  • the sample compensation module 104 is configured as follows:
  • luminance sample compensation of the video coding unit is performed in sequence, and the best luminance sample compensation mode is determined;
  • the chroma sample compensation modes are screened based on the best luminance sample compensation mode;
  • the preset categories in the screened chroma sample compensation modes are traversed, and the chroma sample compensation of the video encoding unit is performed in sequence.
  • the sample compensation module 104 is further configured to: filter the chroma sample compensation mode according to the video scene corresponding to the video encoding unit.
  • FIG8 is a schematic diagram of the structure of a sample adaptive compensation device in video coding provided by an embodiment of the present application.
  • the device includes a processor 201, a memory 202, an input device 203, and an output device 204; the number of processors 201 in the device may be one or more, and FIG8 takes one processor 201 as an example; the processor 201, the memory 202, the input device 203, and the output device 204 in the device may be connected via a bus or other means, and FIG8 takes the connection via a bus as an example.
  • the memory 202 can be used to store software programs, computer executable programs, and modules, such as program instructions/modules corresponding to the sample adaptive compensation method in video coding in the embodiment of the present application.
  • the processor 201 executes various functional applications and data processing of the device by running the software programs, instructions, and modules stored in the memory 202, that is, the sample adaptive compensation method in video coding described above is implemented.
  • the input device 203 can be used to receive input digital or character information, and generate key signal input related to user settings and function control of the device.
  • the output device 204 may include a display device such as a display screen.
  • the embodiment of the present application further provides a non-volatile storage medium containing computer executable instructions, wherein the computer executable instructions are used to perform the sample adaptive compensation method in video encoding described in the above embodiment when executed by a computer processor, including:
  • coding depth information and quantization information of the video coding unit are obtained, and the boundary information value is calculated according to the coding depth information and the quantization information;
  • when the boundary information value is less than a preset threshold, the calculation of sample adaptive compensation is skipped;
  • when the boundary information value is not less than the preset threshold, edge strength values of different preset directions are calculated by an edge direction estimation algorithm, and boundary compensation modes are screened based on the edge strength values;
  • the preset categories in the screened boundary compensation modes are traversed to perform sample adaptive compensation.
  • various aspects of the method provided by the present application may also be implemented in the form of a program product, which includes a program code.
  • when the program product is run on a computer device, the program code is used to enable the computer device to perform the steps of the method according to various exemplary implementations of the present application described above in this specification.
  • the computer device may perform the sample adaptive compensation method in video coding recorded in the embodiment of the present application.
  • the program product may be implemented in any combination of one or more readable media.


Abstract

Embodiments of the present application provide a sample adaptive compensation method and apparatus in video coding. The method includes: obtaining coding depth information and quantization information of a video coding unit, and calculating a boundary information value according to the coding depth information and the quantization information; skipping the calculation of sample adaptive compensation when the boundary information value is less than a preset threshold; when the boundary information value is not less than the preset threshold, calculating edge strength values in different preset directions by an edge direction estimation algorithm, and screening boundary compensation modes based on the edge strength values; and traversing the preset categories in the screened boundary compensation modes to perform sample adaptive compensation. This scheme optimizes the sample adaptive compensation mechanism and improves the overall coding efficiency and coding speed.

Description

Sample adaptive compensation method and apparatus in video coding
This application claims priority to Chinese patent application No. 202211537637.5, filed with the China Patent Office on December 1, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present application relate to the technical field of video coding, and in particular to a sample adaptive compensation method and apparatus in video coding.
Background
With the development of the Internet, information is increasingly disseminated through video. To improve transmission efficiency, video must be encoded during transmission, so efficient and fast video coding is an important topic of current research.
In the related art, sample adaptive compensation (sample adaptive offset) technology is used during video encoding to suppress the ringing artifacts caused by quantization, and it also brings a noticeable compression-rate gain. In the sample adaptive compensation process, every pixel in a video frame is traversed, classified, and compensated to correct distortions such as convex and concave corners along a given edge direction. As a pixel-level decision and compensation process, it incurs a heavy encoding overhead. Meanwhile, the increased computational complexity of the post-processing module to which sample adaptive compensation belongs reduces the overall encoding speed; for example, under multi-threading, a lagging post-processing thread forces too many encoding threads to wait, significantly slowing down encoding. This needs improvement.
Summary
Embodiments of the present application provide a sample adaptive compensation method and apparatus in video coding, which solve the problems in the related art of excessive encoding overhead and significantly reduced encoding speed caused by sample adaptive compensation, optimize the sample adaptive compensation mechanism, and improve the overall coding efficiency and coding speed.
In a first aspect, an embodiment of the present application provides a sample adaptive compensation method in video coding, the method including:
obtaining coding depth information and quantization information of a video coding unit, and calculating a boundary information value according to the coding depth information and the quantization information;
skipping the calculation of sample adaptive compensation when the boundary information value is less than a preset threshold;
when the boundary information value is not less than the preset threshold, calculating edge strength values in different preset directions by an edge direction estimation algorithm, and screening boundary compensation modes based on the edge strength values;
traversing the preset categories in the screened boundary compensation modes to perform sample adaptive compensation.
In a second aspect, an embodiment of the present application further provides a sample adaptive compensation apparatus in video coding, including:
a boundary information determination module, configured to obtain coding depth information and quantization information of a video coding unit, and calculate a boundary information value according to the coding depth information and the quantization information;
a sample compensation skipping module, configured to skip the calculation of sample adaptive compensation when the boundary information value is less than a preset threshold;
a mode screening module, configured to, when the boundary information value is not less than the preset threshold, calculate edge strength values in different preset directions by an edge direction estimation algorithm, and screen boundary compensation modes based on the edge strength values;
a sample compensation module, configured to traverse the preset categories in the screened boundary compensation modes to perform sample adaptive compensation.
In a third aspect, an embodiment of the present application further provides a sample adaptive compensation device in video coding, the device including:
one or more processors;
a storage apparatus for storing one or more programs,
wherein when the one or more programs are executed by the one or more processors, the one or more processors implement the sample adaptive compensation method in video coding described in the embodiments of the present application.
In a fourth aspect, an embodiment of the present application further provides a non-volatile storage medium storing computer-executable instructions which, when executed by a computer processor, are used to perform the sample adaptive compensation method in video coding described in the embodiments of the present application.
In a fifth aspect, an embodiment of the present application further provides a computer program product including a computer program stored in a computer-readable storage medium; at least one processor of a device reads and executes the computer program from the computer-readable storage medium, causing the device to perform the sample adaptive compensation method in video coding described in the embodiments of the present application.
In the embodiments of the present application, coding depth information and quantization information of a video coding unit are obtained, and a boundary information value is calculated based on them. When the boundary information value is less than a preset threshold, the calculation of sample adaptive compensation is skipped. When the boundary information value is not less than the preset threshold, edge strength values in different preset directions are calculated by an edge direction estimation algorithm, boundary compensation modes are screened based on the edge strength values, and the preset categories in the screened boundary compensation modes are traversed to perform sample adaptive compensation. Under this processing mechanism, a small boundary information value means that the video coding unit is a flat block, so sample adaptive compensation is skipped; in the non-skipped case, the edge strength value of the video coding unit in each preset direction is calculated, the boundary compensation modes are screened based on these values, and each preset category in the screened modes is traversed. Part of the original boundary compensation modes is thus eliminated, which reduces the traversal of the preset categories under some boundary compensation modes and improves the overall coding efficiency and coding speed.
Brief Description of the Drawings
FIG. 1 is a flowchart of a sample adaptive compensation method in video coding provided by an embodiment of the present application;
FIG. 2 is a flowchart of a method for calculating a boundary information value in sample adaptive compensation provided by an embodiment of the present application;
FIG. 3 is a flowchart of a method for calculating edge strength values in sample adaptive compensation provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of the convolution matrices used in calculating edge strength values provided by an embodiment of the present application;
FIG. 5 is a flowchart of a method for screening compensation modes based on edge strength values provided by an embodiment of the present application;
FIG. 6 is a flowchart of another sample adaptive compensation method in video coding provided by an embodiment of the present application;
FIG. 7 is a structural block diagram of a sample adaptive compensation apparatus in video coding provided by an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a sample adaptive compensation device in video coding provided by an embodiment of the present application.
Detailed Description
The embodiments of the present application are described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the embodiments of the present application, not to limit them. It should also be noted that, for ease of description, the drawings show only the parts related to the embodiments of the present application rather than the entire structure.
The terms "first", "second", and the like in the specification and claims of the present application are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described here. Objects distinguished by "first", "second", and so on are usually of one class, and the number of objects is not limited; for example, there may be one first object or multiple first objects. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The sample adaptive compensation method in video coding provided in the embodiments of the present application can be applied to scenarios where video needs to be encoded, such as sample adaptive compensation when encoding video generated in a live-streaming scenario, or sample adaptive compensation when encoding video generated during video shooting. The method can be executed by a computing device such as a smartphone, a server, a laptop, or a tablet. When the computing device needs to send video, the sample adaptive compensation method of the present application can be used during video encoding. Optionally, the sample adaptive compensation method can be integrated into an existing video coding module based on the High Efficiency Video Coding international standard, or a separate sample adaptive processing module can be provided that performs sample adaptive processing after the video coding module encodes the video frames, so that the frames are finally encoded into a binary bitstream and uploaded to the network.
FIG. 1 is a flowchart of a sample adaptive compensation method in video coding provided by an embodiment of the present application, which specifically includes the following steps:
Step S101: obtain coding depth information and quantization information of a video coding unit, and calculate a boundary information value according to the coding depth information and the quantization information.
In one embodiment, the object processed by sample compensation is a video coding unit. Illustratively, taking the H.265 video coding standard as an example, it may be the defined CTU (Coding Tree Unit), whose size may be 64×64; for other video coding standards, the size may also be 128×128, 32×32, or 16×16. A video coding unit of fixed size can be recursively partitioned; for example, a 64×64 video coding unit can be split into four 32×32 sub-units, or into three 32×32 sub-units and four 16×16 sub-units. In another embodiment, the video coding unit may be a basic unit defined in another video compression standard.
In one embodiment, the coding depth information is the best partitioning selected for the current video coding unit after recursive traversal in the coding analysis stage of the video frame; the quantization information is the quantization parameter actually used when encoding the current video coding unit, which reflects how the spatial detail of the video frame is compressed during encoding. Illustratively, for the H.264 and H.265 video coding standards, it may be the specific QP (Quantization Parameter) value. Optionally, the coding depth information and quantization information of the video coding unit can be obtained from the analysis stage of a current mainstream encoder during video encoding.
In one embodiment, after the coding depth information and quantization information of the video coding unit are obtained, the boundary information value corresponding to the video coding unit is calculated based on them. The boundary information value reflects the flatness of the video coding unit. Optionally, one process of calculating the boundary information value from the coding depth information and quantization information is shown in FIG. 2, which is a flowchart of a method for calculating a boundary information value in sample adaptive compensation provided by an embodiment of the present application, specifically including:
Step S1011: calculate an average coding depth value according to the coding depth values of the sub-units in the video coding unit.
In one embodiment, the average coding depth value calculated from the sub-units of the video coding unit is used as the coding depth value of the video coding unit. Taking a 64×64 video coding unit as an example, cu_depth(x) is recorded in 4×4 units and corresponds to the partition depth at the x-th small block, with possible values 0, 1, 2, and 3, so the average can be written as avgDepth = sum(cu_depth(x)) / 256, where avgDepth denotes the average coding depth value of the video coding unit and sum() is the summation function over all 4×4 blocks. The coding depth values of the sub-units can be obtained from the encoding results in the encoder.
Step S1012: calculate the boundary information value based on the average coding depth value, the quantization information, and a set weight.
In one embodiment, after the average coding depth value of the video coding unit is calculated, the boundary information value is further calculated in combination with the quantization information and a set weight. Illustratively, taking the quantization parameter QP as the quantization information, denoting the weight as a and the boundary information value as ES, the boundary information value ES is computed from avgDepth, QP, and a.
The weight coefficient a can be assigned different constant values according to the actual situation, for example 20 or 25.
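The boundary computation and skip decision can be sketched as follows. Note that the exact formula combining avgDepth, QP, and the weight a is given as an image in the original, so the combination used here (ES = avgDepth × QP / a) is only an assumed illustrative form, and the threshold of 3 is merely one value from the stated 2-5 range:

```python
# Sketch of steps S1011/S1012 and the skip decision of step S102.
# cu_depth is recorded per 4x4 block (values 0-3); a 64x64 CTU has 256 blocks.
def average_coding_depth(cu_depth):
    """Average partition depth over the 4x4-block depth map of one CTU."""
    return sum(cu_depth) / len(cu_depth)

def boundary_information_value(cu_depth, qp, a=20):
    """Boundary information value ES; the combination formula is assumed."""
    avg_depth = average_coding_depth(cu_depth)
    return avg_depth * qp / a  # a is typically a constant such as 20 or 25

def skip_sao(es, threshold=3):
    """Skip SAO for flat blocks; the threshold is chosen from the 2-5 range."""
    return es < threshold
```

A completely flat CTU (all depths 0) always yields ES = 0 and is skipped, while a deeply partitioned CTU passes on to edge-strength estimation.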
It should be noted that the above way of calculating the boundary information value is a preferred method; any other conventional or alternative method can be used, and this scheme does not limit it.
Step S102: skip the calculation of sample adaptive compensation when the boundary information value is less than a preset threshold.
In one embodiment, after the boundary information value is calculated, whether to skip the calculation of sample adaptive compensation is determined based on it. Optionally, when the boundary information value is less than the preset threshold, the calculation of sample adaptive compensation is skipped. A smaller boundary information value means a flatter video coding unit, and the flatter the video coding unit, the lower the benefit of sample adaptive compensation; therefore, the calculated boundary information value is compared with the preset threshold, and sample adaptive compensation is skipped when it is below the threshold. Optionally, the preset threshold may range from 2 to 5, that is, a suitable value within this range is chosen for different usage scenarios.
Step S103: when the boundary information value is not less than the preset threshold, calculate edge strength values in different preset directions by an edge direction estimation algorithm, and screen boundary compensation modes based on the edge strength values.
In one embodiment, when the boundary information value is not less than the preset threshold, the corresponding sample adaptive compensation calculation is performed. Optionally, the calculation of sample adaptive compensation includes two processes: boundary compensation and band compensation. Boundary compensation is divided into 4 modes according to the direction of pixel traversal, illustratively denoted the horizontal mode (EO_0), the vertical mode (EO_1), the 135° direction mode (EO_2), and the 45° direction mode (EO_3). Each boundary compensation mode includes multiple preset categories; for example, according to the relationship between the current pixel and its two neighbors along the mode direction, there are 5 preset categories, illustratively denoted sharp concave corner, ordinary concave corner, ordinary convex corner, sharp convex corner, and others. The first two preset categories are concave cases, where the current pixel value is lower than its two neighbors and a positive compensation value is applied; for the two convex categories, a negative compensation value is applied to reduce the protrusion of the current pixel; no compensation is applied to the "others" category. Boundary compensation is thus divided into 20 specific traversal categories (4 boundary compensation modes, each containing 5 preset categories), with different compensation values for different categories. Band compensation classifies pixels by their values; for example, when processing 8-bit video, H.265 divides the pixel values 0 to 255 equally into 32 bands, each containing 8 consecutive pixel values. The encoder collects statistics on the pixel values within a given band and computes the mean, then transmits the difference between the original pixel mean and the reconstructed pixel mean to the decoder, which applies the parsed difference as compensation within the band, narrowing the gap between the reconstructed mean and the original mean. Band compensation is divided into 32 categories, with different compensation values for different categories.
As can be seen from the above, sample adaptive compensation includes 52 different categories of compensation, so the traversal computation of sample adaptive compensation creates a very obvious bottleneck in the encoder. In one embodiment, when the boundary information value is not less than the preset threshold, edge strength values in different preset directions are calculated by an edge direction estimation algorithm, and the boundary compensation modes are then screened in turn to eliminate unreasonable boundary compensation modes and reduce the encoder's computation load.
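The 5-category classification described above can be sketched as a comparison of the current pixel with its two neighbors along the mode direction; the category names follow the text, while the mapping to concrete offsets is left out since the compensation values are decided by the encoder:

```python
# Sketch of the 5 preset categories of boundary (edge offset) compensation.
def classify_eo(left, cur, right):
    """Classify `cur` against its two neighbours along one EO direction."""
    if cur < left and cur < right:
        return "sharp_concave"      # local minimum -> positive offset
    if (cur < left and cur == right) or (cur == left and cur < right):
        return "ordinary_concave"   # half-valley -> positive offset
    if (cur > left and cur == right) or (cur == left and cur > right):
        return "ordinary_convex"    # half-peak -> negative offset
    if cur > left and cur > right:
        return "sharp_convex"       # local maximum -> negative offset
    return "others"                 # no compensation applied
```

Running this classifier over every pixel, for every surviving mode, is exactly the per-pixel traversal whose cost the screening in the next steps is designed to reduce.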
Optionally, an exemplary process for calculating the edge strength values is shown in FIG. 3, which is a flowchart of a method for calculating edge strength values in sample adaptive compensation provided by an embodiment of the present application, specifically including:
Step S1031: divide the video coding unit into multiple sub-units, and calculate the partition depth value of each sub-unit separately.
Illustratively, taking a 64×64 video coding unit as an example, it is divided into four 32×32 sub-units, and the partition depth value of each sub-unit is calculated separately. Optionally, one way of calculating the partition depth value of each sub-unit is to obtain the partition depth values of the minimum coding units in the sub-unit and determine the sum of those partition depth values as the partition depth value of the sub-unit.
Here, subCUDepth(i) denotes the partition depth value of the i-th sub-unit, the values of i correspond to the different sub-units, and j indexes the minimum coding units within each sub-unit; illustratively, the size of the minimum coding unit may be 4×4.
Optionally, subCUDepth(i) can be stored in matrix form. Illustratively, a 64×64 video coding unit is divided equally into four 32×32 sub-units; suppose i = 0, 1, 2, 3 correspond to the top-left, top-right, bottom-left, and bottom-right sub-units respectively. Accordingly, a 2×2 matrix stores the partition depth values of the top-left, top-right, bottom-left, and bottom-right sub-units.
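Step S1031 can be sketched as follows; the function and variable names are illustrative, with the 2×2 layout (top-left, top-right, bottom-left, bottom-right) taken from the text:

```python
# Sketch of step S1031: each sub-unit's partition depth is the sum of the
# partition depths of its minimum (e.g. 4x4) coding units; the four sub-unit
# values are stored as a 2x2 matrix (i = 0..3 -> TL, TR, BL, BR).
def sub_unit_depth(min_cu_depths):
    """Partition depth value of one sub-unit (sum over its minimum CUs)."""
    return sum(min_cu_depths)

def depth_matrix(per_sub_unit_min_cu_depths):
    """Build the 2x2 subCUDepth matrix from the four sub-units' CU depths."""
    d = [sub_unit_depth(s) for s in per_sub_unit_min_cu_depths]
    return [[d[0], d[1]], [d[2], d[3]]]
```

The resulting 2×2 matrix is the input to the direction-wise convolution of step S1032.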
Step S1032: convolve the partition depth values with preset matrices to obtain edge strength values in different preset directions, where there are multiple preset matrices, each corresponding to one preset direction.
In one embodiment, after the partition depth value of each sub-unit of the video coding unit is calculated, the edge strength values of the video coding unit in different preset directions are calculated. Illustratively, there may be 4 preset directions (horizontal, vertical, 135°, and 45°), corresponding to the 4 boundary compensation modes of boundary compensation. It should be noted that the preset directions can be adapted to different encoders and sample adaptive compensation algorithms: the number of preset directions can be increased or decreased, and the specific direction angles can also be adjusted. Optionally, with the partition depth values computed and stored as the aforementioned 2×2 matrix, one way of calculating the edge strength values is to convolve the partition depth values with preset matrices to obtain the edge strength values in different preset directions. Taking the horizontal, vertical, 135°, and 45° directions as an example, the corresponding preset matrices are shown in FIG. 4, which is a schematic diagram of the convolution matrices used in calculating edge strength values provided by an embodiment of the present application. After convolution with each preset matrix, because of how the matrix values are set, the pixel variation differences of the video coding unit in the horizontal, vertical, 135°, and 45° directions are obtained and expressed as edge strength values, which serve as the basis for the subsequent screening of boundary compensation modes.
In another embodiment, for processing devices with strong computing power, the edge strength values can also be calculated with an edge extraction operator such as the Marr-Hildreth operator.
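Step S1032 can be sketched as below. Since both the depth matrix and each preset matrix are 2×2, the "convolution" reduces to an element-wise product and sum. The actual kernel values are shown in FIG. 4 of the patent, which is an image and is not reproduced here; the difference-style kernels below are assumptions for illustration only:

```python
# Sketch of step S1032 with assumed 2x2 direction kernels (FIG. 4 not shown).
PRESET_MATRICES = {
    "horizontal": [[1, 1], [-1, -1]],   # top row vs bottom row
    "vertical":   [[1, -1], [1, -1]],   # left column vs right column
    "deg135":     [[1, 0], [0, -1]],    # main-diagonal difference
    "deg45":      [[0, 1], [-1, 0]],    # anti-diagonal difference
}

def edge_strengths(sub_cu_depth):
    """sub_cu_depth: 2x2 matrix [[TL, TR], [BL, BR]] of sub-unit depths."""
    strengths = {}
    for direction, kernel in PRESET_MATRICES.items():
        acc = 0
        for r in range(2):
            for c in range(2):
                acc += sub_cu_depth[r][c] * kernel[r][c]
        strengths[direction] = abs(acc)
    return strengths
```

A flat depth matrix yields zero strength in every direction, which is what the elimination condition of step S1034 guards against.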
In one embodiment, after the edge strength values in the different directions are calculated, the process further includes screening the boundary compensation modes. Illustratively, as shown in FIG. 5, which is a flowchart of a method for screening compensation modes based on edge strength values provided by an embodiment of the present application, it specifically includes:
Step S1033: sort the calculated edge strength values of the different preset directions.
Step S1034: when the edge strength values satisfy an elimination condition, eliminate the boundary compensation mode corresponding to the minimum edge strength value.
In one embodiment, each edge strength value corresponds to one preset direction, that is, one boundary compensation mode, and reflects the pixel variation in that direction. After the edge strength values of the different preset directions are calculated, they are sorted, and the boundary compensation mode corresponding to the minimum edge strength value is eliminated. Illustratively, with the boundary compensation modes being the horizontal mode, the vertical mode, the 135° direction mode, and the 45° direction mode, suppose the 45° direction mode has the smallest edge strength value; it is eliminated and the remaining three boundary compensation modes are retained. Optionally, the number of modes to eliminate can be set according to actual needs.
In one embodiment, eliminating a boundary compensation mode includes judging whether the edge strength values satisfy the elimination condition. Optionally, the elimination condition may be that the maximum edge strength value is not 0, which avoids the larger error that eliminating boundary compensation modes would otherwise introduce.
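Steps S1033/S1034 can be sketched as a sort followed by a conditional drop; the mode names and the single-mode default are illustrative, since the text notes the number of eliminated modes is configurable:

```python
# Sketch of steps S1033/S1034: drop the weakest-direction mode(s), but only
# when the elimination condition (maximum strength is non-zero) holds, so
# that flat, directionless blocks keep all modes.
def screen_boundary_modes(strengths, n_eliminate=1):
    """strengths: dict mapping EO mode name -> edge strength value."""
    if max(strengths.values()) == 0:       # elimination condition not met
        return sorted(strengths)            # keep every mode
    ranked = sorted(strengths, key=strengths.get)  # ascending strength
    return sorted(ranked[n_eliminate:])     # drop the weakest mode(s)
```

With four directions and one elimination, the per-CTU traversal drops from 20 edge-offset categories to 15.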
Step S104: traverse the preset categories in the screened boundary compensation modes to perform sample adaptive compensation.
In one embodiment, when performing sample adaptive compensation of the video coding unit, the preset categories in the screened boundary compensation modes are traversed to perform sample adaptive compensation. Illustratively, if the screened boundary compensation modes are the horizontal mode, the vertical mode, and the 135° direction mode, then the 5 categories set in each of them (sharp concave corner, ordinary concave corner, ordinary convex corner, sharp convex corner, and others) are traversed to complete the boundary compensation of the video coding unit.
As can be seen from the above, by obtaining the coding depth information and quantization information of the video coding unit, calculating the boundary information value based on them, skipping the calculation of sample adaptive compensation when the boundary information value is less than the preset threshold, and otherwise calculating edge strength values in different preset directions by the edge direction estimation algorithm, screening the boundary compensation modes based on the edge strength values, and traversing the preset categories in the screened boundary compensation modes to perform sample adaptive compensation, this processing mechanism skips sample adaptive compensation when the boundary information value is small (meaning the video coding unit is a flat block); in the non-skipped case, it calculates the edge strength value in each preset direction of the video coding unit, screens the boundary compensation modes based on these values, and traverses each preset category in the screened modes. Part of the original boundary compensation modes is eliminated, which reduces the traversal of the preset categories under some boundary compensation modes and improves the overall coding efficiency and coding speed.
FIG. 6 is a flowchart of another sample adaptive compensation method in video coding provided by an embodiment of the present application, giving a method for traversing the preset categories in the screened boundary compensation modes to perform sample adaptive compensation, which specifically includes:
Step S201: obtain coding depth information and quantization information of a video coding unit, and calculate a boundary information value according to the coding depth information and the quantization information.
Step S202: skip the calculation of sample adaptive compensation when the boundary information value is less than a preset threshold.
Step S203: when the boundary information value is not less than the preset threshold, calculate edge strength values in different preset directions by an edge direction estimation algorithm, and screen boundary compensation modes based on the edge strength values.
Step S204: while traversing the preset categories in the screened boundary compensation modes, perform luminance sample compensation of the video coding unit in sequence, and determine the best luminance sample compensation mode.
In one embodiment, taking the case where the boundary compensation modes comprise the four modes (horizontal, vertical, 135° direction, and 45° direction) and the preset categories comprise the five categories (sharp concave corner, ordinary concave corner, ordinary convex corner, sharp convex corner, and others) as an example, the five categories in each screened boundary compensation mode are traversed, and the luminance sample compensation of the video coding unit is performed in sequence during traversal. The luminance sample compensation can be performed based on encoder settings or an existing luminance sample compensation method, which is not limited here.
In one embodiment, the process of luminance sample compensation also includes the step of determining the best luminance sample compensation mode. Optionally, the number of pixels in the video coding unit belonging to each boundary compensation mode and the mean value to be compensated can be calculated in turn, compensation performed and the rate-distortion cost calculated, and the best luminance sample compensation mode selected by cost comparison.
Step S205: screen the chroma sample compensation modes based on the best luminance sample compensation mode, traverse the preset categories in the screened chroma sample compensation modes, and perform chroma sample compensation of the video coding unit in sequence.
Taking the YUV video coding format as an example, the luminance and chrominance components are similar along edges. In one embodiment, this similarity is exploited: the best luminance sample compensation mode determined in luminance sample compensation constrains the candidate modes to be traversed in chroma sample compensation. Illustratively, assuming the best luminance sample compensation mode is the 135° direction mode, then during chroma sample compensation, only the chroma sample compensation corresponding to the 135° direction mode and the chroma sample compensation corresponding to the inherent modes set in chroma sample compensation may be performed.
As can be seen from the above, during chroma sample compensation, the best luminance sample compensation mode decided during luminance sample compensation is used to screen the chroma sample compensation modes, which further reduces the number of mode traversals during sample compensation and improves the encoding speed.
On the basis of the above technical solution, the determination of the chroma sample compensation modes also includes: screening the chroma sample compensation modes according to the video scene corresponding to the video coding unit. In one embodiment, experimental comparison and design experience show that the proportions of the chroma sample compensation modes differ across video scenes. For example, in live-streaming scenes, the horizontal mode and the vertical mode account for a low proportion of the chroma sample compensation modes, so these two modes can be eliminated when screening the chroma sample compensation modes; in building scenes, the 135° direction mode and the 45° direction mode account for a low proportion, so these two modes can be eliminated when screening. Thus, screening the chroma sample compensation modes by video scene can effectively reduce the number of traversals and improve the overall efficiency of video encoding while ensuring the accuracy of the sample compensation.
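The luma-guided constraint of step S205 can be sketched as follows; the "inherent" always-tried set below (band offset plus the off decision) is an assumption for illustration, since the patent only states that such a fixed set exists:

```python
# Sketch of step S205: chroma candidates = best luma EO mode + inherent set.
INHERENT_CHROMA_MODES = ["SAO_OFF", "BAND_OFFSET"]  # assumed fixed set

def chroma_candidates(best_luma_mode):
    """Candidate chroma SAO modes given the best luma EO mode."""
    return [best_luma_mode] + INHERENT_CHROMA_MODES
```

Compared with traversing all four edge-offset directions for chroma, only the single direction that won for luma is retried, plus the inherent modes.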
FIG. 7 is a structural block diagram of a sample adaptive compensation apparatus in video coding provided by an embodiment of the present application. The apparatus is used to execute the sample adaptive compensation method in video coding provided by the above embodiments, and has the functional modules and beneficial effects corresponding to that method. As shown in FIG. 7, the apparatus specifically includes: a boundary information determination module 101, a sample compensation skipping module 102, a mode screening module 103, and a sample compensation module 104, wherein:
the boundary information determination module 101 is configured to obtain coding depth information and quantization information of a video coding unit, and calculate a boundary information value according to the coding depth information and the quantization information;
the sample compensation skipping module 102 is configured to skip the calculation of sample adaptive compensation when the boundary information value is less than a preset threshold;
the mode screening module 103 is configured to, when the boundary information value is not less than the preset threshold, calculate edge strength values in different preset directions by an edge direction estimation algorithm, and screen boundary compensation modes based on the edge strength values;
the sample compensation module 104 is configured to traverse the preset categories in the screened boundary compensation modes to perform sample adaptive compensation.
As can be seen from the above solution, by obtaining the coding depth information and quantization information of the video coding unit, calculating the boundary information value based on them, skipping the calculation of sample adaptive compensation when the boundary information value is less than the preset threshold, and otherwise calculating edge strength values in different preset directions by the edge direction estimation algorithm, screening the boundary compensation modes based on the edge strength values, and traversing the preset categories in the screened boundary compensation modes to perform sample adaptive compensation, this processing mechanism skips sample adaptive compensation when the boundary information value is small (meaning the video coding unit is a flat block); in the non-skipped case, it calculates the edge strength value in each preset direction of the video coding unit, screens the boundary compensation modes based on these values, and traverses each preset category in the screened modes. Part of the original boundary compensation modes is eliminated, which reduces the traversal of the preset categories under some boundary compensation modes and improves the overall coding efficiency and coding speed.
In a possible embodiment, the boundary information determination module 101 is configured to:
calculate an average coding depth value according to the coding depth values of the sub-units in the video coding unit;
calculate the boundary information value based on the average coding depth value, the quantization information, and a set weight.
In a possible embodiment, the boundary information determination module 101 is configured to:
divide the video coding unit into multiple sub-units, and calculate the partition depth value of each sub-unit separately;
convolve the partition depth values with preset matrices to obtain edge strength values in different preset directions, where there are multiple preset matrices, each corresponding to one preset direction.
In a possible embodiment, the boundary information determination module 101 is configured to:
obtain the partition depth values of the minimum coding units in each sub-unit, and determine the sum of the partition depth values of the minimum coding units as the partition depth value of the sub-unit.
In a possible embodiment, the mode screening module 103 is configured to:
sort the calculated edge strength values of the different preset directions;
when the edge strength values satisfy an elimination condition, eliminate the boundary compensation mode corresponding to the minimum edge strength value.
In a possible embodiment, the sample compensation module 104 is configured to:
while traversing the preset categories in the screened boundary compensation modes, perform luminance sample compensation of the video coding unit in sequence, and determine a best luminance sample compensation mode;
screen chroma sample compensation modes based on the best luminance sample compensation mode;
traverse the preset categories in the screened chroma sample compensation modes, performing chroma sample compensation of the video coding unit in sequence.
In a possible embodiment, the sample compensation module 104 is further configured to: screen the chroma sample compensation modes according to the video scene corresponding to the video coding unit.
FIG. 8 is a schematic structural diagram of a sample adaptive compensation device in video coding provided by an embodiment of the present application. As shown in FIG. 8, the device includes a processor 201, a memory 202, an input apparatus 203, and an output apparatus 204; there may be one or more processors 201 in the device, and FIG. 8 takes one processor 201 as an example; the processor 201, memory 202, input apparatus 203, and output apparatus 204 in the device may be connected via a bus or in other ways, and FIG. 8 takes connection via a bus as an example. The memory 202, as a computer-readable storage medium, can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the sample adaptive compensation method in video coding in the embodiments of the present application. The processor 201 executes the various functional applications and data processing of the device by running the software programs, instructions, and modules stored in the memory 202, that is, implements the sample adaptive compensation method in video coding described above. The input apparatus 203 can be used to receive input numeric or character information and to generate key signal input related to user settings and function control of the device. The output apparatus 204 may include a display device such as a display screen.
An embodiment of the present application further provides a non-volatile storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the sample adaptive compensation method in video coding described in the above embodiments, including:
obtaining coding depth information and quantization information of a video coding unit, and calculating a boundary information value according to the coding depth information and the quantization information;
skipping the calculation of sample adaptive compensation when the boundary information value is less than a preset threshold;
when the boundary information value is not less than the preset threshold, calculating edge strength values in different preset directions by an edge direction estimation algorithm, and screening boundary compensation modes based on the edge strength values;
traversing the preset categories in the screened boundary compensation modes to perform sample adaptive compensation.
It is worth noting that, in the above embodiment of the sample adaptive compensation apparatus in video coding, the units and modules included are divided only according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for ease of mutual distinction and are not intended to limit the scope of protection of the embodiments of the present application.
In some possible implementations, various aspects of the method provided by the present application can also be implemented in the form of a program product, which includes program code; when the program product is run on a computer device, the program code is used to cause the computer device to perform the steps of the methods according to the various exemplary implementations of the present application described above in this specification; for example, the computer device can perform the sample adaptive compensation method in video coding recorded in the embodiments of the present application. The program product can be implemented using any combination of one or more readable media.

Claims (11)

  1. A sample adaptive compensation method in video coding, comprising:
    obtaining coding depth information and quantization information of a video coding unit, and calculating a boundary information value according to the coding depth information and the quantization information;
    skipping the calculation of sample adaptive compensation when the boundary information value is less than a preset threshold;
    when the boundary information value is not less than the preset threshold, calculating edge strength values in different preset directions by an edge direction estimation algorithm, and screening boundary compensation modes based on the edge strength values;
    traversing the preset categories in the screened boundary compensation modes to perform sample adaptive compensation.
  2. The sample adaptive compensation method in video coding according to claim 1, wherein said calculating a boundary information value according to the coding depth information and the quantization information comprises:
    calculating an average coding depth value according to the coding depth values of the sub-units in the video coding unit;
    calculating the boundary information value based on the average coding depth value, the quantization information, and a set weight.
  3. The sample adaptive compensation method in video coding according to claim 1, wherein said calculating edge strength values in different preset directions by an edge direction estimation algorithm comprises:
    dividing the video coding unit into multiple sub-units, and calculating the partition depth value of each sub-unit separately;
    convolving the partition depth values with preset matrices to obtain edge strength values in different preset directions, wherein there are multiple preset matrices, each corresponding to one preset direction.
  4. The sample adaptive compensation method in video coding according to claim 3, wherein said calculating the partition depth value of each sub-unit separately comprises:
    obtaining the partition depth values of the minimum coding units in each sub-unit, and determining the sum of the partition depth values of the minimum coding units as the partition depth value of the sub-unit.
  5. The sample adaptive compensation method in video coding according to claim 1, wherein said screening boundary compensation modes based on the edge strength values comprises:
    sorting the calculated edge strength values of the different preset directions;
    when the edge strength values satisfy an elimination condition, eliminating the boundary compensation mode corresponding to the minimum edge strength value.
  6. The sample adaptive compensation method in video coding according to any one of claims 1-5, wherein said traversing the preset categories in the screened boundary compensation modes to perform sample adaptive compensation comprises:
    while traversing the preset categories in the screened boundary compensation modes, performing luminance sample compensation of the video coding unit in sequence, and determining a best luminance sample compensation mode;
    screening chroma sample compensation modes based on the best luminance sample compensation mode;
    traversing the preset categories in the screened chroma sample compensation modes, performing chroma sample compensation of the video coding unit in sequence.
  7. The sample adaptive compensation method in video coding according to claim 6, wherein the determination of the chroma sample compensation modes further comprises:
    screening the chroma sample compensation modes according to the video scene corresponding to the video coding unit.
  8. A sample adaptive compensation apparatus in video coding, comprising:
    a boundary information determination module, configured to obtain coding depth information and quantization information of a video coding unit, and calculate a boundary information value according to the coding depth information and the quantization information;
    a sample compensation skipping module, configured to skip the calculation of sample adaptive compensation when the boundary information value is less than a preset threshold;
    a mode screening module, configured to, when the boundary information value is not less than the preset threshold, calculate edge strength values in different preset directions by an edge direction estimation algorithm, and screen boundary compensation modes based on the edge strength values;
    a sample compensation module, configured to traverse the preset categories in the screened boundary compensation modes to perform sample adaptive compensation.
  9. A sample adaptive compensation device in video coding, the device comprising: one or more processors; and a storage apparatus for storing one or more programs, wherein when the one or more programs are executed by the one or more processors, the one or more processors implement the sample adaptive compensation method in video coding according to any one of claims 1-7.
  10. A non-volatile storage medium storing computer-executable instructions which, when executed by a computer processor, are used to perform the sample adaptive compensation method in video coding according to any one of claims 1-7.
  11. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the sample adaptive compensation method in video coding according to any one of claims 1-7.
PCT/CN2023/132735 2022-12-01 2023-11-20 Sample adaptive compensation method and apparatus in video coding WO2024114432A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211537637.5A 2022-12-01 2022-12-01 Sample adaptive compensation method and apparatus in video coding
CN202211537637.5 2022-12-01

Publications (1)

Publication Number Publication Date
WO2024114432A1 true WO2024114432A1 (zh) 2024-06-06

Family

ID=86028771

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/132735 WO2024114432A1 (zh) 2022-12-01 2023-11-20 视频编码中的样点自适应补偿方法及装置

Country Status (2)

Country Link
CN (1) CN116016937A (zh)
WO (1) WO2024114432A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116016937A (zh) * 2022-12-01 2023-04-25 百果园技术(新加坡)有限公司 视频编码中的样点自适应补偿方法及装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140376619A1 (en) * 2013-06-19 2014-12-25 Apple Inc. Sample adaptive offset control
CN107343199A (zh) * 2017-06-29 2017-11-10 武汉大学 Fast adaptive compensation method for samples in HEVC
US20180091812A1 (en) * 2016-09-23 2018-03-29 Apple Inc. Video compression system providing selection of deblocking filters parameters based on bit-depth of video data
CN108259903A (zh) * 2018-04-10 2018-07-06 重庆邮电大学 H.265 sample adaptive compensation method based on human-eye regions of interest
CN114501035A (zh) * 2022-01-26 2022-05-13 百果园技术(新加坡)有限公司 Video encoding/decoding filtering processing method, system, device, and storage medium
CN116016937A (zh) * 2022-12-01 2023-04-25 百果园技术(新加坡)有限公司 Sample adaptive compensation method and apparatus in video coding


Also Published As

Publication number Publication date
CN116016937A (zh) 2023-04-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23896584

Country of ref document: EP

Kind code of ref document: A1