WO2022116246A1 - Inter-frame prediction method, video encoding and decoding method, apparatus, and medium - Google Patents

Inter-frame prediction method, video encoding and decoding method, apparatus, and medium

Info

Publication number
WO2022116246A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
block
current block
inter
illumination compensation
Prior art date
Application number
PCT/CN2020/135490
Other languages
English (en)
French (fr)
Inventor
黄航
谢志煌
Original Assignee
Oppo Guangdong Mobile Communication Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/CN2020/133709 external-priority patent/WO2022116119A1/zh
Application filed by Oppo Guangdong Mobile Communication Co., Ltd.
Priority to CN202311171521.9A priority Critical patent/CN117221534A/zh
Priority to MX2023006442A priority patent/MX2023006442A/es
Priority to CN202080107715.0A priority patent/CN116569554A/zh
Publication of WO2022116246A1 publication Critical patent/WO2022116246A1/zh


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • H04N19/517Processing of motion vectors by encoding
    • H04N19/52Processing of motion vectors by encoding by predictive encoding

Definitions

  • the embodiments of the present disclosure relate to, but are not limited to, video coding and decoding technologies, and more particularly, to an inter-frame prediction method, a video coding and decoding method, an apparatus, and a medium.
  • The video coding process mainly includes intra-frame prediction, inter-frame prediction, transformation, quantization, entropy coding, in-loop filtering, and other stages.
  • Video decoding is equivalent to the reverse process of video encoding.
  • LIC (Local Illumination Compensation) is an inter-frame prediction technique for compensating local illumination changes between the current block and its reference block.
  • the LIC technology uses the adjacent pixels of the current block and the adjacent pixels of the reference block to construct a linear model of illumination compensation, and then derives the prediction block of the current block according to the linear model of illumination compensation.
  • To support this, the buffer cache needs to store more content, which greatly increases the extra overhead and burden on the hardware.
  • An embodiment of the present disclosure provides an inter-frame prediction method, including:
  • the available pixels include pixels adjacent to the current block and/or pixels of the reference block of the current block;
  • An embodiment of the present disclosure provides an inter-frame prediction apparatus, including: a processor and a memory storing a computer program executable on the processor, wherein the processor implements the above inter-frame prediction method when executing the computer program.
  • An embodiment of the present disclosure provides a video encoding method, including:
  • the current block is encoded according to the predicted value of the current block.
  • An embodiment of the present disclosure provides a video encoding apparatus, including: a processor and a memory storing a computer program executable on the processor, wherein the processor implements the above video encoding method when executing the computer program.
  • An embodiment of the present disclosure provides a video decoding method, including:
  • the current block is decoded according to the predicted value of the current block.
  • An embodiment of the present disclosure provides a video decoding apparatus, including: a processor and a memory storing a computer program executable on the processor, wherein the processor implements the above video decoding method when executing the computer program.
  • An embodiment of the present disclosure provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the above inter-frame prediction method, the above video encoding method, or the above video decoding method.
  • FIG. 1 is a schematic diagram of obtaining pixels adjacent to a current block and a reference block in the related art
  • FIG. 2 is a schematic flowchart of an inter-frame prediction method provided by an exemplary embodiment of the present disclosure
  • FIG. 3 is a schematic flowchart of an inter-frame prediction method provided by an exemplary embodiment of the present disclosure
  • 4a is a schematic diagram of obtaining adjacent pixels of a current block and pixels of a reference block in an example of the disclosure
  • 4b is a schematic diagram of obtaining adjacent pixels of a current block and pixels of a reference block in an example of the disclosure
  • 4c is a schematic diagram of obtaining adjacent pixels of a current block and pixels of a reference block in an example of the disclosure
  • 4d is a schematic diagram of obtaining adjacent pixels of a current block and pixels of a reference block in an example of the disclosure
  • 4e is a schematic diagram of obtaining adjacent pixels of a current block and pixels of a reference block in an example of the disclosure
  • 4f is a schematic diagram of obtaining adjacent pixels of a current block and pixels of a reference block in an example of the disclosure
  • FIG. 5 is a schematic structural diagram of an inter-frame prediction apparatus according to an exemplary embodiment of the present disclosure
  • FIG. 6 is a schematic flowchart of a video encoding method provided by an exemplary embodiment of the present disclosure
  • FIG. 7 is a schematic flowchart of a video encoding method provided by an exemplary embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of a video encoding apparatus according to an exemplary embodiment of the present disclosure.
  • FIG. 9 is a schematic flowchart of a video decoding method provided by an exemplary embodiment of the present disclosure.
  • FIG. 10 is a schematic flowchart of a video decoding method provided by an exemplary embodiment of the present disclosure.
  • FIG. 11 is a schematic structural diagram of a video decoding apparatus according to an exemplary embodiment of the present disclosure.
  • FIG. 12 is a structural block diagram of a video encoder according to an exemplary embodiment of the present disclosure.
  • FIG. 13 is a structural block diagram of a video decoder provided by an exemplary embodiment of the present disclosure.
  • FIG. 14a is a schematic diagram of a pixel whose illumination compensation mode index is IC_TL provided by an exemplary embodiment of the present disclosure
  • FIG. 14b is a schematic diagram of a pixel whose illumination compensation mode index is IC_T provided by an exemplary embodiment of the present disclosure
  • FIG. 14c is a schematic diagram of a pixel with an illumination compensation mode index IC_L provided by an exemplary embodiment of the present disclosure
  • FIG. 15a is a schematic diagram of available pixels in an IC_TL mode provided by an exemplary embodiment of the present disclosure
  • FIG. 15b is a schematic diagram of available pixels in the IC_T mode provided by an exemplary embodiment of the present disclosure.
  • FIG. 15c is a schematic diagram of available pixels in the IC_L mode provided by an exemplary embodiment of the present disclosure.
  • FIG. 16 is a schematic diagram of obtaining adjacent pixels of a current block and pixels of a reference block according to an exemplary embodiment of the present disclosure.
  • Image block: in video encoding and decoding, the image block is the basic unit on which various encoding or decoding operations are performed, for example image-block-based prediction, transform, and entropy coding.
  • An image block may also be referred to as a coding unit CU (Coding Unit).
  • An image block refers to a two-dimensional array of sampling points, which can be a square array or a rectangular array. For example, a 4x8 image block can be regarded as a rectangular array consisting of 32 sampling points.
  • Current block refers to the image block currently being processed, which may also be referred to as the current coding unit CU.
  • the current block refers to the image block currently being encoded; in decoding, the current block refers to the image block currently being decoded.
  • Reference block refers to an image block that provides a reference signal for the current block, and may also be referred to as a reference coding unit CU.
  • the pixel points of the reference block refer to the pixel points contained in the reference block.
  • Prediction block refers to a block that provides prediction for the current block, and may also be referred to as a prediction coding unit CU.
  • When constructing an illumination compensation linear model in the related art, it is necessary to obtain the pixels adjacent to the reference block. For example, as shown in Figure 1, the pixels in the row above and in the column to the left of the current block are selected (indicated by the black circles in the figure), the pixels in the row above and in the column to the left of the reference block are selected (also the black circles in the figure), and the reconstructed values of these two groups of pixels are then used as the samples for building the linear model.
  • In order to obtain the relevant information of the pixels adjacent to the reference block, the cache needs to store more content, which greatly increases the extra overhead and burden on the hardware, and also increases the overhead and burden on the bandwidth.
  • Some embodiments of the present disclosure provide an inter-frame prediction method.
  • the method includes:
  • Step 201: Determine local illumination compensation parameters according to the acquired available pixels; the available pixels include pixels adjacent to the current block and/or pixels of the reference block of the current block;
  • Step 202: Perform inter-frame prediction on the current block according to the local illumination compensation parameters to obtain a predicted value of the current block.
  • the pixels adjacent to the current block include pixels in a row adjacent to the current block and/or pixels in a column adjacent to the current block.
  • the pixels of the reference block include pixels in a row above the reference block and/or pixels in a column to the left of the reference block.
  • the pixel points of the reference block include pixels in a row other than the upper row of the reference block and/or pixels in a column to the left of the reference block.
  • the pixels of the reference block include pixels in a row above the reference block and/or pixels in a column other than the left column of the reference block.
  • the pixels of the reference block include pixels in a row of the reference block other than the upper row and/or pixels in a column of the reference block other than a left column.
  • the pixels in a row include all or part of the pixels in the row; the pixels in a column include all or part of the pixels in the column;
  • the partial pixels are selected by down-sampling, or selected according to preset positions.
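As an illustrative sketch (not part of the disclosure; the function name and sample values are hypothetical), the three selection strategies described above — all pixels, down-sampling, and preset positions — can be expressed as:

```python
def select_pixels(line, mode="all", step=2, positions=None):
    """Select all or part of the pixels in one row/column of neighbouring samples.

    mode="all":        every pixel in the line
    mode="downsample": every `step`-th pixel (e.g. step=2 for 1/2 down-sampling)
    mode="preset":     pixels at fixed index positions (e.g. second and last)
    """
    if mode == "all":
        return list(line)
    if mode == "downsample":
        return list(line[::step])
    if mode == "preset":
        return [line[i] for i in positions]
    raise ValueError(f"unknown mode: {mode}")

# A row of 8 reconstructed samples above an 8-pixel-wide block:
top_row = [10, 12, 14, 16, 18, 20, 22, 24]
all_px = select_pixels(top_row)                                # all 8 pixels
half_px = select_pixels(top_row, "downsample", step=2)         # 1/2 down-sampled
fixed_px = select_pixels(top_row, "preset", positions=[1, 7])  # second and last
```

The same helper would apply unchanged to a column of left-neighbouring samples.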
  • the local illumination compensation parameters are determined in one or more of the following ways:
  • a preset algorithm is used to determine the local illumination compensation parameter according to the reconstruction values of the pixels adjacent to the current block and the predicted values of the pixels of the reference block;
  • a preset algorithm is used to determine the local illumination compensation parameter according to the reconstruction values of the pixels adjacent to the current block and the reconstruction values of the pixels of the reference block.
  • the predicted value of the pixel point of the reference block includes one or more of the following:
  • the pixel values of the pixels in the prediction block obtained after the reference block undergoes motion compensation; the pixel values of the pixels in the prediction block obtained after the reference block undergoes motion compensation and bidirectional optical flow BIO or bidirectional gradient correction BGC.
  • the technical solution of the exemplary embodiment provided by the present disclosure eliminates the need to select pixels adjacent to the reference block, reduces the content that needs to be stored in the cache, and can reduce the extra overhead and burden of hardware.
  • Some embodiments of the present disclosure provide an inter-frame prediction method.
  • the method includes:
  • Step 301: Obtain the pixels adjacent to the current block and the pixels of the reference block of the current block;
  • Before step 301, the method further includes: acquiring a reference block of the current block.
  • In different inter-frame prediction modes, the method of determining the reference block differs.
  • For example, in some modes the MV of the current block is derived from the motion vector (MV, Motion Vector) of an adjacent block, so as to determine the reference block.
  • The ordinary inter prediction mode derives the MV of the current block through a motion estimation algorithm, so as to determine the reference block.
  • Before step 301, the method further includes: determining whether the current inter-frame prediction mode needs to perform local illumination compensation.
  • If local illumination compensation is not needed, the subsequent local illumination compensation steps are not performed.
  • If local illumination compensation is needed, step 301 is executed.
  • The pixels adjacent to the current block include pixels in a row adjacent to the current block and/or pixels in a column adjacent to the current block; the pixels of the reference block include pixels in the upper row of the reference block and/or pixels in the left column of the reference block.
  • The pixels in a row adjacent to the current block include all pixels in the row adjacent to the top of the current block; the pixels in a column adjacent to the current block include all pixels in the column adjacent to the left of the current block;
  • The pixels in the upper row of the reference block include all pixels in the row above the reference block; the pixels in the left column of the reference block include all pixels in the column to the left of the reference block.
  • For example, the pixels adjacent to the current block selected as available pixels include all 8 pixels in the row adjacent to the top of the current block and all 4 pixels in the column adjacent to the left.
  • The pixels of the reference block selected as available pixels include all 8 pixels in the upper row of the reference block and all 4 pixels in the left column (indicated by black circles in the figure).
  • The pixels in a row adjacent to the current block include part of the pixels in the row adjacent to the top of the current block; the pixels in a column adjacent to the current block include part of the pixels in the column adjacent to the left of the current block;
  • the pixels in the row above the reference block include part of the pixels in the row above the reference block; the pixels in the left column of the reference block include part of the pixels in the left column of the reference block;
  • the partial pixel points are selected after down-sampling.
  • For example, pixels down-sampled by 1/2 can be selected from the above pixels (indicated by the black circles in the figure).
  • The pixels adjacent to the current block include pixels in a row adjacent to the current block and/or pixels in a column adjacent to the current block; the pixels of the reference block include all or part of the pixels in the upper row of the reference block, and/or all or part of the pixels in a column other than the left column of the reference block.
  • For example, all the pixels in the upper row of the reference block and all the pixels in the second column from the left can be selected (indicated by the black circles in the figure).
  • The pixels adjacent to the current block include pixels in a row adjacent to the current block and/or pixels in a column adjacent to the current block; the pixels of the reference block include all or part of the pixels in a row other than the upper row of the reference block, and/or all or part of the pixels in the left column of the reference block.
  • For example, all the pixels in the second row from the top of the reference block and all the pixels in the left column can be selected.
  • The pixels adjacent to the current block include pixels in a row adjacent to the current block and/or pixels in a column adjacent to the current block; the pixels of the reference block include all or part of the pixels in a row other than the upper row of the reference block, and/or all or part of the pixels in a column other than the left column of the reference block.
  • For example, all the pixels in the second row from the top of the reference block and all the pixels in the second column from the left can be selected.
  • The pixels in a row adjacent to the current block include part of the pixels in the row adjacent to the top of the current block; the pixels in a column adjacent to the current block include part of the pixels in the column adjacent to the left of the current block;
  • the pixels in the row above the reference block include part of the pixels in the row above the reference block; the pixels in the left column of the reference block include part of the pixels in the left column of the reference block;
  • the partial pixel points are selected according to preset positions.
  • the preset positions of the current block are the second pixel point and the last pixel point from left to right in the adjacent row, and the second pixel point and the last pixel point from top to bottom in the adjacent column;
  • The preset positions of the reference block are the second and the last pixel from left to right in the upper row, and the second and the last pixel from top to bottom in the left column (indicated by the black circles in the figure).
  • the preset position can be preset.
  • FIGS. 4a, 4b, 4c, 4d, 4e, and 4f are only exemplary descriptions.
  • Step 302: Use a preset algorithm to determine the local illumination compensation parameters according to the reconstruction values of the pixels adjacent to the current block and the predicted values or reconstruction values of the pixels of the reference block;
  • In one case, the predicted values of the pixels of the reference block after motion compensation are used as the predicted values of the pixels of the reference block to calculate the local illumination compensation parameters;
  • In another case, the reconstruction values of the pixels of the reference block in the reference image are used as the reconstruction values of the pixels of the reference block to calculate the local illumination compensation parameters.
  • The pixel values of the pixels in the prediction block obtained after the reference block undergoes motion compensation (MC, Motion Compensation) may be used directly as the predicted values of the pixels of the reference block after motion compensation; alternatively, the pixel values of the pixels in the prediction block obtained after motion compensation followed by bidirectional optical flow (BIO) or bidirectional gradient correction (BGC) may be used as the predicted values.
  • the local illumination compensation parameters include a parameter a and a parameter b, where a is a scaling factor, and b is an offset.
  • the preset algorithm may be any existing algorithm for calculating linear model parameters, which is not limited herein.
  • the preset algorithm may be the least squares method.
  • the maximum value and the minimum value can be derived according to a certain method to calculate the linear model parameters a and b.
  • other data are used to calculate the linear model parameters a and b to avoid the influence of outliers.
  • For example, the two points with the smallest values and the two points with the largest values are selected and interpolated to obtain new minimum and maximum points, which are then used to calculate the linear model parameters a and b according to a certain method, and so on.
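A minimal sketch of the max/min derivation just described, assuming simple averaging of the two smallest and the two largest sample pairs (the function name and the averaging rule are illustrative, not taken from the disclosure):

```python
def minmax_model(ref, rec):
    """Derive (a, b) from the two smallest and two largest reference samples.

    ref: predicted/reconstructed values of the reference-block pixels
    rec: reconstructed values of the pixels adjacent to the current block
    """
    pairs = sorted(zip(ref, rec), key=lambda p: p[0])
    # Average the two smallest and the two largest points to suppress outliers.
    x_min = (pairs[0][0] + pairs[1][0]) / 2.0
    y_min = (pairs[0][1] + pairs[1][1]) / 2.0
    x_max = (pairs[-1][0] + pairs[-2][0]) / 2.0
    y_max = (pairs[-1][1] + pairs[-2][1]) / 2.0
    a = (y_max - y_min) / (x_max - x_min) if x_max != x_min else 1.0
    b = y_min - a * x_min
    return a, b

# Samples lying exactly on rec = 2*ref + 3 recover a = 2, b = 3:
a, b = minmax_model([10, 20, 30, 40], [23, 43, 63, 83])
```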
  • the preset algorithm is the least square method
  • the local illumination compensation parameter includes a scaling coefficient a and an offset b.
  • In the formula, 2N represents the total number of pixels adjacent to the current block and pixels of the reference block,
  • Rec_neig represents the reconstructed pixel values of the pixels adjacent to the current block, and
  • Ref_inner represents the predicted pixel values or reconstructed pixel values of the pixels of the reference block.
  • the predicted pixel value of the pixel point of the reference block is the pixel value of the pixel point in the predicted block obtained after the reference block is subjected to motion compensation; or, the reference block is subjected to motion compensation and bidirectional optical flow BIO or bidirectional gradient The pixel value of the pixel point in the prediction block obtained after BGC is corrected.
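The least-squares formula itself is not reproduced in this text; the sketch below uses the standard closed-form least-squares solution over the 2N sample pairs (an assumption — the disclosure's exact formula may differ, e.g. in integer arithmetic and rounding):

```python
def least_squares_model(ref_inner, rec_neig):
    """Least-squares fit of rec_neig ≈ a * ref_inner + b.

    ref_inner: predicted or reconstructed values of the reference-block pixels
    rec_neig:  reconstructed values of the pixels adjacent to the current block
    """
    n = len(ref_inner)  # corresponds to 2N in the text above
    sx, sy = sum(ref_inner), sum(rec_neig)
    sxx = sum(x * x for x in ref_inner)
    sxy = sum(x * y for x, y in zip(ref_inner, rec_neig))
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom if denom != 0 else 1.0
    b = (sy - a * sx) / n
    return a, b
```

On noiseless samples drawn from rec = 2*ref + 3 this recovers a = 2, b = 3 exactly.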
  • the prediction value of the pixel point of the reference block may be preprocessed first, and then the preprocessed prediction value may be used to determine the local illumination compensation parameter by using a preset algorithm.
  • the preprocessing includes one or more of the following operations: filtering, smoothing, denoising, and so on.
  • the reconstruction value of the pixel point of the reference block may be preprocessed first, and then the preprocessed reconstruction value may be used to determine the local illumination compensation parameter by using a preset algorithm.
  • the preprocessing includes one or more of the following operations: filtering, smoothing, denoising, and so on.
  • Step 303: Perform inter-frame prediction on the current block according to the local illumination compensation parameters to obtain a predicted value of the current block.
  • a prediction based on local illumination compensation may be performed on the current block according to a linear model including the local illumination compensation parameters, to obtain a prediction value of the current block.
  • the linear model can be any existing illumination compensation linear model, which is not limited here.
  • For example, the linear model may be CU_P(x, y) = a × CU_C(x, y) + b, where CU_P represents the prediction block of the current block, CU_C represents the reference block of the current block, and (x, y) are the position coordinates.
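Applying a linear model of the form CU_P(x, y) = a × CU_C(x, y) + b can be sketched as follows (illustrative only; the rounding and the clipping to the bit-depth sample range are assumptions, not taken from the disclosure):

```python
def illumination_compensate(ref_block, a, b, bit_depth=8):
    """Apply pred(x, y) = a * ref(x, y) + b, clipped to the valid sample range."""
    max_val = (1 << bit_depth) - 1
    return [[min(max_val, max(0, round(a * px + b))) for px in row]
            for row in ref_block]

pred = illumination_compensate([[100, 200]], a=1.1, b=5)  # [[115, 225]]
```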
  • the inter-frame prediction method in the exemplary embodiment provided by the present disclosure can be applied to an encoding device, and can also be applied to a decoding device.
  • The encoding apparatus and the decoding apparatus may include any of a wide range of devices, including desktop computers, notebook (e.g., laptop) computers, tablet computers, set-top boxes, telephone handsets, televisions, cameras, display devices, digital media players, video game consoles, video streaming devices, and the like.
  • the encoding device and the decoding device may be equipped for wireless communication or wired communication.
  • the technical solution of the exemplary embodiment provided by the present disclosure eliminates the need to select pixels adjacent to the reference block, reduces the content that needs to be stored in the cache, and can reduce the extra overhead and burden of hardware.
  • the reference block may be further divided into a preset number of reference block sub-blocks, and each reference block sub-block extracts preset pixels of the sub-block according to the illumination compensation mode index.
  • The available pixels further include pixels in the upper row and/or left column of each reference block sub-block, together with pixels in the row adjacent to the top of the current block and/or pixels in the column adjacent to the left of the current block.
  • Illumination compensation is performed on the reference block; that is, the prediction of each sub-block of the reference block is linearly offset according to the linear model parameters corresponding to that sub-block to obtain the prediction value of the current coding unit.
  • The linear offset is performed by dividing the reference block into sub-blocks.
  • Each sub-block can calculate its own linear model, or a single linear model can be used for all sub-blocks.
  • In the inter-frame prediction process at the encoding end, if the IC allowed flag is '1', all of the following steps are executed; if the IC allowed flag is '0', only steps (a), (b) and (f) are executed.
  • Inter prediction first traverses all candidate MVs for motion compensation, calculates the motion-compensated predicted pixels under each MV, and calculates the rate-distortion cost according to the original pixels.
  • A matched block is taken as the initial prediction block, and motion compensation is performed on the initial prediction block to obtain the reference block.
  • The reference block is divided into a preset number of sub-blocks. According to the illumination compensation mode index, each sub-block extracts the predicted pixels on its left and/or top inside the sub-block and the reconstructed pixels on the left and/or top outside the block to be encoded in the current frame; the predicted pixels and reconstructed pixels are sorted and averaged (the specific operations are as described above) and substituted into the above formula to obtain the linear model parameters a and b corresponding to each sub-block.
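The per-sub-block derivation described above can be sketched as follows. This is a hypothetical implementation: the least-squares helper, the square split, and the co-location of the reconstructed neighbours are assumptions, not the disclosure's exact procedure.

```python
def fit_model(ref, rec):
    """Helper: least-squares fit rec ≈ a * ref + b over the sample pairs."""
    n = len(ref)
    sx, sy = sum(ref), sum(rec)
    sxx = sum(x * x for x in ref)
    sxy = sum(x * y for x, y in zip(ref, rec))
    d = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / d if d != 0 else 1.0
    return a, (sy - a * sx) / n

def subblock_models(pred, rec_top, rec_left, splits=2):
    """One (a, b) per sub-block, fitted from the sub-block's own top row and
    left column of predicted pixels against the co-located reconstructed
    neighbours of the current block (IC_TL-style selection)."""
    h, w = len(pred), len(pred[0])
    sh, sw = h // splits, w // splits
    models = {}
    for by in range(splits):
        for bx in range(splits):
            y0, x0 = by * sh, bx * sw
            # Predicted pixels inside the sub-block: its top row and left column.
            ref = ([pred[y0][x0 + i] for i in range(sw)] +
                   [pred[y0 + j][x0] for j in range(sh)])
            # Co-located reconstructed neighbours of the current block.
            rec = ([rec_top[x0 + i] for i in range(sw)] +
                   [rec_left[y0 + j] for j in range(sh)])
            models[(by, bx)] = fit_model(ref, rec)
    return models
```

With splits=1 this degenerates to a single shared model, matching the case where only the IC_TL model is used for the whole block.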
  • If the illumination compensation mode index is marked as IC_TL, as shown in Figure 14a, the linear model is calculated from the pixels of the reference block sub-block together with the reconstructed pixels in the row adjacent to the top of the current block and in the column adjacent to the left; if the illumination compensation mode index is marked as IC_T, as shown in Figure 14b, the linear model can only be calculated using the pixels of the reference block sub-block and the pixels in the row adjacent to the top of the current block; if the illumination compensation mode index is marked as IC_L, as shown in Figure 14c, the linear model can only be calculated using the pixels of the reference block sub-block and the reconstructed pixels in the column adjacent to the left of the current block. The above index numbers are only for distinction.
  • only the IC_TL model may be used for all the above sub-blocks, so that the encoding end does not need to try the IC_T and IC_L models, and the decoding end does not need to parse the model index value.
  • In one case, the MV index and prediction mode recorded in (b) are encoded as the optimal information of the current coding unit and transmitted to the decoding end through the code stream, and the illumination compensation flag of the current coding unit is set to false, indicating that the illumination compensation technology is not used, which is also transmitted to the decoding end through the code stream; if the rate-distortion cost in (b) is smaller, the MV index, illumination compensation mode index, and prediction mode recorded in (b) are encoded as the optimal information of the current coding unit and transmitted to the decoding end through the code stream, the illumination compensation flag of the current coding unit is set to true, and the illumination compensation mode index of the current coding unit block, indicating that the illumination compensation technology is used, is also transmitted to the decoding end through the code stream.
  • the above-mentioned illumination compensation techniques are represented in Table 1 or Table 2 using an identifier syntax.
  • If the rate-distortion cost in (b) is smaller, the MV index and prediction mode recorded in (b) are encoded as the optimal information of the current coding unit and transmitted to the decoding end through the code stream, and the illumination compensation flag of the current coding unit is set to false, indicating that the illumination compensation technology is not used, which is also transmitted to the decoding end through the code stream; if the rate-distortion cost in (e) is smaller, the MV index, illumination compensation mode index, and prediction mode recorded in (e) are encoded as the optimal information of the current coding unit and transmitted to the decoding end through the code stream, the illumination compensation flag of the current coding unit is set to true, and the illumination compensation mode index of the current coding unit block, indicating that the illumination compensation technology is used, is also transmitted to the decoding end through the code stream.
  • the above-mentioned illumination compensation techniques are represented in Table 1 or Table 2 using an identifier syntax.
  • the final prediction block and the inversely transformed and inversely quantized residual are superimposed to obtain the reconstructed coding unit block, which is used as the prediction reference block of the next coding unit.
  • codeword transmission syntax in the illumination compensation process is shown in Table 1 or Table 2.
  • When the reference block is divided into four sub-blocks, schematic diagrams of the pixels used in the above illumination compensation modes are shown in Figures 15a, 15b and 15c, and the available pixels of each reference block sub-block are selected inside that sub-block.
  • The linear model of each sub-block is calculated using the pixels adjacent to the current block at the corresponding coordinates.
  • the method shown in FIG. 16 may also be used to obtain the adjacent pixels of the current block and the pixels of the reference block.
  • the specific implementation of the inter-frame prediction part at the encoding end may also be:
  • The encoder obtains the coding information, including the allowed flag of the inter-frame prediction illumination compensation (IC) technology, etc. After obtaining the image information, the image is divided into several CTUs, which are further divided into several CUs, and inter-frame prediction is performed on each independent CU; a minimum area and a maximum area can be set to limit the use of the IC technology for the current CU;
  • Inter prediction firstly traverses all candidate MVs for motion compensation, calculates the motion-compensated predicted pixels under each MV, and calculates the rate-distortion cost according to the original pixels.
  • If the current inter-frame prediction mode is the ordinary inter-frame prediction mode (INTER), the illumination compensation technology is enabled and the three illumination compensation modes are traversed.
  • The intermediate prediction block is divided into a preset number of sub-blocks. According to the illumination compensation mode index, each sub-block extracts the predicted pixels on its left and/or top inside the sub-block and the reconstructed pixels on the left and/or top outside the block to be encoded in the current frame; the predicted pixels and reconstructed pixels are sorted and averaged (the specific operations are as described above) and substituted into the above formula to obtain the linear model parameters a and b of each sub-block.
  • The above-mentioned illumination compensation mode index 1 is denoted IC_TL, and the linear model can be calculated using the reconstructed pixels on the left and upper sides of the reference block and the current block;
  • the illumination compensation mode index 2 is denoted IC_T, and the linear model can only be calculated using the reconstructed pixels on the upper side of the reference block and the current block;
  • the illumination compensation mode index 3 is denoted IC_L, and the linear model can only be calculated using the reconstructed pixels on the left side of the reference block and the current block.
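A minimal sketch of how the three mode indices select neighbouring reconstructed pixels. The function name, the frame layout, and the block coordinates are hypothetical; the mode semantics follow the three bullets above.

```python
import numpy as np

def neighbor_pixels(frame, x, y, w, h, mode):
    """Collect the reconstructed pixels adjacent to a w x h block at (x, y):
    IC_TL uses the row above and the column to the left, IC_T only the row
    above, IC_L only the left column."""
    pixels = []
    if mode in ("IC_TL", "IC_T") and y > 0:
        pixels.extend(frame[y - 1, x:x + w].tolist())  # row adjacent above
    if mode in ("IC_TL", "IC_L") and x > 0:
        pixels.extend(frame[y:y + h, x - 1].tolist())  # column adjacent to the left
    return pixels

frame = np.arange(36).reshape(6, 6)  # toy reconstructed frame
sel = neighbor_pixels(frame, 2, 2, 2, 2, "IC_TL")
```

The same selection is applied once to the current block's neighbours and once to the reference block, yielding the two sample sets the linear model is fitted from.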
  • the current inter-frame prediction mode is skip mode (SKIP) or merge mode (MERGE/DIRECT)
  • Obtain the IC usage flag and IC mode index of the available blocks adjacent to the current coding unit. If none of the adjacent blocks in the current frame uses the IC technique, the current coding block does not use the IC technique; if a reconstructed block among the adjacent blocks in the current frame uses the IC technique, the IC usage flag and IC mode index are obtained in a specific order
  • and used as the IC usage information of the current coding unit.
  • The above specific order is: top, left, top-right, top-left, bottom-left.
  • the intermediate prediction block is divided into several sub-blocks with a preset number, and the scaling factor a and the offset factor b are obtained respectively.
  • The MV index and prediction mode recorded in b) are encoded as the optimal information of the current coding unit and transmitted to the decoding end via the code stream. If the current inter-frame prediction mode is the normal inter-frame prediction mode (INTER), the illumination compensation flag of the current coding unit is set to "0", indicating that the illumination compensation technique is not used, and this flag is also transmitted to the decoding end through the code stream; otherwise the illumination compensation usage flag and mode index are not transmitted;
  • If the rate-distortion cost in e) is smaller: in the ordinary inter-frame prediction mode (INTER), the MV index, illumination compensation mode index and prediction mode recorded in e) are encoded as the optimal information of the current coding unit and transmitted to the decoding end via the code stream; the illumination compensation flag of the current coding unit is set to "1" and the illumination compensation mode index of the current coding unit is encoded, indicating that the illumination compensation technique is used, and these are also transmitted to the decoding end through the code stream. In skip mode (SKIP) or merge mode (MERGE), only the corresponding MV index and the corresponding inter-frame prediction mode need to be encoded, written into the code stream and transmitted to the decoding end.
  • the above-mentioned illumination compensation techniques are represented in Table 1 or Table 2 using an identifier syntax.
  • the final prediction block and the inversely transformed and inversely quantized residual are superimposed to obtain the reconstructed coding unit block, which is used as the prediction reference block of the next coding unit.
  • the apparatus includes: a processor and a memory storing a computer program executable on the processor,
  • wherein the processor implements the inter-frame prediction method described in any of the above examples when executing the computer program.
  • Some embodiments of the present disclosure provide a video encoding method.
  • the method includes:
  • Step 601 performing inter-frame prediction on the current block according to any of the above-mentioned inter-frame prediction methods to obtain the predicted value of the current block;
  • Step 602 Encode the current block according to the predicted value of the current block.
  • the operation of encoding the current block may be any existing encoding manner, which is not limited herein.
  • the current block is encoded by: subtracting the prediction block from the current block to form a residual block; transforming and quantizing the residual block; after quantization, performing entropy encoding to obtain encoded data.
  • the method further includes:
  • a new set of prediction modes is obtained; wherein the CU-level illumination compensation flags in the new set of prediction modes are all set to the valid value true;
  • the prediction mode corresponding to the current block is selected according to the result of the rate-distortion cost calculation, and whether local illumination compensation is performed for the current block is determined according to the prediction mode corresponding to the current block.
  • N is 6.
  • RDO Rate Distortion Optimization
  • Some embodiments of the present disclosure provide a video encoding method.
  • the method includes:
  • Step 701 obtaining a code stream, and determining that the prediction mode of the current block is an inter-frame prediction mode
  • Step 702 determine that the encoding mode of the current block is a merge mode
  • inter-frame prediction includes two prediction modes, namely merge mode and normal mode.
  • Step 703 derive a motion information list
  • Step 704 constructing a merge mode prediction information list according to the motion information list
  • the motion information list includes one or more of the following information: temporal motion information, spatial motion information, HMVP (History-based Motion Vector Prediction) motion information, and UMVE (Ultimate Motion Vector Expression) motion information.
  • Step 705 in the process of traversing the motion information list to construct the prediction information, obtain the reference block according to the motion information and the reference image, and perform motion compensation MC to obtain the prediction block;
  • Step 706 performing local illumination compensation on the motion-compensated prediction block
  • the performing local illumination compensation on the motion-compensated prediction block includes:
  • a preset algorithm is used to determine the local illumination compensation parameter
  • The predicted value of a pixel after motion compensation of the reference block is the pixel value of that pixel in the prediction block obtained after the motion compensation is performed.
  • illumination compensation may be performed on the prediction blocks in each direction separately, or illumination compensation may be performed on the prediction blocks after the prediction blocks in the two directions are combined.
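The two options above (compensating each directional prediction separately, or compensating once after the two directions are combined) can be illustrated with a small numeric sketch. The arrays and parameter values are made up for illustration; `lic` is a hypothetical helper.

```python
import numpy as np

def lic(block, a, b):
    """Apply the linear illumination compensation model a * p + b."""
    return a * block + b

p0 = np.array([[10.0, 20.0], [30.0, 40.0]])  # list-0 motion-compensated prediction
p1 = np.array([[12.0, 22.0], [32.0, 42.0]])  # list-1 motion-compensated prediction

# Option 1: compensate each direction separately, then average.
per_direction = (lic(p0, 1.0, 2.0) + lic(p1, 1.0, 4.0)) / 2

# Option 2: average the two directions first, then compensate once.
combined = lic((p0 + p1) / 2, 1.0, 3.0)
```

With these particular parameters the two options coincide; in general (e.g. differing scaling factors per direction) they need not.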
  • Step 707 performing BIO, BGC, interPF (inter prediction filter, inter prediction filter) processing on the prediction block after local illumination compensation;
  • Step 708 adding the inter-frame coding information with the illumination compensation technology to the prediction information list;
  • Step 709 determine whether the current block uses the local illumination compensation technology
  • the current block CU-level illumination compensation flag is set to true; otherwise, the current block CU-level illumination compensation flag is set to false.
  • Step 710 encode the current block.
  • The difference from the encoding method shown in FIG. 7 is that local illumination compensation is performed on the reference block before motion compensation, and motion compensation is then performed on the illumination-compensated reference block.
  • local illumination compensation is performed on the reference block, including:
  • a preset algorithm is used to determine the local illumination compensation parameter
  • the prediction value of the current block is obtained by predicting the current block according to the local illumination compensation parameter.
  • The difference from the encoding method shown in FIG. 7 is that, after motion compensation is performed, and after BIO, BGC and interPF processing of the motion-compensated prediction block, local illumination compensation is performed on the prediction block processed by BIO, BGC and interPF.
  • the performing local illumination compensation on the prediction block processed by BIO, BGC, and interPF includes:
  • a preset algorithm is used to determine the local illumination compensation parameter
  • The predicted value of a pixel after motion compensation of the reference block is the pixel value of that pixel in the prediction block obtained after BIO or BGC.
  • the apparatus includes: a processor and a memory storing a computer program executable on the processor, wherein, when the processor executes the computer program, the video encoding method described in any one of the above examples is implemented.
  • Some embodiments of the present disclosure provide a video decoding method.
  • the method includes:
  • Step 901 when local illumination compensation is enabled, perform inter-frame prediction on the current block according to any of the above-mentioned inter-frame prediction methods to obtain the predicted value of the current block;
  • Step 902 Decode the current block according to the predicted value of the current block.
  • the operation of decoding the current block may be any existing decoding manner, which is not limited herein.
  • the current block is decoded in the following manner: entropy decoding is performed on the current block; after entropy decoding, inverse quantization and inverse transformation are performed to obtain a residual block; the prediction block and the residual block are added and reconstructed to obtain the decoded data.
  • Before step 901, the method further includes:
  • the subsequent inter prediction mode does not need to perform local illumination compensation, that is, local illumination compensation is not enabled
  • the technical solutions of the exemplary embodiments provided by the present disclosure can reduce the extra overhead and burden of hardware in decoding, facilitate the hardware implementation of the decoder, and can bring significant performance gains to the existing encoding and decoding standards.
  • Some embodiments of the present disclosure provide a video decoding method.
  • the method includes:
  • Step 1001 parse the code stream to obtain the frame type of the current image, and determine that the frame type of the current image is a P frame or a B frame;
  • Step 1002 parse the code stream to obtain the encoding mode of the current block, and determine that the encoding mode of the current block is the merge mode;
  • Step 1003 parse the code stream to obtain current block motion information
  • Step 1004 parsing the code stream to obtain the current block CU-level illumination compensation flag flag
  • Step 1005 determine the reference block according to the motion information and the reference image
  • Step 1006 performing motion compensation on the reference block to obtain a prediction block
  • Step 1007 if the current block illumination compensation flag is true, then perform local illumination compensation on the motion-compensated prediction block; if the current block illumination compensation flag is false, then do not perform local illumination compensation on the motion-compensated prediction block;
  • the performing local illumination compensation on the motion-compensated prediction block includes:
  • a preset algorithm is used to determine the local illumination compensation parameter
  • The predicted value of a pixel after motion compensation of the reference block is the pixel value of that pixel in the prediction block obtained after the motion compensation is performed.
  • illumination compensation may be performed on the prediction blocks in each direction separately, or illumination compensation may be performed on the prediction blocks after the prediction blocks in the two directions are combined.
  • Step 1008 perform BIO, BGC, interPF processing on the prediction block
  • Step 1009 decoding to obtain the current block.
  • The difference from the decoding method shown in FIG. 10 is that local illumination compensation is performed on the reference block before motion compensation, and motion compensation is then performed on the illumination-compensated reference block.
  • local illumination compensation is performed on the reference block, including:
  • a preset algorithm is used to determine the local illumination compensation parameter
  • the current block is predicted according to the local illumination compensation parameter to obtain a predicted value of the current block.
  • The difference from the decoding method shown in FIG. 10 is that, after motion compensation is performed, and after BIO, BGC and interPF processing of the motion-compensated prediction block, local illumination compensation is performed on the prediction block processed by BIO, BGC and interPF.
  • the performing local illumination compensation on the prediction block processed by BIO, BGC, and interPF includes:
  • a preset algorithm is used to determine the local illumination compensation parameter
  • The predicted value of a pixel in the reference block after motion compensation is the pixel value of that pixel in the prediction block obtained after BIO or BGC.
  • the apparatus includes: a processor and a memory storing a computer program executable on the processor, wherein, when the processor executes the computer program, the video decoding method described in any one of the above examples is implemented.
  • Some embodiments of the present disclosure further provide a computer-readable storage medium storing a computer program, wherein, when the computer program is executed by a processor, the inter-frame prediction method described in any of the above examples, the video encoding method described in any of the above examples, or the video decoding method described in any of the above examples is implemented.
  • the video encoding method described in any of the above examples may be performed by a video encoder.
  • FIG. 12 is a structural block diagram of a video encoder.
  • the video encoder 20 includes a video data memory 33, a division unit 35, a prediction processing unit 41, a summer 50, a transform processing unit 52, a quantization unit 54, and an entropy encoding unit 56.
  • Prediction processing unit 41 includes a motion estimation unit (MEU) 42 , a motion compensation unit (MCU) 44 , an intra-prediction processing unit 46 , and an intra-block copy (IBC) unit 48 .
  • MEU 42, MCU 44, intra-prediction processing unit 46, and IBC unit 48 may actually be highly integrated.
  • video encoder 20 also includes inverse quantization unit 58 , inverse transform processing unit 60 , summer 62 , filter unit 64 , and decoded picture buffer (DPB) 66 .
  • DPB decoded picture buffer
  • the specific implementation of the inter-frame prediction part at the decoding end may be:
  • the decoder obtains the code stream, and parses the code stream to obtain the Illumination Compensation Technology (IC) permission flag of the current video sequence.
  • IC Illumination Compensation Technology
  • the current decoding unit does not use the illumination compensation technique. If the first illumination compensation mode index flag is true, parsing continues with the second illumination compensation mode index flag; otherwise the illumination compensation mode index is set to 1, indicating that the first illumination compensation linear model IC_TL is used (the upper and left reconstructed pixels can be used to calculate the illumination compensation linear model). If the second illumination compensation mode index flag parsed is true, the illumination compensation mode index is set to 3, indicating that the third illumination compensation linear model IC_L is used (only the left reconstructed pixels can be used for the illumination compensation linear model calculation); otherwise the illumination compensation mode index is set to 2, indicating that the second illumination compensation linear model IC_T is used (only the upper reconstructed pixels can be used for the illumination compensation linear model calculation).
  • the reference block obtained from the reference frame according to the MV index and the inter prediction mode and other information is used as the initial prediction block, and motion compensation is performed on the initial prediction block to obtain the intermediate prediction block.
  • the current intermediate prediction block is divided into a preset number of sub-blocks (i.e., reference-block sub-blocks), and the selection of pixels is shown in FIGS. 15a, 15b and 15c.
  • According to the illumination compensation mode index, each sub-block extracts the left and/or top predicted pixels inside the sub-block and the left and/or top reconstructed pixels outside the block to be encoded in the current frame, sorts and averages the predicted and reconstructed pixels (the specific operations are as described above), and substitutes them into the above formula to obtain the linear model parameters a and b corresponding to each sub-block.
  • Each sub-block linearly offsets all predicted pixels in the sub-block according to the corresponding scaling factor a and offset factor b to obtain the final predicted value;
  • the intermediate prediction block is regarded as the final prediction block.
  • step f) The final prediction block is superimposed on the residual information restored in step a) to obtain the reconstructed block of the current decoding unit, which is output after post-processing.
  • the positions and number of the reconstructed pixels in the reference frame obtained by the above illumination compensation technique can be any positions in the reference block in the reference frame and any integer greater than 0, such as the first row and/or first column in the reference block pointed to by the motion vector.
  • The above illumination compensation technique can be applied at any position relative to other techniques such as Bidirectional Optical Flow (BDOF/BIO), Decoder Motion Vector Correction (DMVR), Bidirectional Predictive Weighting (BCW/BGC), Inter Frame Prediction Filtering (INTERPF) or Combined Inter Prediction (CIIP); for example, the illumination compensation technique acts before the bidirectional optical flow technique and the bidirectional weighted prediction technique;
  • BDOF/BIO Bidirectional Optical Flow
  • DMVR Decoder Motion Vector Correction
  • BCW/BGC Bidirectional Predictive Weighting
  • INTERPF Inter Frame Prediction Filtering
  • CIIP Combined Inter Prediction
  • the above illumination compensation technology may not act on the same coding block together with other technologies. If the current block uses the IC technology, the current block will no longer be corrected and compensated using BDOF/BIO;
  • the above illumination compensation technology needs to recalculate the linear model between different color components.
  • the YUV color space needs to calculate the linear models corresponding to the three color components for Y, U, and V, respectively.
  • the specific implementation of the inter-frame prediction part at the decoding end may also be:
  • the decoder obtains the code stream, and parses the code stream to obtain the Illumination Compensation Technology (IC) permission flag of the current video sequence.
  • IC Illumination Compensation Technology
  • the inter-frame prediction mode of the current decoding block is an ordinary inter-frame prediction mode (INTER)
  • the code stream is parsed to obtain the IC usage flag of the current decoding block, and if the IC usage flag of the current decoding block is true, then continue parsing
  • the code stream obtains the first illumination compensation mode index flag of the current decoding block; otherwise the current decoding unit does not use the illumination compensation technique. If the first illumination compensation mode index flag is true, parsing continues with the second illumination compensation mode index flag; otherwise the illumination compensation mode index is set to 1, indicating that the first illumination compensation linear model IC_TL is used (the upper and left reconstructed pixels can be used for the illumination compensation linear model calculation). If the second illumination compensation mode index flag parsed is true, the illumination compensation mode index is set to 3, indicating that the third illumination compensation linear model IC_L is used (only the left reconstructed pixels can be used for the calculation); otherwise the illumination compensation mode index is set to 2, indicating that the second illumination compensation linear model IC_T is used (only the upper reconstructed pixels can be used for the calculation).
  • inter-frame prediction mode of the current decoding block is skip mode (SKIP) or merge mode (MERGE/DIRECT)
  • SKIP skip mode
  • MERGE/DIRECT merge mode
  • According to the illumination compensation mode index obtained by parsing, each sub-block obtains the predicted pixels inside the sub-block at the corresponding position and the reconstructed pixels outside the current frame coding unit to calculate the linear model parameters, obtaining the scaling factor a and offset factor b of each sub-block; each sub-block linearly offsets all the predicted pixels in the sub-block according to the obtained scaling factor a and offset factor b to obtain the final prediction block;
  • the intermediate prediction block is used as the final prediction block.
  • step f) The final prediction block is superimposed on the residual information restored in step a) to obtain the reconstructed block of the current decoding unit, which is output after post-processing.
  • In the above IC information derivation process in SKIP and MERGE/DIRECT modes, the IC mode index can also be bound to the neighboring blocks of the reference: if the reference MV comes from the adjacent upper block, the current prediction block uses the IC_T illumination compensation mode; if the reference MV comes from the adjacent left block, the current prediction block uses the IC_L illumination compensation mode; otherwise, the IC_TL illumination compensation mode is used.
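The neighbour-to-mode binding described above can be sketched as a small mapping. `mv_source` is a hypothetical label for where the reference MV was inherited from; the patent text only fixes the top/left/otherwise rule.

```python
def derive_ic_mode(mv_source):
    """Bind the IC mode to the neighbouring block the reference MV comes
    from: top neighbour -> IC_T, left neighbour -> IC_L, otherwise IC_TL."""
    if mv_source == "top":
        return "IC_T"
    if mv_source == "left":
        return "IC_L"
    return "IC_TL"
```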
  • the video decoding method described in any of the above examples may be performed by a video decoder.
  • FIG. 13 is a structural block diagram of a video decoder.
  • the video decoder 30 includes a video data memory 78, an entropy decoding unit 80, a prediction processing unit 81, an inverse quantization unit 86, an inverse transform processing unit 88, a summer 90, a filter unit 92 and a DPB 94.
  • the prediction processing unit 81 includes an MCU 82, an intra-frame prediction processing unit 84 and an IBC unit 85.
  • video decoder 30 may perform a decoding process that is substantially reciprocal to the encoding process described with respect to video encoder 20 from FIG. 12 .
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • Communication media typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and can include any information delivery media, as is well known to those of ordinary skill in the art.

Abstract

Embodiments of the present disclosure provide an inter prediction method, a video encoding/decoding method, an apparatus, and a medium. The inter prediction method disclosed in one example includes: determining local illumination compensation parameters according to obtained available pixels, the available pixels including pixels adjacent to the current block and/or pixels of the reference block of the current block; and performing inter prediction on the current block according to the local illumination compensation parameters to obtain the predicted value of the current block.

Description

Inter prediction method, video encoding/decoding method, apparatus, and medium
This application claims priority to the PCT application filed with the Chinese Patent Office on December 3, 2020, with application number PCT/CN2020/133693 and entitled "Inter prediction method, video encoding/decoding method, apparatus, and medium", and to the PCT application filed with the Chinese Patent Office on December 3, 2020, with application number PCT/CN2020/133709 and entitled "Inter prediction method, encoder, decoder, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to, but are not limited to, video encoding and decoding technology, and in particular to an inter prediction method, a video encoding/decoding method, an apparatus, and a medium.
Background
To transmit and store video signals effectively, the video signal needs to be compressed and encoded; video compression technology has increasingly become an indispensable key technology in the field of video applications.
The video encoding process mainly includes intra prediction, inter prediction, transform, quantization, entropy coding, in-loop filtering and other stages. Video decoding is essentially the inverse process of video encoding.
Local Illumination Compensation (LIC) is an inter prediction technique used to compensate for the illumination difference between the current block and the reference block. LIC constructs an illumination compensation linear model using the pixels adjacent to the current block and the pixels adjacent to the reference block, and then derives the prediction block of the current block from the illumination compensation linear model.
Constructing the illumination compensation linear model requires obtaining the pixels adjacent to the reference block. In a hardware implementation, to obtain the information of the pixels adjacent to the reference block, the cache needs to store more content, which greatly increases the extra overhead and burden of the hardware.
Summary
An embodiment of the present disclosure provides an inter prediction method, including:
determining local illumination compensation parameters according to obtained available pixels, the available pixels including pixels adjacent to the current block and/or pixels of the reference block of the current block;
performing inter prediction on the current block according to the local illumination compensation parameters to obtain the predicted value of the current block.
An embodiment of the present disclosure provides an inter prediction apparatus, including: a processor and a memory storing a computer program executable on the processor, wherein the processor implements the above inter prediction method when executing the computer program.
An embodiment of the present disclosure provides a video encoding method, including:
performing inter prediction on the current block according to the above inter prediction method to obtain the predicted value of the current block;
encoding the current block according to the predicted value of the current block.
An embodiment of the present disclosure provides a video encoding apparatus, including: a processor and a memory storing a computer program executable on the processor, wherein the processor implements the above video encoding method when executing the computer program.
An embodiment of the present disclosure provides a video decoding method, including:
when local illumination compensation is enabled, performing inter prediction on the current block according to the above inter prediction method to obtain the predicted value of the current block;
decoding the current block according to the predicted value of the current block.
An embodiment of the present disclosure provides a video decoding apparatus, including: a processor and a memory storing a computer program executable on the processor, wherein the processor implements the above video decoding method when executing the computer program.
An embodiment of the present disclosure provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the above inter prediction method, or implements the above video encoding method, or implements the above video decoding method.
Brief Description of the Drawings
The accompanying drawings are provided for an understanding of the exemplary embodiments of the present disclosure and constitute a part of the specification; together with the exemplary embodiments they serve to explain the technical solutions of the present disclosure and do not constitute a limitation on those technical solutions.
FIG. 1 is a schematic diagram of obtaining pixels adjacent to the current block and the reference block in the related art;
FIG. 2 is a schematic flowchart of an inter prediction method provided by an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of an inter prediction method provided by an exemplary embodiment of the present disclosure;
FIG. 4a is a schematic diagram of obtaining pixels adjacent to the current block and pixels of the reference block in an example of the present disclosure;
FIG. 4b is a schematic diagram of obtaining pixels adjacent to the current block and pixels of the reference block in an example of the present disclosure;
FIG. 4c is a schematic diagram of obtaining pixels adjacent to the current block and pixels of the reference block in an example of the present disclosure;
FIG. 4d is a schematic diagram of obtaining pixels adjacent to the current block and pixels of the reference block in an example of the present disclosure;
FIG. 4e is a schematic diagram of obtaining pixels adjacent to the current block and pixels of the reference block in an example of the present disclosure;
FIG. 4f is a schematic diagram of obtaining pixels adjacent to the current block and pixels of the reference block in an example of the present disclosure;
FIG. 5 is a schematic structural diagram of an inter prediction apparatus provided by an exemplary embodiment of the present disclosure;
FIG. 6 is a schematic flowchart of a video encoding method provided by an exemplary embodiment of the present disclosure;
FIG. 7 is a schematic flowchart of a video encoding method provided by an exemplary embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of a video encoding apparatus provided by an exemplary embodiment of the present disclosure;
FIG. 9 is a schematic flowchart of a video decoding method provided by an exemplary embodiment of the present disclosure;
FIG. 10 is a schematic flowchart of a video decoding method provided by an exemplary embodiment of the present disclosure;
FIG. 11 is a schematic structural diagram of a video decoding apparatus provided by an exemplary embodiment of the present disclosure;
FIG. 12 is a structural block diagram of a video encoder provided by an exemplary embodiment of the present disclosure;
FIG. 13 is a structural block diagram of a video decoder provided by an exemplary embodiment of the present disclosure;
FIG. 14a is a schematic diagram of pixels for the illumination compensation mode index IC_TL provided by an exemplary embodiment of the present disclosure;
FIG. 14b is a schematic diagram of pixels for the illumination compensation mode index IC_T provided by an exemplary embodiment of the present disclosure;
FIG. 14c is a schematic diagram of pixels for the illumination compensation mode index IC_L provided by an exemplary embodiment of the present disclosure;
FIG. 15a is a schematic diagram of available pixels in the IC_TL mode provided by an exemplary embodiment of the present disclosure;
FIG. 15b is a schematic diagram of available pixels in the IC_T mode provided by an exemplary embodiment of the present disclosure;
FIG. 15c is a schematic diagram of available pixels in the IC_L mode provided by an exemplary embodiment of the present disclosure;
FIG. 16 is a schematic diagram of obtaining pixels adjacent to the current block and pixels of the reference block provided by an exemplary embodiment of the present disclosure.
Detailed Description
The present disclosure describes a number of embodiments, but the description is exemplary rather than limiting, and it is obvious to those of ordinary skill in the art that there can be more embodiments and implementations within the scope of the embodiments described in the present disclosure. Although many possible feature combinations are shown in the drawings and discussed in the detailed description, many other combinations of the disclosed features are also possible. Unless specifically limited, any feature or element of any embodiment may be used in combination with, or may replace, any other feature or element of any other embodiment.
The present disclosure includes and contemplates combinations with features and elements known to those of ordinary skill in the art. The embodiments, features and elements already disclosed in the present disclosure may also be combined with any conventional feature or element to form unique inventive solutions defined by the claims. Any feature or element of any embodiment may also be combined with features or elements from other inventive solutions to form another unique inventive solution defined by the claims. It should therefore be understood that any feature shown and/or discussed in the present disclosure may be implemented alone or in any suitable combination. The embodiments are therefore not limited except by the appended claims and their equivalents. Furthermore, various modifications and changes may be made within the scope of protection of the appended claims.
In the present disclosure, words such as "exemplary" or "for example" are used to mean serving as an example, illustration or description. Any embodiment described in the present disclosure as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments.
Furthermore, in describing representative embodiments, the specification may have presented the method and/or process as a particular sequence of steps. However, to the extent that the method or process does not depend on the particular order of the steps described herein, the method or process should not be limited to the particular sequence of steps described. As one of ordinary skill in the art will appreciate, other sequences of steps are also possible. Therefore, the particular sequence of steps set forth in the specification should not be construed as a limitation on the claims. Furthermore, the claims directed to the method and/or process should not be limited to performing their steps in the order written; those skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the embodiments of the present disclosure.
Some concepts involved in the embodiments of the present disclosure are introduced first.
Image block: in the video encoding and decoding process, various encoding or decoding operations are performed with the image block as the basic unit, such as block-based prediction, transform, entropy coding, and so on. An image block may also be called a coding unit (CU). An image block refers to a two-dimensional array of sample points, which may be a square array or a rectangular array. For example, an image block of size 4x8 can be regarded as an array of 4x8 = 32 sample points.
Current block: the image block currently being processed, also called the current coding unit CU. For example, in encoding, the current block refers to the image block currently being encoded; in decoding, the current block refers to the image block currently being decoded.
Reference block: the image block that provides a reference signal for the current block, also called the reference coding unit CU. The pixels of the reference block are the pixels contained inside the reference block.
Prediction block: the block that provides a prediction for the current block, also called the prediction coding unit CU.
In the related art, constructing the illumination compensation linear model requires obtaining the pixels adjacent to the reference block. For example, as shown in FIG. 1, when constructing the illumination compensation linear model, the pixels in the row above and the column to the left of the current block (black circles in the figure) and the pixels in the row above and the column to the left of the reference block (black circles in the figure) are selected, and the reconstructed values of these two sets of selected pixels are used as samples for constructing the linear model.
In a hardware implementation, to obtain the information of the pixels adjacent to the reference block, the cache needs to store more content, which greatly increases the extra overhead and burden of the hardware and also increases the bandwidth overhead and burden.
Some embodiments of the present disclosure provide an inter prediction method. In an exemplary embodiment, as shown in FIG. 2, the method includes:
Step 201, determining local illumination compensation parameters according to the obtained available pixels; the available pixels include pixels adjacent to the current block and/or pixels of the reference block of the current block;
Step 202, performing inter prediction on the current block according to the local illumination compensation parameters to obtain the predicted value of the current block.
In an exemplary embodiment, the pixels adjacent to the current block include pixels in a row adjacent to the current block and/or pixels in a column adjacent to the current block.
In an exemplary embodiment, the pixels of the reference block include pixels in the top row of the reference block and/or pixels in the leftmost column of the reference block.
In an exemplary embodiment, the pixels of the reference block include pixels in a row of the reference block other than the top row and/or pixels in the leftmost column of the reference block.
In an exemplary embodiment, the pixels of the reference block include pixels in the top row of the reference block and/or pixels in a column of the reference block other than the leftmost column.
In an exemplary embodiment, the pixels of the reference block include pixels in a row of the reference block other than the top row and/or pixels in a column of the reference block other than the leftmost column.
In an exemplary embodiment, the pixels in a row include all or some of the pixels in that row; the pixels in a column include all or some of the pixels in that column;
where the partial pixels are selected by downsampling, or selected according to preset positions.
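The two ways of selecting partial pixels (downsampling, or preset positions) can be sketched as follows; the function name and its parameters are illustrative assumptions.

```python
def select_partial_pixels(line, step=2, preset_positions=None):
    """Select some of the pixels of a row or column: either every `step`-th
    pixel (downsampling) or the pixels at preset positions."""
    if preset_positions is not None:
        return [line[i] for i in preset_positions]
    return line[::step]

row = [7, 8, 9, 10, 11, 12, 13, 14]   # e.g. the row adjacent above a block
halved = select_partial_pixels(row)                           # 1/2 downsampling
picked = select_partial_pixels(row, preset_positions=[1, 7])  # preset positions
```

In FIG. 4f's example, the preset positions would correspond to the second and last pixels of the row and column.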
In an exemplary embodiment, the local illumination compensation parameters are determined in one or more of the following ways:
determining the local illumination compensation parameters with a preset algorithm according to the reconstructed values of the pixels adjacent to the current block and the reconstructed values of the pixels of the reference block;
determining the local illumination compensation parameters with a preset algorithm according to the reconstructed values of the pixels adjacent to the current block and the predicted values of the pixels of the reference block;
where the predicted values of the pixels of the reference block include one or more of the following:
the pixel values of the pixels in the prediction block obtained after motion compensation of the reference block; the pixel values of the pixels in the prediction block obtained after motion compensation and bidirectional optical flow BIO or bidirectional gradient correction BGC of the reference block.
With the technical solutions of the exemplary embodiments provided by the present disclosure, pixels adjacent to the reference block no longer need to be selected, reducing the content the cache needs to store and reducing the extra overhead and burden of the hardware.
Some embodiments of the present disclosure provide an inter prediction method. In an exemplary embodiment, as shown in FIG. 3, the method includes:
Step 301, obtaining pixels adjacent to the current block and pixels of the reference block of the current block;
In an exemplary embodiment, before step 301, the method further includes: obtaining the reference block of the current block. The method of determining the reference block differs according to the inter prediction mode. For example, in skip/merge mode the motion vector (MV) of the current block is derived from the MVs of neighboring blocks, thereby determining the reference block. As another example, in the normal inter prediction mode the MV of the current block is derived by a motion estimation algorithm, thereby determining the reference block.
In an exemplary embodiment, before step 301, the method further includes:
obtaining the encoded video bitstream and parsing the CU-level illumination compensation flag of the current image;
if the CU-level flag is the invalid value false, subsequent inter prediction modes do not need to perform local illumination compensation;
if the CU-level flag is the valid value true, step 301 is performed.
In an exemplary embodiment, the pixels adjacent to the current block include pixels in a row adjacent to the current block and/or pixels in a column adjacent to the current block; the pixels of the reference block include pixels in the top row of the reference block and/or pixels in the leftmost column of the reference block.
In an exemplary embodiment, the pixels in the row adjacent to the current block include all the pixels in the row adjacent above the current block; the pixels in the column adjacent to the current block include all the pixels in the column adjacent to the left of the current block;
the pixels in the top row of the reference block include all the pixels in the top row of the reference block; the pixels in the leftmost column of the reference block include all the pixels in the leftmost column of the reference block.
For example, as shown in FIG. 4a, the selected available pixels adjacent to the current block include all 8 pixels in the row adjacent above and all 4 pixels in the column adjacent to the left, and the selected available pixels of the reference block include all 8 pixels in its top row and all 4 pixels in its leftmost column (black circles in the figure).
In an exemplary embodiment, the pixels in the row adjacent to the current block include some of the pixels in the row adjacent above the current block; the pixels in the column adjacent to the current block include some of the pixels in the column adjacent to the left of the current block;
the pixels in the top row of the reference block include some of the pixels in the top row of the reference block; the pixels in the leftmost column of the reference block include some of the pixels in the leftmost column of the reference block;
where the partial pixels are selected by downsampling.
For example, as shown in FIG. 4b, the available pixels are selected by 1/2 downsampling (black circles in the figure).
In an exemplary embodiment, the pixels adjacent to the current block include pixels in a row adjacent to the current block and/or pixels in a column adjacent to the current block; the pixels of the reference block include all or some of the pixels in the top row of the reference block, and/or all or some of the pixels in a column of the reference block other than the leftmost column.
For example, as shown in FIG. 4c, the selected available pixels of the reference block are all the pixels in its top row and all the pixels in its second column from the left (black circles in the figure).
In an exemplary embodiment, the pixels adjacent to the current block include pixels in a row adjacent to the current block and/or pixels in a column adjacent to the current block; the pixels of the reference block include all or some of the pixels in a row of the reference block other than the top row, and/or all or some of the pixels in the leftmost column of the reference block.
For example, as shown in FIG. 4d, the selected available pixels of the reference block are all the pixels in its second row from the top and all the pixels in its leftmost column (black circles in the figure).
In an exemplary embodiment, the pixels adjacent to the current block include pixels in a row adjacent to the current block and/or pixels in a column adjacent to the current block; the pixels of the reference block include all or some of the pixels in a row of the reference block other than the top row, and/or all or some of the pixels in a column of the reference block other than the leftmost column.
For example, as shown in FIG. 4e, the selected available pixels of the reference block are all the pixels in its second row from the top and all the pixels in its second column from the left (black circles in the figure).
In an exemplary embodiment, the pixels in the row adjacent to the current block include some of the pixels in the row adjacent above the current block; the pixels in the column adjacent to the current block include some of the pixels in the column adjacent to the left of the current block;
the pixels in the top row of the reference block include some of the pixels in the top row of the reference block; the pixels in the leftmost column of the reference block include some of the pixels in the leftmost column of the reference block;
where the partial pixels are selected according to preset positions.
For example, as shown in FIG. 4f, the preset positions of the current block are the second and last pixels from left to right in the adjacent row and the second and last pixels from top to bottom in the adjacent column; the preset positions of the reference block are the second and last pixels from left to right in its top row and the second and last pixels from top to bottom in its leftmost column (black circles in the figure). The preset positions can be set in advance.
It should be noted that, in the examples provided by the embodiments of the present disclosure, the positions of the pixels selected inside the reference block are not specifically limited; FIGS. 4a, 4b, 4c, 4d, 4e and 4f above are merely illustrative.
It should be noted that the positions of the pixels selected for the current block and for the reference block need not correspond, but their numbers must be the same.
Step 302, determining the local illumination compensation parameters with a preset algorithm according to the reconstructed values of the pixels adjacent to the current block and the predicted or reconstructed values of the pixels of the reference block;
In an exemplary embodiment, when local illumination compensation is performed after motion compensation (MC), the predicted values of the pixels of the motion-compensated reference block are used as the predicted values of the pixels of the reference block to calculate the local illumination compensation parameters; when local illumination compensation is performed before motion compensation, the reconstructed values of the reference-block pixels in the reference image are used as the reconstructed values of the pixels of the reference block to calculate the local illumination compensation parameters.
In an exemplary embodiment, when local illumination compensation is performed after motion compensation (MC) but before bidirectional optical flow (BIO) and bidirectional gradient correction (BGC), the pixel values of the pixels in the prediction block obtained after motion compensation of the reference block are directly used as the predicted values of the pixels of the motion-compensated reference block; when local illumination compensation is performed after motion compensation (MC) and after BIO and BGC, the pixel values of the pixels in the prediction block obtained after BIO or BGC are used as the predicted values of the pixels of the motion-compensated reference block.
In an exemplary embodiment, the local illumination compensation parameters include a parameter a and a parameter b, where a is a scaling factor and b is an offset.
In an exemplary embodiment, the preset algorithm may be any existing algorithm for calculating linear model parameters, which is not limited herein. For example, the preset algorithm may be the least squares method. As another example, the two sets of data may be sorted and the maximum and minimum values derived by a certain method to calculate the linear model parameters a and b. As another example, after removing the largest and smallest pixels, the remaining data may be used to calculate the linear model parameters a and b, avoiding the influence of outliers. As another example, the two points with the smallest values and the two points with the largest values may be interpolated to obtain new maximum and minimum points, and the maximum and minimum values are then used by a certain method to calculate the linear model parameters a and b. And so on.
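One of the alternative preset algorithms above (deriving the model from the extreme points of the two sample sets) might look like the following sketch. The function name is hypothetical, and this simple version uses a single minimum and maximum; as the text notes, outlier removal or interpolating two extremes on each end is often preferred.

```python
def two_point_model(ref, rec):
    """Derive the scaling factor a and offset b from the extreme points of
    the reference samples and the co-located current-block neighbours."""
    lo, hi = min(ref), max(ref)
    if hi == lo:                      # degenerate case: flat reference samples
        return 1.0, sum(rec) / len(rec) - lo
    rec_lo = rec[ref.index(lo)]
    rec_hi = rec[ref.index(hi)]
    a = (rec_hi - rec_lo) / (hi - lo)
    b = rec_lo - a * lo
    return a, b

a, b = two_point_model([10, 20, 40], [25, 45, 85])  # samples follow rec = 2*ref + 5
```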
在本示例性实施例中,所述预设的算法为最小二乘法,所述局部光照补偿参数包括放缩系数a和偏移量b。计算公式如下:
$$a=\frac{2N\cdot\sum_{i=1}^{2N}Rec_{neig}(i)\cdot Ref_{inner}(i)-\sum_{i=1}^{2N}Rec_{neig}(i)\cdot\sum_{i=1}^{2N}Ref_{inner}(i)}{2N\cdot\sum_{i=1}^{2N}Ref_{inner}(i)^{2}-\left(\sum_{i=1}^{2N}Ref_{inner}(i)\right)^{2}}$$
$$b=\frac{\sum_{i=1}^{2N}Rec_{neig}(i)-a\cdot\sum_{i=1}^{2N}Ref_{inner}(i)}{2N}$$
where 2N denotes the number of pixels adjacent to the current block and, equally, the number of pixels of the reference block; Rec_neig denotes the reconstructed pixel values of the pixels adjacent to the current block; and Ref_inner denotes the predicted or reconstructed pixel values of the pixels of the reference block.
The predicted pixel values of the reference-block pixels are the pixel values of the pixels in the prediction block obtained after motion compensation of the reference block; or the pixel values of the pixels in the prediction block obtained after motion compensation of the reference block followed by bi-directional optical flow (BIO) or bi-directional gradient correction (BGC).
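A minimal floating-point sketch of the least-squares formulas above (the names rec_neig and ref_inner follow the text; a fixed-point, division-free derivation as used in real codec implementations is not shown):

```python
def lic_params_least_squares(rec_neig, ref_inner):
    """Least-squares scaling factor a and offset b such that
    rec ≈ a * ref + b over the 2N sample pairs."""
    assert len(rec_neig) == len(ref_inner)
    n = len(rec_neig)  # this is 2N in the text
    s_rec = sum(rec_neig)
    s_ref = sum(ref_inner)
    s_cross = sum(r * p for r, p in zip(rec_neig, ref_inner))
    s_ref2 = sum(p * p for p in ref_inner)
    denom = n * s_ref2 - s_ref * s_ref
    a = (n * s_cross - s_rec * s_ref) / denom if denom else 1.0
    b = (s_rec - a * s_ref) / n
    return a, b

# reconstructed neighbors are twice the reference pixels plus an offset of 3
a, b = lic_params_least_squares([5, 7, 9, 11], [1, 2, 3, 4])
print(a, b)  # 2.0 3.0
```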
In an exemplary embodiment, the predicted values of the reference-block pixels may first be pre-processed, and the pre-processed predicted values then used with the preset algorithm to determine the local illumination compensation parameters. The pre-processing includes one or more of the following operations: filtering, smoothing, denoising, and the like.
In an exemplary embodiment, the reconstructed values of the reference-block pixels may first be pre-processed, and the pre-processed reconstructed values then used with the preset algorithm to determine the local illumination compensation parameters. The pre-processing includes one or more of the following operations: filtering, smoothing, denoising, and the like.
Step 303: perform inter prediction on the current block according to the local illumination compensation parameters to obtain the predicted value of the current block.
In an exemplary embodiment, prediction based on local illumination compensation may be performed on the current block according to a linear model containing the local illumination compensation parameters, to obtain the predicted value of the current block.
The linear model may be any existing illumination compensation linear model, which is not limited here.
In this example, the formula of the linear model is CU_P(x, y) = a * CU_C(x, y) + b, where CU_P denotes the prediction block of the current block, CU_C denotes the current block, and (x, y) are position coordinates.
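The linear model CU_P(x, y) = a * CU_C(x, y) + b can be sketched as a per-pixel mapping; clipping the result to the 8-bit sample range is an added assumption here, since the text does not specify a clipping rule:

```python
def apply_lic(block, a, b, bit_depth=8):
    """Apply CU_P(x, y) = a * CU_C(x, y) + b to every pixel of a 2-D
    block, clipping to the sample range (clipping is an assumption)."""
    max_val = (1 << bit_depth) - 1
    return [[min(max_val, max(0, int(a * px + b))) for px in row]
            for row in block]

print(apply_lic([[100, 120], [140, 255]], a=1.5, b=-20))
# [[130, 160], [190, 255]]
```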
The inter prediction method in the exemplary embodiments provided by the present disclosure may be applied to an encoding device and may also be applied to a decoding device.
The encoding device and the decoding device may include any of a wide range of apparatuses, including desktop computers, notebook (e.g., laptop) computers, tablet computers, set-top boxes, telephone handsets, tablets, televisions, cameras, display devices, digital media players, video game consoles, video streaming devices, and so on. In some cases, the encoding device and the decoding device may be equipped for wireless or wired communication.
With the technical solutions of the exemplary embodiments provided by the present disclosure, pixels adjacent to the reference block no longer need to be selected, which reduces the content the cache needs to store and can reduce additional hardware overhead and burden.
After the image information is obtained and the available pixels are determined, the reference block may be further divided into a preset number of reference-block sub-blocks, and each sub-block extracts its preset pixels according to the illumination compensation mode index. In an exemplary embodiment, the available pixels further include the pixels in the top row and/or the left column of each reference-block sub-block, as well as the pixels in the row adjacent to the current block and/or the pixels in the column adjacent to the current block.
Performing illumination compensation on the reference block means applying a linear offset to the predicted pixels of each reference-block sub-block according to the linear-model parameters of that sub-block, yielding the predicted value of the current coding unit.
The reference block is thus divided into sub-blocks and linear offsets are applied; each sub-block may compute its own linear model, or a single linear model may be used for all sub-blocks.
In an exemplary embodiment, in the inter prediction process at the encoder side, if the IC enable flag is '1', all of the following steps are performed; if the IC enable flag is '0', only (a), (b) and (f) are performed.
(a) Inter prediction first traverses all candidate MVs and performs motion compensation, computes the motion-compensated predicted pixels for each MV, and computes the rate-distortion cost against the original pixels.
(b) Based on the minimum rate-distortion cost among all of the above MVs, the optimal MV and prediction mode (e.g., SKIP, MERGE/DIRECT or INTER) of the current coding unit are selected, and the optimal information and the corresponding rate-distortion cost information are recorded.
(c) All candidate MVs are traversed again, the illumination compensation technique is enabled, and the three illumination compensation modes are traversed. First, the reference block matched according to the prediction mode and the MV is taken as the initial prediction block, and motion compensation is performed on the initial prediction block to obtain the reference block. The reference block is divided into a preset number of sub-blocks; each sub-block extracts, according to the illumination compensation mode index, the left and/or top predicted pixels inside the sub-block and the left and/or top reconstructed pixels outside the block to be encoded in the current frame; the predicted pixels and reconstructed pixels are sorted and averaged (the specific operations being as described above) and substituted into the above formulas, yielding the linear-model parameters a and b for each sub-block.
In an exemplary embodiment, if the illumination compensation mode index is denoted IC_TL, as shown in Fig. 14a, the pixels in the adjacent row above (i.e., the top) and/or the pixels in the adjacent column (i.e., the left) of the reference-block sub-block and the current block may be used to compute the linear model; if the index is denoted IC_T, as shown in Fig. 14b, only the pixels in the adjacent row above the reference-block sub-block and the current block may be used; if the index is denoted IC_L, as shown in Fig. 14c, only the reconstructed pixels in the adjacent column to the left of the reference-block sub-block and the current block may be used. The index numbers serve only to distinguish the modes. In some embodiments, all sub-blocks may also use only the IC_TL model, in which case the encoder need not try the IC_T and IC_L models, and the decoder need not parse the model index.
(d) Illumination compensation is performed on the reference block, i.e., the predicted pixels inside each reference-block sub-block are linearly offset according to the linear-model parameters of that sub-block, yielding the final prediction block of the current coding unit.
(e) Rate-distortion cost information for each MV is computed from the final predicted pixels after illumination compensation and the original pixels; the current illumination compensation mode index, the MV index with the minimum rate-distortion cost, the corresponding prediction mode (e.g., SKIP, MERGE/DIRECT or INTER) and the corresponding cost value are recorded.
(f) If the illumination compensation enable flag is '0', the MV index and prediction mode recorded in (b) are transmitted to the decoder via the bitstream. If the enable flag is '1', the minimum cost recorded in (b) is compared with the minimum cost recorded in (e): if the rate-distortion cost in (b) is smaller, the MV index and prediction mode recorded in (b) are encoded as the optimal information of the current coding unit and transmitted to the decoder via the bitstream, and the illumination compensation flag of the current coding unit is set to false, indicating that the illumination compensation technique is not used, which is also transmitted via the bitstream; if the rate-distortion cost in (e) is smaller, the MV index, illumination compensation mode index and prediction mode recorded in (e) are encoded as the optimal information of the current coding unit and transmitted to the decoder via the bitstream, the illumination compensation flag of the current coding unit is set to true and the illumination compensation mode index of the current coding unit block is encoded, indicating that the illumination compensation technique is used, which is also transmitted via the bitstream. The syntax of the illumination compensation usage flags is shown in Table 1 or Table 2.
Afterwards, the final prediction block is superimposed with the inverse-transformed and inverse-quantized residual to obtain the reconstructed coding unit block, which serves as a prediction reference block for the next coding unit.
Exemplarily, the codeword transmission syntax in the illumination compensation process is shown in Table 1 or Table 2.
Table 1

IC_Index   IC_flag   IC_index0   IC_index1
No_IC      0         -           -
IC_TL      1         0           -
IC_T       1         1           0
IC_L       1         1           1
Table 2

IC_Index   IC_flag   IC_index0   IC_index1
No_IC      0         -           -
IC_TL      1         1           -
IC_T       1         0           0
IC_L       1         0           1
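A sketch of how a decoder would map the flag bits of Table 1 to an IC mode; entropy decoding is idealized here as a list of already-decoded binary flags:

```python
def parse_ic_mode_table1(bits):
    """Decode the IC mode from the Table 1 flag bits:
    IC_flag, then IC_index0, then IC_index1 as needed."""
    it = iter(bits)
    if next(it) == 0:                            # IC_flag
        return "No_IC"
    if next(it) == 0:                            # IC_index0
        return "IC_TL"
    return "IC_L" if next(it) == 1 else "IC_T"   # IC_index1

print([parse_ic_mode_table1(b) for b in ([0], [1, 0], [1, 1, 0], [1, 1, 1])])
# ['No_IC', 'IC_TL', 'IC_T', 'IC_L']
```

Table 2 differs only in the bit values assigned to each mode, not in the parsing structure.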
In an exemplary embodiment, when the reference block is divided into four sub-blocks, sampling diagrams for each of the above illumination compensation models are shown in Figs. 15a, 15b and 15c; within each sub-block, the available pixels of the reference-block sub-block and the adjacent pixels of the current block at the corresponding coordinates are taken to compute the linear model of that sub-block.
In an exemplary embodiment, the adjacent pixels of the current block and the pixels of the reference block may also be obtained as shown in Fig. 16.
In some embodiments of the present disclosure, the inter prediction part at the encoder side may alternatively be implemented as follows:
The encoder obtains the coding information, including the enable flag of the inter-prediction illumination compensation (IC) technique, etc. After obtaining the image information, the image is divided into several CTUs, which are further divided into several CUs; inter prediction is performed on each independent CU, and a minimum area and a maximum area may be imposed on the current CU for applying the IC technique.
In the inter prediction process at the encoder side, if the IC enable flag is '1', all of the following steps are performed; if the IC enable flag is '0', only a), b) and f) are performed:
a) Inter prediction first traverses all candidate MVs and performs motion compensation, computes the motion-compensated predicted pixels for each MV, and computes the rate-distortion cost against the original pixels.
b) Based on the minimum rate-distortion cost among all of the above MVs, the optimal MV and prediction mode (e.g., SKIP, MERGE/DIRECT or INTER) of the current coding unit are selected, and the optimal information and the corresponding rate-distortion cost information are recorded.
c) All candidate MVs are traversed again and motion compensation is performed to obtain the intermediate prediction blocks:
If the current inter prediction mode is the normal inter prediction mode (INTER), the illumination compensation technique is enabled and the three illumination compensation modes are traversed.
The intermediate prediction block is divided into a preset number of sub-blocks; each sub-block extracts, according to the illumination compensation mode index, the left and/or top predicted pixels inside the sub-block and the left and/or top reconstructed pixels outside the block to be encoded in the current frame; the predicted pixels and reconstructed pixels are sorted and averaged (the specific operations being as described above) and substituted into the above formulas, yielding the linear-model parameters a and b for each sub-block.
Illumination compensation mode index 1, denoted IC_TL, may use the left and top reconstructed pixels of the reference block and the current block to compute the linear model; mode index 2, denoted IC_T, may use only the top reconstructed pixels of the reference block and the current block; mode index 3, denoted IC_L, may use only the left reconstructed pixels of the reference block and the current block.
If the current inter prediction mode is skip mode (SKIP) or merge mode (MERGE/DIRECT), the IC usage flags and IC mode indices of the available neighboring blocks of the current coding unit are obtained. If none of the neighboring blocks in the current frame uses the IC technique, the current coding block does not use it either; if a neighboring reconstructed block using the IC technique exists, the IC usage flag and IC mode index are obtained in a specific order as the IC usage information of the current coding unit, the specific order being, for example, top, left, top-right, top-left, bottom-left. After the IC usage information is obtained, the intermediate prediction block is divided into a preset number of sub-blocks, and the scaling factor a and the offset factor b are derived for each sub-block.
d) Illumination compensation is performed on each sub-block of the intermediate prediction block, i.e., the pixels in each prediction block are linearly offset according to the corresponding linear-model parameters, yielding the final prediction block of the current coding unit.
e) Rate-distortion cost information for each MV is computed from the final predicted pixels after illumination compensation and the original pixels; the current illumination compensation mode index, the MV index with the minimum rate-distortion cost, the corresponding prediction mode (e.g., SKIP, MERGE/DIRECT or INTER) and the corresponding cost value are recorded.
f) If the illumination compensation enable flag is '0', the MV index and prediction mode recorded in b) are transmitted to the decoder via the bitstream.
If the illumination compensation enable flag is '1', the minimum cost recorded in b) is compared with the minimum cost recorded in e):
If the rate-distortion cost in b) is smaller, the MV index and prediction mode recorded in b) are encoded as the optimal information of the current coding unit and transmitted to the decoder via the bitstream. If the current inter prediction mode is the normal inter prediction mode (INTER), the illumination compensation flag of the current coding unit is set to '0', indicating that the illumination compensation technique is not used, which is also transmitted via the bitstream; otherwise, neither the illumination compensation usage flag nor the mode index is transmitted.
If the rate-distortion cost in e) is smaller: for the normal inter prediction mode (INTER), the MV index, illumination compensation mode index and prediction mode recorded in e) are encoded as the optimal information of the current coding unit and transmitted to the decoder via the bitstream, the illumination compensation flag of the current coding unit is set to '1' and the illumination compensation mode index of the current coding unit block is encoded, indicating that the illumination compensation technique is used, which is also transmitted via the bitstream; for skip mode (SKIP) or merge mode (MERGE), only the corresponding MV index and the corresponding inter prediction mode need to be encoded into the bitstream and transmitted to the decoder. The syntax of the illumination compensation usage flags is shown in Table 1 or Table 2.
Afterwards, the final prediction block is superimposed with the inverse-transformed and inverse-quantized residual to obtain the reconstructed coding unit block, which serves as a prediction reference block for the next coding unit.
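The neighbor scan that inherits IC usage information in the SKIP/MERGE case of step c) above (order: top, left, top-right, top-left, bottom-left) can be sketched as follows; representing neighbors as a dictionary of position to (ic_used, ic_mode_index) is an assumption for illustration:

```python
SCAN_ORDER = ("top", "left", "top-right", "top-left", "bottom-left")

def inherit_ic_info(neighbors):
    """Return the (ic_used, ic_mode_index) of the first neighbor, in
    scan order, that uses IC; otherwise report that IC is not used."""
    for pos in SCAN_ORDER:
        info = neighbors.get(pos)
        if info is not None and info[0]:
            return info
    return (False, None)  # no neighboring block uses the IC technique

print(inherit_ic_info({"top": (False, None), "left": (True, 2)}))
# (True, 2)
```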
Some embodiments of the present disclosure provide an inter prediction apparatus. In an exemplary embodiment, as shown in Fig. 5, the apparatus includes a processor and a memory storing a computer program runnable on the processor, wherein the processor, when executing the computer program, implements the inter prediction method of any of the above examples.
Some embodiments of the present disclosure provide a video encoding method. In an exemplary embodiment, as shown in Fig. 6, the method includes:
Step 601: perform inter prediction on the current block according to any of the above inter prediction methods to obtain the predicted value of the current block;
Step 602: encode the current block according to the predicted value of the current block.
In an exemplary embodiment, the encoding of the current block may be performed by any existing encoding method, which is not limited here. For example, the current block may be encoded as follows: subtract the prediction block from the current block to form a residual block; transform and quantize the residual block; after quantization, perform entropy coding to obtain the encoded data.
In an exemplary embodiment, during encoding, the method further includes:
after performing local illumination compensation on the current block among the candidate modes of merge mode, obtaining a new set of prediction modes, in which the CU-level illumination compensation flags are all set to the valid value true;
selecting N sets of modes, N ≥ 1, among the prediction modes whose CU-level illumination compensation flag is the valid value true, for rate-distortion cost computation;
selecting the prediction mode of the current block according to the result of the rate-distortion cost computation, and determining, according to the prediction mode of the current block, whether local illumination compensation is performed on the current block.
In an exemplary embodiment, N is 6. For example, among the prediction modes whose CU-level illumination compensation flag is true, 6 sets of modes are coarsely selected to enter the rate-distortion optimization (RDO) process; the optimal prediction mode of the current block is finally selected through the encoder-side rate-distortion cost computation, thereby determining whether the current block uses the illumination compensation technique. The technical solutions of the exemplary embodiments provided by the present disclosure can reduce additional hardware overhead and burden in encoding, facilitate hardware implementation of the encoder, and bring a significant performance gain to existing codec standards.
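The coarse selection of N = 6 candidates for the RDO stage can be sketched as a simple lowest-cost selection (the mode names and cost values below are illustrative placeholders, not values from the text):

```python
def coarse_select(mode_costs, n=6):
    """Keep the n candidate modes with the smallest rate-distortion cost.
    mode_costs is a list of (mode_name, rd_cost) pairs."""
    return sorted(mode_costs, key=lambda mc: mc[1])[:n]

candidates = list(zip("abcdefgh", [9.1, 3.2, 7.7, 1.5, 8.8, 2.4, 6.0, 5.1]))
print([name for name, _ in coarse_select(candidates)])
# ['d', 'f', 'b', 'h', 'g', 'c']
```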
Some embodiments of the present disclosure provide a video encoding method. In an exemplary embodiment, as shown in Fig. 7, the method includes:
Step 701: obtain the bitstream and determine that the prediction mode of the current block is the inter prediction mode;
Step 702: determine that the coding mode of the current block is merge mode;
wherein inter prediction includes two prediction modes, namely merge mode and normal mode.
Step 703: derive the motion information list;
Step 704: construct the merge-mode prediction information list according to the motion information list;
In an exemplary embodiment, the motion information list includes one or more of the following: temporal motion information, spatial motion information, HMVP (History-based Motion Vector Prediction) motion information, and UMVE (Ultimate Motion Vector Expression) motion information.
Step 705: in the process of traversing the motion information list to construct the prediction information, obtain the reference block according to the motion information and the reference picture, and perform motion compensation (MC) to obtain the prediction block;
Step 706: perform local illumination compensation on the motion-compensated prediction block;
In an exemplary embodiment, performing local illumination compensation on the motion-compensated prediction block includes:
determining the local illumination compensation parameters using a preset algorithm, according to the reconstructed values of the pixels adjacent to the current block and the motion-compensated predicted values of the pixels of the reference block;
predicting the current block according to a linear model containing the local illumination compensation parameters;
wherein the motion-compensated predicted values of the reference-block pixels are the pixel values of the pixels in the prediction block obtained after motion compensation of the reference block.
In an exemplary embodiment, if the current block is in bi-directional prediction mode, illumination compensation may also be performed separately on the prediction block of each direction, or on the prediction block obtained by combining the prediction blocks of the two directions.
Step 707: perform BIO, BGC and interPF (inter prediction filter) processing on the prediction block after local illumination compensation;
Step 708: add the inter coding information using the illumination compensation technique to the prediction information list;
Step 709: determine through the RDO process whether the current block uses the local illumination compensation technique;
In an exemplary embodiment, if the local illumination compensation technique is used, the CU-level illumination compensation flag of the current block is set to true; otherwise, it is set to false.
Step 710: encode the current block.
In another embodiment of the present disclosure, the difference from the encoding method shown in Fig. 7 is that local illumination compensation is performed on the reference block before motion compensation, and motion compensation is then performed on the reference block after local illumination compensation.
In an exemplary embodiment, performing local illumination compensation on the reference block before motion compensation includes:
determining the local illumination compensation parameters using a preset algorithm, according to the reconstructed values of the pixels adjacent to the current block and the reconstructed values of the reference-block pixels;
predicting the current block according to the local illumination compensation parameters to obtain the predicted value of the current block.
In another embodiment of the present disclosure, the difference from the encoding method shown in Fig. 7 is that, after motion compensation and after BIO, BGC and interPF processing of the motion-compensated prediction block, local illumination compensation is performed on the prediction block processed by BIO, BGC and interPF.
In an exemplary embodiment, performing local illumination compensation on the prediction block processed by BIO, BGC and interPF includes:
determining the local illumination compensation parameters using a preset algorithm, according to the reconstructed values of the pixels adjacent to the current block and the motion-compensated predicted values of the pixels of the reference block;
predicting the current block according to a linear model containing the local illumination compensation parameters;
wherein the motion-compensated predicted values of the reference-block pixels are the pixel values of the pixels in the prediction block obtained after BIO or BGC.
Some embodiments of the present disclosure provide a video encoding apparatus. In an exemplary embodiment, as shown in Fig. 8, the apparatus includes a processor and a memory storing a computer program runnable on the processor, wherein the processor, when executing the computer program, implements the video encoding method of any of the above examples.
Some embodiments of the present disclosure provide a video decoding method. In an exemplary embodiment, as shown in Fig. 9, the method includes:
Step 901: when local illumination compensation is enabled, perform inter prediction on the current block according to any of the above inter prediction methods to obtain the predicted value of the current block;
Step 902: decode the current block according to the predicted value of the current block.
In an exemplary embodiment, the decoding of the current block may be performed by any existing decoding method, which is not limited here. For example, the current block may be decoded as follows: perform entropy decoding on the current block; after entropy decoding, perform inverse quantization and inverse transform to obtain the residual block; add the prediction block and the residual block to reconstruct the decoded data.
In an exemplary embodiment, before step 901, the method further includes:
obtaining the encoded video bitstream and parsing the CU-level illumination compensation flag of the current picture;
if the CU-level flag is the invalid value false, subsequent inter prediction modes need not perform local illumination compensation, i.e., local illumination compensation is not enabled;
if the CU-level flag is the valid value true, local illumination compensation is enabled.
The technical solutions of the exemplary embodiments provided by the present disclosure can reduce additional hardware overhead and burden in decoding, facilitate hardware implementation of the decoder, and bring a significant performance gain to existing codec standards.
Some embodiments of the present disclosure provide a video decoding method. In an exemplary embodiment, as shown in Fig. 10, the method includes:
Step 1001: parse the bitstream to obtain the frame type of the current picture, and determine that the frame type of the current picture is a P frame or a B frame;
Step 1002: parse the bitstream to obtain the coding mode of the current block, and determine that the coding mode of the current block is merge mode;
Step 1003: parse the bitstream to obtain the motion information of the current block;
Step 1004: parse the bitstream to obtain the CU-level illumination compensation flag of the current block;
Step 1005: determine the reference block according to the motion information and the reference picture;
Step 1006: perform motion compensation on the reference block to obtain the prediction block;
Step 1007: if the illumination compensation flag of the current block is true, perform local illumination compensation on the motion-compensated prediction block; if the illumination compensation flag of the current block is false, do not perform local illumination compensation on the motion-compensated prediction block;
In an exemplary embodiment, performing local illumination compensation on the motion-compensated prediction block includes:
determining the local illumination compensation parameters using a preset algorithm, according to the reconstructed values of the pixels adjacent to the current block and the motion-compensated predicted values of the pixels of the reference block;
predicting the current block according to the linear model with the local illumination compensation parameters, to obtain the predicted value of the current block;
wherein the motion-compensated predicted values of the reference-block pixels are the pixel values of the pixels in the prediction block obtained after motion compensation of the reference block.
In an exemplary embodiment, if the current block is in bi-directional prediction mode, illumination compensation may also be performed separately on the prediction block of each direction, or on the prediction block obtained by combining the prediction blocks of the two directions.
Step 1008: perform BIO, BGC and interPF processing on the prediction block;
Step 1009: decode to obtain the current block.
In another embodiment of the present disclosure, the difference from the decoding method shown in Fig. 10 is that local illumination compensation is performed on the reference block before motion compensation, and motion compensation is then performed on the reference block after local illumination compensation.
In an exemplary embodiment, performing local illumination compensation on the reference block before motion compensation includes:
determining the local illumination compensation parameters using a preset algorithm, according to the reconstructed values of the pixels adjacent to the current block and the reconstructed values of the reference-block pixels;
predicting the current block according to the local illumination compensation parameters, to obtain the predicted value of the current block.
In another embodiment of the present disclosure, the difference from the decoding method shown in Fig. 10 is that, after motion compensation and after BIO, BGC and interPF processing of the motion-compensated prediction block, local illumination compensation is performed on the prediction block processed by BIO, BGC and interPF.
In an exemplary embodiment, performing local illumination compensation on the prediction block processed by BIO, BGC and interPF includes:
determining the local illumination compensation parameters using a preset algorithm, according to the reconstructed values of the pixels adjacent to the current block and the motion-compensated predicted values of the pixels of the reference block;
predicting the current block according to the local illumination compensation parameters, to obtain the predicted value of the current block;
wherein the motion-compensated predicted values of the reference-block pixels are the pixel values of the pixels in the prediction block obtained after BIO or BGC.
Some embodiments of the present disclosure provide a video decoding apparatus. In an exemplary embodiment, as shown in Fig. 11, the apparatus includes a processor and a memory storing a computer program runnable on the processor, wherein the processor, when executing the computer program, implements the video decoding method of any of the above examples.
Some embodiments of the present disclosure further provide a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the inter prediction method of any of the above examples, or the video encoding method of any of the above examples, or the video decoding method of any of the above examples.
In an embodiment of the present disclosure, the video encoding method of any of the above examples may be performed by a video encoder.
In an exemplary embodiment, Fig. 12 is a structural block diagram of a video encoder. As shown in Fig. 12, the video encoder 20 includes a video data memory 33, a partitioning unit 35, a prediction processing unit 41, a summer 50, a transform processing unit 52, a quantization unit 54 and an entropy encoding unit 56. The prediction processing unit 41 includes a motion estimation unit (MEU) 42, a motion compensation unit (MCU) 44, an intra prediction processing unit 46 and an intra block copy (IBC) unit 48. Although shown separately in Fig. 12 for ease of explanation, it should be understood that the MEU 42, the MCU 44, the intra prediction processing unit 46 and the IBC unit 48 may in fact be highly integrated. For video block reconstruction, the video encoder 20 further includes an inverse quantization unit 58, an inverse transform processing unit 60, a summer 62, a filter unit 64 and a decoded picture buffer (DPB) 66.
In some embodiments of the present disclosure, the inter prediction part at the decoder side may be implemented as follows:
The decoder obtains the bitstream and parses it to obtain the illumination compensation (IC) enable flag of the current video sequence.
In the inter prediction decoding process, if the IC enable flag is '1', all of the following steps are performed; if the IC enable flag is '0', only steps a), b), d) and f) are performed:
a) Obtain the bitstream and decode it to obtain the residual information; obtain the time-domain residual information through inverse transform, inverse quantization and other processes.
b) Parse the bitstream to obtain the inter prediction mode and the MV index of the current decoded block.
c) Parse the bitstream to obtain the IC usage flag of the current decoded block. Taking the syntax of Table 1 as an example: if the IC usage flag of the current decoded block is true, continue parsing the bitstream to obtain the first illumination-compensation-mode-index flag bit of the current decoded block; otherwise, the current decoding unit does not use the illumination compensation technique. If the first mode-index flag bit is true, continue parsing the second mode-index flag bit; otherwise, set the illumination compensation mode index to 1, indicating that the first illumination compensation linear model IC_TL is used (the top and left reconstructed pixels may be used for the linear-model computation). If the parsed second mode-index flag bit is true, set the illumination compensation mode index to 3, indicating that the third illumination compensation linear model IC_L is used (only the left reconstructed pixels may be used for the linear-model computation); otherwise, set the index to 2, indicating that the second illumination compensation linear model IC_T is used (only the top reconstructed pixels may be used for the linear-model computation).
d) Obtain the reference block in the reference frame according to the MV index, the inter prediction mode and other information as the initial prediction block, and perform motion compensation on the initial prediction block to obtain the intermediate prediction block.
e) If the IC usage flag is not false, i.e., illumination compensation needs to be performed on the current intermediate prediction block, then:
divide the current intermediate prediction block into a preset number of sub-blocks (i.e., reference-block sub-blocks); the pixel selection may refer to Figs. 15a, 15b and 15c. Each sub-block extracts, according to the illumination compensation mode index, the left and/or top predicted pixels inside the sub-block and the left and/or top reconstructed pixels outside the block to be encoded in the current frame; the predicted pixels and reconstructed pixels are sorted and averaged (the specific operations being as described above) and substituted into the above formulas, yielding the linear-model parameters a and b for each sub-block. Each sub-block linearly offsets all of its predicted pixels according to its scaling factor a and offset factor b, yielding the final predicted values;
if the IC usage flag is false, the intermediate prediction block is taken as the final prediction block.
f) Superimpose the final prediction block with the residual information restored in step a) to obtain the reconstructed block of the current decoding unit, which is output after post-processing.
In the above illumination compensation technique, the positions and the number of reconstructed pixels obtained from the reference frame may be any positions within the reference block of the reference frame and any integer number greater than 0, e.g., the first row and/or first column within the reference block pointed to by the motion vector.
In the above computation of the linear-model parameters, all pixel values may be sorted, the maximum and the minimum removed, and the average of the two largest values and the average of the two smallest values computed, consistent with the foregoing; alternatively, points may be taken at intervals, e.g., the first point and the second-to-last point on the top side and the first point and the second-to-last point on the left side, which are sorted to compute the average of the two larger values and the average of the two smaller values, after which the computation steps are consistent with the foregoing.
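One plausible reading of the sort-and-average derivation above: sort the sample pairs by reference value, drop the extreme pair at each end, average the next two pairs at each end, and fit a line through the two averaged points. The pairing and the exact trimming rule are assumptions for illustration, since the text describes the procedure only in prose:

```python
def lic_params_minmax(rec, ref):
    """Max/min-pair model fit: pairs sorted by reference value, extreme
    pair at each end dropped, next two pairs at each end averaged, line
    fitted through the two averaged points (interpretation is assumed)."""
    pairs = sorted(zip(ref, rec))[1:-1]  # drop the min and max pairs
    lo, hi = pairs[:2], pairs[-2:]
    ref_lo = sum(p[0] for p in lo) / 2
    rec_lo = sum(p[1] for p in lo) / 2
    ref_hi = sum(p[0] for p in hi) / 2
    rec_hi = sum(p[1] for p in hi) / 2
    a = (rec_hi - rec_lo) / (ref_hi - ref_lo) if ref_hi != ref_lo else 1.0
    b = rec_lo - a * ref_lo
    return a, b

# reconstructed samples are twice the reference samples plus 3
a, b = lic_params_minmax([7, 11, 15, 19, 23, 27], [2, 4, 6, 8, 10, 12])
print(a, b)  # 2.0 3.0
```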
The above illumination compensation technique may act at any position relative to other techniques such as bi-directional optical flow (BDOF/BIO), decoder-side motion vector refinement (DMVR), bi-directional weighted prediction (BCW/BGC), inter prediction filtering (INTERPF) or combined inter-intra prediction (CIIP); for example, the illumination compensation technique may act before the bi-directional optical flow technique and the bi-directional weighted prediction technique;
the above illumination compensation technique need not act jointly with other techniques on the same coding block; for example, if the current block uses the IC technique, BDOF/BIO correction compensation is no longer applied to the current block;
the above illumination compensation technique requires recomputing the linear model for different color components; e.g., in the YUV color space, the linear models corresponding to the three color components must be computed separately for Y, U and V.
In some embodiments of the present disclosure, the inter prediction part at the decoder side may alternatively be implemented as follows:
The decoder obtains the bitstream and parses it to obtain the illumination compensation (IC) enable flag of the current video sequence.
In the inter prediction decoding process, if the IC enable flag is '1', all of the following steps are performed; if the IC enable flag is '0', only steps a), b), d) and f) are performed:
a) Obtain the bitstream and decode it to obtain the residual information; obtain the time-domain residual information through inverse transform, inverse quantization and other processes.
b) Parse the bitstream to obtain the inter prediction mode and the MV index of the current decoded block.
c) If the inter prediction mode of the current decoded block is the normal inter prediction mode (INTER), parse the bitstream to obtain the IC usage flag of the current decoded block. If the IC usage flag of the current decoded block is true, continue parsing the bitstream to obtain the first illumination-compensation-mode-index flag bit of the current decoded block; otherwise, the current decoding unit does not use the illumination compensation technique. If the first mode-index flag bit is true, continue parsing the second mode-index flag bit; otherwise, set the illumination compensation mode index to 1, indicating that the first illumination compensation linear model IC_TL is used (the top and left reconstructed pixels may be used for the linear-model computation). If the parsed second mode-index flag bit is true, set the illumination compensation mode index to 3, indicating that the third illumination compensation linear model IC_L is used (only the left reconstructed pixels may be used for the linear-model computation); otherwise, set the index to 2, indicating that the second illumination compensation linear model IC_T is used (only the top reconstructed pixels may be used for the linear-model computation).
If the inter prediction mode of the current decoded block is skip mode (SKIP) or merge mode (MERGE/DIRECT), the IC technique information of the neighboring reconstructed blocks is obtained, in an order consistent with the encoder side, as the IC technique usage information of the current decoding unit.
d) Obtain the initial prediction block according to the prediction mode and the MV index, and perform motion compensation on the initial prediction block to obtain the intermediate prediction block.
e) If the IC usage flag of the current decoded block is not false, then:
divide the intermediate prediction block into a preset number of sub-blocks; each sub-block, according to the parsed illumination compensation mode index, obtains the predicted pixels inside the sub-block at the corresponding positions and the reconstructed pixels outside the coding unit in the current frame to compute the linear-model parameters, yielding the scaling factor a and the offset factor b of each sub-block; each sub-block linearly offsets all of its predicted pixels according to the derived scaling factor a and offset factor b, yielding the final prediction block;
if the IC usage flag of the current decoded block is false, the intermediate prediction block is taken as the final prediction block.
f) Superimpose the final prediction block with the residual information restored in step a) to obtain the reconstructed block of the current decoding unit, which is output after post-processing.
In the above IC information derivation for SKIP and MERGE/DIRECT, the IC mode index may also be bound to the referenced neighboring block: if the referenced MV comes from the neighboring block above, the current prediction block uses the IC_T illumination compensation mode; if the referenced MV comes from the neighboring block to the left, the current prediction block uses the IC_L illumination compensation mode; otherwise, the IC_TL illumination compensation mode is used.
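The binding of the IC mode index to the origin of the referenced MV described above can be sketched as:

```python
def derive_ic_mode(mv_source):
    """Map the origin of the referenced MV to an IC mode: above -> IC_T,
    left -> IC_L, any other origin -> IC_TL."""
    if mv_source == "above":
        return "IC_T"
    if mv_source == "left":
        return "IC_L"
    return "IC_TL"

print([derive_ic_mode(s) for s in ("above", "left", "temporal")])
# ['IC_T', 'IC_L', 'IC_TL']
```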
In an embodiment of the present disclosure, the video decoding method of any of the above examples may be performed by a video decoder.
In an exemplary embodiment, Fig. 13 is a structural block diagram of a video decoder. As shown in Fig. 13, the video decoder 30 includes a video data memory 78, an entropy decoding unit 80, a prediction processing unit 81, an inverse quantization unit 86, an inverse transform processing unit 88, a summer 90, a filter unit 92 and a DPB 94. The prediction processing unit 81 includes an MCU 82, an intra prediction processing unit 84 and an IBC unit 85. In some examples, the video decoder 30 may perform a decoding process substantially inverse to the encoding process described with respect to the video encoder 20 of Fig. 12.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, and the functional modules/units of the systems and apparatuses, disclosed above may be implemented as software, firmware, hardware and appropriate combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information (such as computer-readable instructions, data structures, program modules or other data). Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

Claims (15)

  1. An inter prediction method, comprising:
    determining local illumination compensation parameters according to acquired available pixels, the available pixels comprising pixels adjacent to a current block and/or pixels of a reference block of the current block; and
    performing inter prediction on the current block according to the local illumination compensation parameters to obtain a predicted value of the current block.
  2. The inter prediction method according to claim 1, wherein the pixels adjacent to the current block comprise pixels in a row adjacent to the current block and/or pixels in a column adjacent to the current block.
  3. The inter prediction method according to claim 1, wherein
    the pixels of the reference block comprise pixels in a top row of the reference block and/or pixels in a left column of the reference block.
  4. The inter prediction method according to claim 1, wherein the pixels of the reference block comprise pixels in a row of the reference block other than the top row and/or pixels in the left column of the reference block.
  5. The inter prediction method according to claim 1, wherein the pixels of the reference block comprise pixels in the top row of the reference block and/or pixels in a column of the reference block other than the left column.
  6. The inter prediction method according to claim 1, wherein the pixels of the reference block comprise pixels in a row of the reference block other than the top row and/or pixels in a column of the reference block other than the left column.
  7. The inter prediction method according to any one of claims 2 to 6, wherein
    the pixels in a row comprise all or some of the pixels in that row; the pixels in a column comprise all or some of the pixels in that column;
    wherein said some of the pixels are selected by downsampling, or selected at preset positions.
  8. The inter prediction method according to claim 1, wherein the local illumination compensation parameters are determined in one or more of the following ways:
    determining the local illumination compensation parameters using a preset algorithm, according to reconstructed values of the pixels adjacent to the current block and reconstructed values of the pixels of the reference block;
    determining the local illumination compensation parameters using a preset algorithm, according to reconstructed values of the pixels adjacent to the current block and predicted values of the pixels of the reference block;
    wherein the predicted values of the pixels of the reference block comprise one or more of the following:
    pixel values of pixels in a prediction block obtained after motion compensation of the reference block; pixel values of pixels in a prediction block obtained after motion compensation of the reference block followed by bi-directional optical flow (BIO) or bi-directional gradient correction (BGC).
  9. An inter prediction apparatus, comprising: a processor and a memory storing a computer program runnable on the processor, wherein the processor, when executing the computer program, implements the inter prediction method according to any one of claims 1 to 8.
  10. A video encoding method, comprising:
    performing inter prediction on a current block according to the inter prediction method of any one of claims 1 to 8 to obtain a predicted value of the current block; and
    encoding the current block according to the predicted value of the current block.
  11. The video encoding method according to claim 10, wherein, during encoding, the method further comprises:
    after performing local illumination compensation on the current block among candidate modes of merge mode, obtaining a new set of prediction modes, in which coding unit (CU)-level illumination compensation flags are all set to a valid value;
    selecting N sets of modes, N ≥ 1, among the prediction modes whose CU-level illumination compensation flag is the valid value, for rate-distortion cost computation; and
    selecting a prediction mode of the current block according to a result of the rate-distortion cost computation, and determining, according to the prediction mode of the current block, whether local illumination compensation is performed on the current block.
  12. A video encoding apparatus, comprising: a processor and a memory storing a computer program runnable on the processor, wherein the processor, when executing the computer program, implements the video encoding method according to any one of claims 10 to 11.
  13. A video decoding method, comprising:
    when local illumination compensation is enabled, performing inter prediction on a current block according to the inter prediction method of any one of claims 1 to 8 to obtain a predicted value of the current block; and
    decoding the current block according to the predicted value of the current block.
  14. A video decoding apparatus, comprising: a processor and a memory storing a computer program runnable on the processor, wherein the processor, when executing the computer program, implements the video decoding method according to claim 13.
  15. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the inter prediction method according to any one of claims 1 to 8, or the video encoding method according to any one of claims 10 to 11, or the video decoding method according to claim 13.
PCT/CN2020/135490 2020-12-03 2020-12-10 Inter prediction method, video encoding and decoding method, apparatus and medium WO2022116246A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202311171521.9A CN117221534A (zh) 2020-12-03 2020-12-10 Inter prediction method, video encoding and decoding method, apparatus and medium
MX2023006442A MX2023006442A (es) 2020-12-03 2020-12-10 Inter-frame prediction method, video encoding and decoding method, apparatus and medium.
CN202080107715.0A CN116569554A (zh) 2020-12-03 2020-12-10 Inter prediction method, video encoding and decoding method, apparatus and medium

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CNPCT/CN2020/133693 2020-12-03
CN2020133693 2020-12-03
PCT/CN2020/133709 WO2022116119A1 (zh) 2020-12-03 2020-12-03 Inter prediction method, encoder, decoder and storage medium
CNPCT/CN2020/133709 2020-12-03

Publications (1)

Publication Number Publication Date
WO2022116246A1 true WO2022116246A1 (zh) 2022-06-09

Family

ID=81853735

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/135490 WO2022116246A1 (zh) 2020-12-03 2020-12-10 Inter prediction method, video encoding and decoding method, apparatus and medium

Country Status (3)

Country Link
CN (2) CN116569554A (zh)
MX (1) MX2023006442A (zh)
WO (1) WO2022116246A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116128759A (zh) * 2023-02-08 2023-05-16 爱芯元智半导体(上海)有限公司 Image illumination compensation method and apparatus
WO2023245349A1 (zh) * 2022-06-20 2023-12-28 Oppo广东移动通信有限公司 Local illumination compensation method, video encoding and decoding method, apparatus and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107147911A (zh) * 2017-07-05 2017-09-08 中南大学 Fast inter coding mode selection method and apparatus based on local luma compensation (LIC)
US20190313104A1 (en) * 2018-04-06 2019-10-10 Arris Enterprises Llc System and Method of Implementing Multiple Prediction Models for Local Illumination Compensation
CN110574377A (zh) * 2017-05-10 2019-12-13 联发科技股份有限公司 Method and apparatus of reordering motion vector prediction candidate set for video coding
CN110944202A (zh) * 2018-09-24 2020-03-31 腾讯美国有限责任公司 Video encoding and decoding method and apparatus, computer device and storage medium
CN111031319A (zh) * 2019-12-13 2020-04-17 浙江大华技术股份有限公司 Local illumination compensation prediction method, terminal device and computer storage medium
CN111526362A (zh) * 2019-02-01 2020-08-11 华为技术有限公司 Inter prediction method and apparatus


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIANLE CHEN , ELENA ALSHINA, GARY J. SULLIVAN , JENS-RAINER , JILL BOYC: "Algorithm Description of Joint Exploration Test Model 4", (JVET) ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, DOCUMENT: JVET-D1001_V3, LONDON, 15 October 2016 (2016-10-15), London , pages 1 - 39, XP055517214 *


Also Published As

Publication number Publication date
MX2023006442A (es) 2023-06-15
CN117221534A (zh) 2023-12-12
CN116569554A (zh) 2023-08-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20964091

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202080107715.0

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20964091

Country of ref document: EP

Kind code of ref document: A1