CN113709501A - Encoding and decoding method, device and equipment - Google Patents
- Publication number
- CN113709501A (application CN202111155058.XA)
- Authority
- CN
- China
- Prior art keywords
- value
- current block
- prediction
- motion information
- reference weight
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/184—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The present application provides an encoding and decoding method, apparatus, and device. The method includes: when it is determined that weighted prediction is enabled for a current block, obtaining a weight prediction angle of the current block; for each pixel position of the current block, determining, according to the weight prediction angle, a peripheral matching position pointed to by the pixel position from among peripheral positions outside the current block, determining a target weight value of the pixel position according to a reference weight value associated with the peripheral matching position, and determining an associated weight value of the pixel position according to the target weight value; determining a first predicted value of the pixel position according to a first prediction mode, determining a second predicted value according to a second prediction mode, and determining a weighted predicted value of the pixel position according to the first predicted value, the target weight value, the second predicted value, and the associated weight value; and determining the weighted predicted value of the current block according to the weighted predicted values of all pixel positions. This technical solution improves prediction accuracy.
Description
Technical Field
The present application relates to the field of encoding and decoding technologies, and in particular, to an encoding and decoding method, apparatus, and device.
Background
To save bandwidth, video images are encoded before transmission. A complete video encoding method may include prediction, transform, quantization, entropy coding, filtering, and other processes. Predictive coding may include intra-frame coding and inter-frame coding. Inter-frame coding exploits temporal correlation in video: pixels of adjacent encoded images are used to predict the current pixels, effectively removing temporal redundancy. Intra-frame coding exploits spatial correlation: pixels of already-encoded blocks in the current frame image are used to predict the current pixels, removing spatial redundancy.
In the related art, the current block is rectangular, while the edge of an actual object often is not. At an object edge, two different regions (such as a foreground object and the background) frequently coexist. When the motion of the two regions is inconsistent, a rectangular partition cannot separate them well; and if, to address this, the current block is divided into two non-square sub-blocks and predicted through them, problems such as poor prediction quality and poor coding performance remain.
Disclosure of Invention
In view of this, the present application provides a coding and decoding method, apparatus and device, which improve the accuracy of prediction.
The application provides a coding and decoding method, which comprises the following steps:
when it is determined that weighted prediction is enabled for a current block, obtaining a weight prediction angle of the current block;
for each pixel position of the current block, determining, according to the weight prediction angle, a peripheral matching position pointed to by the pixel position from among peripheral positions outside the current block, determining a target weight value of the pixel position according to a reference weight value associated with the peripheral matching position, and determining an associated weight value of the pixel position according to the target weight value of the pixel position;
determining a first predicted value of the pixel position according to a first prediction mode of the current block, determining a second predicted value of the pixel position according to a second prediction mode of the current block, and determining a weighted predicted value of the pixel position according to the first predicted value, the target weight value, the second predicted value, and the associated weight value; and
determining the weighted predicted value of the current block according to the weighted predicted values of the pixel positions of the current block.
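The per-pixel weighting in the steps above can be sketched as follows (a minimal illustration only: the 8-level weight scale, the rounding convention, and the function name are assumptions made for clarity, not details fixed by this application):

```python
def weighted_pixel(pred1, pred2, target_weight, total=8):
    """Blend two predicted values at one pixel position.

    target_weight is the target weight value of the first prediction;
    the associated weight is derived from it so that the two weights
    sum to `total`, matching the method steps above.
    """
    associated_weight = total - target_weight
    # Rounded integer weighted average of the two predicted values.
    return (pred1 * target_weight + pred2 * associated_weight + total // 2) // total
```

With this sketch, a pixel whose peripheral matching position carries the maximum reference weight takes its value entirely from the first prediction mode, while a weight of half the scale blends the two modes equally.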
The present application provides a coding and decoding device, the device includes:
an obtaining module, configured to obtain a weight prediction angle of the current block when it is determined that weighted prediction is enabled for the current block;
a first determining module, configured to determine, for each pixel position of the current block, a peripheral matching position pointed to by the pixel position from among peripheral positions outside the current block according to the weight prediction angle, determine a target weight value of the pixel position according to a reference weight value associated with the peripheral matching position, and determine an associated weight value of the pixel position according to the target weight value of the pixel position;
a second determining module, configured to determine a first predicted value of the pixel position according to the first prediction mode of the current block, determine a second predicted value of the pixel position according to the second prediction mode of the current block, and determine a weighted predicted value of the pixel position according to the first predicted value, the target weight value, the second predicted value, and the associated weight value; and to determine the weighted predicted value of the current block according to the weighted predicted values of the pixel positions of the current block.
The application provides a decoding side device, including: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
The processor is configured to execute machine executable instructions to perform the steps of:
when it is determined that weighted prediction is enabled for a current block, obtaining a weight prediction angle of the current block;
for each pixel position of the current block, determining, according to the weight prediction angle, a peripheral matching position pointed to by the pixel position from among peripheral positions outside the current block, determining a target weight value of the pixel position according to a reference weight value associated with the peripheral matching position, and determining an associated weight value of the pixel position according to the target weight value of the pixel position;
determining a first predicted value of the pixel position according to a first prediction mode of the current block, determining a second predicted value of the pixel position according to a second prediction mode of the current block, and determining a weighted predicted value of the pixel position according to the first predicted value, the target weight value, the second predicted value, and the associated weight value; and
determining the weighted predicted value of the current block according to the weighted predicted values of the pixel positions of the current block.
The application provides a coding end device, including: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
The processor is configured to execute machine executable instructions to perform the steps of:
when it is determined that weighted prediction is enabled for a current block, obtaining a weight prediction angle of the current block;
for each pixel position of the current block, determining, according to the weight prediction angle, a peripheral matching position pointed to by the pixel position from among peripheral positions outside the current block, determining a target weight value of the pixel position according to a reference weight value associated with the peripheral matching position, and determining an associated weight value of the pixel position according to the target weight value of the pixel position;
determining a first predicted value of the pixel position according to a first prediction mode of the current block, determining a second predicted value of the pixel position according to a second prediction mode of the current block, and determining a weighted predicted value of the pixel position according to the first predicted value, the target weight value, the second predicted value, and the associated weight value; and
determining the weighted predicted value of the current block according to the weighted predicted values of the pixel positions of the current block.
According to the above technical solutions, the embodiments of the present application provide an effective way to set weight values: a reasonable target weight value can be set for each pixel position of the current block. This improves prediction accuracy and prediction performance, brings the predicted value of the current block closer to the original pixels, and thereby improves coding performance.
Drawings
FIG. 1 is a schematic diagram of a video coding framework;
FIGS. 2A-2E are schematic diagrams of weighted prediction;
FIG. 3 is a flow chart of a method of encoding and decoding in one embodiment of the present application;
FIG. 4 is a flow chart of a coding and decoding method in another embodiment of the present application;
FIGS. 5A-5D are schematic diagrams of peripheral locations outside of a current block;
FIG. 6 is a flow chart of a coding and decoding method in another embodiment of the present application;
FIG. 7 is a diagram illustrating weight prediction angles in one embodiment of the present application;
FIGS. 8A-8H are schematic diagrams of reference weight values;
FIG. 9A is a schematic structural diagram of an encoding and decoding apparatus according to an embodiment of the present application;
fig. 9B is a hardware structure diagram of a decoding-side device according to an embodiment of the present application;
fig. 9C is a hardware configuration diagram of an encoding-side device according to an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the examples and claims of this application, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein encompasses any and all possible combinations of one or more of the associated listed items. Although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited by these terms; the terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may be referred to as first information, without departing from the scope of the embodiments of the present application. Moreover, depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
The embodiment of the application provides a coding and decoding method, a coding and decoding device and equipment thereof, which can relate to the following concepts:
Intra prediction, inter prediction, and IBC (intra block copy) prediction:
Intra-frame prediction exploits spatial correlation within a video frame: already-encoded blocks of the current frame are used for prediction, removing spatial redundancy. Intra prediction specifies a plurality of prediction modes, each corresponding to one texture direction (except the DC mode); for example, if the image texture runs horizontally, the horizontal prediction mode can predict the image information well.
Inter-frame prediction exploits temporal correlation: because a video sequence contains strong temporal correlation, pixels of the current image are predicted using pixels of adjacent encoded images, effectively removing temporal redundancy. The inter-prediction part of video coding standards adopts a block-based motion compensation technique: for each pixel block of the current image, a best matching block is found in a previously encoded image, a process called motion estimation (ME).
Intra Block Copy (IBC) allows the current block to reference data within the same frame: the reference data of the current block comes from the frame currently being coded. In the intra block copy technique, the block vector of the current block is used to obtain the predicted value of the current block. Because screen content typically contains a large number of repeated textures within one frame, obtaining the predicted value via a block vector can improve the compression efficiency of screen-content sequences.
Motion Vector (MV): in inter coding, a motion vector represents the relative displacement between the current block of the current frame image and a reference block of a reference frame image. Each partitioned block has a corresponding motion vector to be transmitted to the decoding side; encoding and transmitting each block's motion vector independently, especially for many small blocks, would consume many bits. To reduce this cost, the spatial correlation between adjacent blocks is exploited: the motion vector of the current block to be encoded is predicted from the motion vectors of adjacent encoded blocks, and only the prediction difference is encoded, effectively reducing the number of bits needed to represent motion vectors. Thus, when encoding the motion vector of the current block, the motion vector is first predicted using the motion vectors of adjacent encoded blocks (yielding the motion vector prediction, MVP), and then the difference (motion vector difference, MVD) between the MVP and the actual motion vector is encoded.
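The MVP/MVD relationship described above can be illustrated with a small sketch (the component-wise median predictor is just one common choice of MVP, used here for illustration; it is not mandated by the text):

```python
def motion_vector_difference(mv, mvp):
    """MVD written to the bitstream: actual MV minus its prediction."""
    return (mv[0] - mvp[0], mv[1] - mvp[1])

def reconstruct_mv(mvp, mvd):
    """Decoder side: MV = MVP + MVD."""
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

def median_mvp(neighbor_mvs):
    """One common predictor: component-wise median of neighboring MVs."""
    xs = sorted(mv[0] for mv in neighbor_mvs)
    ys = sorted(mv[1] for mv in neighbor_mvs)
    mid = len(neighbor_mvs) // 2
    return (xs[mid], ys[mid])
```

A small MVD is cheaper to entropy-code than a full MV, which is the bit saving the paragraph above describes.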
Motion Information: since a motion vector indicates the positional offset between the current block and some reference block, index information of the reference frame image is required, in addition to the motion vector itself, to identify which reference frame image the current block uses. In video coding technology, a reference frame image list is typically built for the current frame image, and a reference frame index indicates which image in that list the current block uses. Many coding techniques also support multiple reference image lists, so a further index value, which may be called the reference direction, indicates which reference image list is used. Motion-related information such as the motion vector, the reference frame index, and the reference direction is collectively referred to as motion information.
Block Vector (BV): the block vector is used in the intra block copy technique, which performs motion compensation with a block vector, i.e., uses the block vector to obtain the predicted value of the current block. Unlike a motion vector, a block vector represents the relative displacement between the current block and its best matching block among the already-encoded blocks of the current frame. Because a large number of repeated textures exist within the same frame, using a block vector to obtain the predicted value of the current block can significantly improve compression efficiency.
Intra prediction mode: in intra coding, an intra prediction mode is used to obtain the predicted value of the current block. For example, intra prediction modes may include, but are not limited to, the Planar mode, the DC mode, and 33 angular modes. Referring to Table 1, the Planar mode corresponds to mode 0, the DC mode corresponds to mode 1, and the 33 angular modes correspond to modes 2 to 34. The Planar mode suits areas where pixel values change slowly: it applies two linear filters, in the horizontal and vertical directions, and uses their average as the predicted value of the current block's pixels. The DC mode suits large flat areas: it takes the average of the pixels surrounding the current block as the predicted value of the current block. Beyond these 33 angular modes, more finely subdivided angular modes are adopted in newer standards; for example, the new-generation codec standard VVC uses 67 angular modes.
TABLE 1
Mode number | Intra prediction mode
0 | Planar
1 | DC
2…34 | angular2…angular34
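As one concrete illustration of the modes listed above, the DC mode's averaging can be sketched as follows (a simplified sketch: the exact neighbor sets and rounding are defined per standard, and the function name is hypothetical):

```python
def dc_prediction(top_neighbors, left_neighbors):
    """DC mode: predict every pixel of the block with the mean of the
    reconstructed neighboring pixels above and to the left."""
    samples = list(top_neighbors) + list(left_neighbors)
    # Integer mean with rounding, as typical in codec arithmetic.
    return (sum(samples) + len(samples) // 2) // len(samples)
```

Every pixel of the block receives this single value, which is why DC suits large flat areas.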
Prediction pixel (Prediction Signal): a prediction pixel is a pixel value derived from pixels that have already been coded and decoded; the residual is obtained as the difference between the original pixel and the prediction pixel, and then undergoes transform, quantization, and coefficient coding. An inter prediction pixel is a pixel value the current block derives from a reference frame; because the referenced positions may be fractional, the final prediction pixel is obtained by interpolation. The closer the prediction pixel is to the original pixel, the smaller the residual energy after subtracting the two, and the higher the coding compression performance.
Palette Mode: in palette mode, the pixel values of the current block are represented by a small set of pixel values, i.e., a palette. When the pixel value at a pixel position in the current block is close to a color in the palette, that position encodes the index of the corresponding palette color. When the pixel value is not close to any color in the palette, the position is coded as an escape pixel value, which is quantized and written directly into the bitstream. On the decoding side, the palette is obtained (e.g., storing {color A, color B, color C}); for each pixel position, it is determined whether the position carries an escape pixel value. If not, the index for the position is parsed from the bitstream and the palette color at that index is assigned to the position; otherwise, the escape pixel value itself is parsed.
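The palette decoding branch described above, index lookup versus escape value, can be sketched as follows (the `('index', i)` / `('escape', v)` tuple form is an illustrative stand-in for the real bitstream syntax, not the actual coded representation):

```python
def decode_palette_block(palette, coded_pixels):
    """Reconstruct a block coded in palette mode.

    Each coded pixel is either ('index', i), a reference into the
    palette, or ('escape', v), a directly coded (dequantized) value.
    """
    out = []
    for kind, value in coded_pixels:
        if kind == 'index':
            out.append(palette[value])   # look the color up in the palette
        else:
            out.append(value)            # escape pixel: take the value as-is
    return out
```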
Rate-Distortion Optimization principle: two major indicators evaluate coding efficiency: bit rate and Peak Signal-to-Noise Ratio (PSNR). The smaller the bitstream, the higher the compression ratio; the larger the PSNR, the better the reconstructed image quality. In mode selection, the decision formula is essentially a joint evaluation of the two. For example, the cost of a mode is J(mode) = D + λR, where D denotes distortion, usually measured by the SSE index, the sum of squared differences between the reconstructed image block and the source image; λ is the Lagrange multiplier; and R is the actual number of bits required to encode the image block in this mode, including the bits needed for mode information, motion information, the residual, and so on.
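The cost formula J(mode) = D + λR and the SSE distortion can be written out directly (a minimal sketch; the candidate-tuple layout `(mode, D, R)` is an assumption made for illustration):

```python
def sse(reconstructed, source):
    """Distortion D: sum of squared differences between blocks."""
    return sum((r - s) ** 2 for r, s in zip(reconstructed, source))

def rd_cost(distortion, bits, lagrange_multiplier):
    """Rate-distortion cost J = D + lambda * R."""
    return distortion + lagrange_multiplier * bits

def best_mode(candidates, lagrange_multiplier):
    """Pick the candidate (mode, D, R) with the smallest cost J."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lagrange_multiplier))
```

A mode that spends a few extra bits to cut distortion sharply can still win, which is the point of the joint evaluation.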
Video coding framework: referring to fig. 1, a video encoding framework may be used to implement the encoder-side processing flow of the embodiments of the present application. The video decoding framework is similar to fig. 1 and is not repeated here; it may be used to implement the decoder-side processing flow. Illustratively, the video encoding and decoding frameworks may include, but are not limited to, modules such as intra prediction, motion estimation/motion compensation, reference picture buffer, in-loop filtering, reconstruction, transform, quantization, inverse transform, inverse quantization, and an entropy encoder. At the encoding end, the encoder-side processing flow is realized through the cooperation of these modules, and likewise for the decoder-side flow at the decoding end.
In the related art, the current block is rectangular, while the edge of an actual object often is not; at an object edge, two different regions (such as a foreground object and the background) frequently coexist. When the motion of the two regions is inconsistent, a rectangular partition cannot separate them well; for this reason, the current block may be divided into two non-square sub-blocks, and weighted prediction applied to them. Weighted prediction combines multiple predicted values through a weighting operation to obtain the final predicted value, and may include: combined inter/intra weighted prediction, combined inter/inter weighted prediction, combined intra/intra weighted prediction, and so on. In weighted prediction, the same weight value may be set for all pixel positions of the current block, or different weight values may be set for different pixel positions.
Fig. 2A is a diagram illustrating inter-frame and intra-frame joint weighted prediction.
A CIIP (Combined Inter/Intra Prediction) block is obtained by weighting an intra prediction block (the intra predicted value of each pixel position, obtained with an intra prediction mode) and an inter prediction block (the inter predicted value of each pixel position, obtained with an inter prediction mode), with a weight ratio of 1:1 at every pixel position. For each pixel position, the intra predicted value and the inter predicted value at that position are weighted to obtain a joint predicted value, and the joint predicted values of all pixel positions form the CIIP prediction block.
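The 1:1 CIIP weighting described above reduces to a rounded average per pixel, which can be sketched as:

```python
def ciip_pixel(intra_pred, inter_pred):
    """Combine the intra and inter predicted values at one pixel
    position with equal 1:1 weights (a rounded average)."""
    return (intra_pred + inter_pred + 1) >> 1
```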
For example, the intra-prediction mode of the intra-prediction block may be fixed at the encoding end and the decoding end, thereby avoiding the transmission syntax indicating a specific intra-prediction mode. Or, an intra-frame prediction mode list is constructed, an encoding end encodes an index value of the selected intra-frame prediction mode into a code stream, and a decoding end selects the intra-frame prediction mode from the intra-frame prediction mode list based on the index value.
Referring to fig. 2B, a diagram of inter-frame triangular partition weighted prediction (TPM, Triangle Partition Mode) is shown.
The TPM prediction block is obtained by weighting an inter prediction block 1 (i.e., an inter prediction value of a pixel position obtained by using an inter prediction mode) and an inter prediction block 2 (i.e., an inter prediction value of a pixel position obtained by using an inter prediction mode). The TPM prediction block may be divided into two regions, one region may be an inter region 1, the other region may be an inter region 2, the two inter regions of the TPM prediction block may be distributed in a non-square shape, and the angle of the dashed boundary may be a main diagonal or a sub diagonal.
Illustratively, for each pixel position of the inter region 1, the joint prediction value is determined mainly based on the inter prediction value of inter prediction block 1: for example, when the inter prediction value of inter prediction block 1 at the pixel position is weighted with the inter prediction value of inter prediction block 2 at the pixel position, the weight value of inter prediction block 1 is larger and the weight value of inter prediction block 2 is smaller (even 0), so as to obtain the joint prediction value of the pixel position. For each pixel position of the inter region 2, the joint prediction value is determined mainly based on the inter prediction value of inter prediction block 2: for example, when the inter prediction value of inter prediction block 1 at the pixel position is weighted with the inter prediction value of inter prediction block 2 at the pixel position, the weight value of inter prediction block 2 is larger and the weight value of inter prediction block 1 is smaller (even 0), so as to obtain the joint prediction value of the pixel position. Finally, the joint prediction values of all pixel positions form the TPM prediction block.
Fig. 2C is a diagram illustrating inter-frame and intra-frame joint triangular weighted prediction. The inter-frame and intra-frame joint weighted prediction is modified so that the inter-frame region and the intra-frame region of the CIIP prediction block present the weight distribution of triangular weighted partition prediction.
The CIIP prediction block is obtained by weighting an intra-frame prediction block (i.e., an intra-frame prediction value of a pixel position is obtained by adopting an intra-frame prediction mode) and an inter-frame prediction block (i.e., an inter-frame prediction value of the pixel position is obtained by adopting an inter-frame prediction mode). The CIIP prediction block can be divided into two regions: one region can be an intra-frame region and the other an inter-frame region, and the two regions of the CIIP prediction block can be distributed in a non-square shape. The dashed boundary region can be obtained by blended weighting or by direct division, the angle of the dashed boundary can be the main diagonal or the sub diagonal, and the positions of the intra-frame region and the inter-frame region can be exchanged.
For each pixel position of the intra-frame region, the joint prediction value is determined mainly based on the intra-frame prediction value: for example, when the intra-frame prediction value of the pixel position is weighted with the inter-frame prediction value of the pixel position, the weight value of the intra-frame prediction value is larger and the weight value of the inter-frame prediction value is smaller (even 0), so as to obtain the joint prediction value of the pixel position. For each pixel position of the inter-frame region, the joint prediction value is determined mainly based on the inter-frame prediction value: for example, when the intra-frame prediction value of the pixel position is weighted with the inter-frame prediction value of the pixel position, the weight value of the inter-frame prediction value is larger and the weight value of the intra-frame prediction value is smaller (even 0), so as to obtain the joint prediction value of the pixel position. Finally, the joint prediction values of all pixel positions form the CIIP prediction block.
Referring to fig. 2D, a schematic diagram of inter block geometric partitioning (GEO) mode is shown, where the GEO mode is used to divide an inter prediction block into two sub blocks by using a partition line, and different from the TPM mode, the GEO mode may use more division directions, and a weighted prediction process of the GEO mode is similar to that of the TPM mode.
The GEO prediction block is obtained by weighting an inter prediction block 1 (i.e., an inter prediction value of a pixel position obtained by using an inter prediction mode) and an inter prediction block 2 (i.e., an inter prediction value of a pixel position obtained by using an inter prediction mode). The GEO prediction block may be divided into two regions, one of which may be an inter region 1 and the other of which may be an inter region 2.
Illustratively, for each pixel position of the inter region 1, the joint prediction value is determined mainly based on the inter prediction value of inter prediction block 1: for example, when the inter prediction value of inter prediction block 1 at the pixel position is weighted with the inter prediction value of inter prediction block 2 at the pixel position, the weight value of inter prediction block 1 is larger and the weight value of inter prediction block 2 is smaller (even 0), so as to obtain the joint prediction value of the pixel position. For each pixel position of the inter region 2, the joint prediction value is determined mainly based on the inter prediction value of inter prediction block 2: for example, when the inter prediction value of inter prediction block 1 at the pixel position is weighted with the inter prediction value of inter prediction block 2 at the pixel position, the weight value of inter prediction block 2 is larger and the weight value of inter prediction block 1 is smaller (even 0), so as to obtain the joint prediction value of the pixel position. Finally, the joint prediction values of all pixel positions form the GEO prediction block.
Illustratively, the weight value setting of the GEO prediction block is related to the distance between the pixel position and the dividing line. Referring to fig. 2E, pixel position A, pixel position B and pixel position C are located at the lower right side of the dividing line, and pixel position D, pixel position E and pixel position F are located at the upper left side of the dividing line. For pixel positions A, B and C, the weight value ordering of inter region 2 is B ≧ A ≧ C, and the weight value ordering of inter region 1 is C ≧ A ≧ B. For pixel positions D, E and F, the weight value ordering of inter region 1 is D ≧ F ≧ E, and the weight value ordering of inter region 2 is E ≧ F ≧ D. In this manner, the distance between the pixel position and the dividing line needs to be calculated first, and then the weight value of the pixel position is determined.
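The distance-dependent weighting above can be sketched as follows; the linear ramp and the clamping range [0, 8] are illustrative assumptions, not the weight table of the present application:

```python
# Illustrative sketch: weight from the signed distance of a pixel to the
# dividing line a*x + b*y + c = 0 (ramp and range are assumptions).
def distance_weight(x, y, a, b, c, max_w=8):
    """Pixels far on one side of the line get max_w, pixels far on the
    other side get 0, with a linear ramp near the line."""
    d = (a * x + b * y + c) / (a * a + b * b) ** 0.5  # signed distance
    return min(max(round(d + max_w / 2), 0), max_w)   # clamp to [0, max_w]
```

Pixels far on one side of the dividing line receive the maximum weight, pixels far on the other side receive 0, and pixels near the line receive intermediate weights, consistent with the orderings described for pixel positions A through F.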
In each of the above cases, in order to implement weighted prediction, the weight value of each pixel position of the current block needs to be determined, and weighted prediction is performed on each pixel position based on its weight value. However, in the related art, there is no effective way to set the weight values, and reasonable weight values cannot be set, resulting in problems such as poor prediction effect and poor coding performance.
In view of the above, the embodiment of the present application provides a weight value derivation method, which determines the target weight value of each pixel position of the current block according to the reference weight values of peripheral positions outside the current block. A reasonable target weight value can be set for each pixel position, so that the prediction value is closer to the original pixels, thereby improving prediction accuracy, prediction performance, and coding performance.
The following describes the encoding and decoding method in the embodiments of the present application in detail with reference to several specific embodiments.
Example 1: referring to fig. 3, a schematic flow chart of a coding and decoding method provided in this embodiment of the present application is shown, where the coding and decoding method may be applied to a decoding end or an encoding end, and the coding and decoding method may include the following steps:
Illustratively, the decoding side or the encoding side needs to determine whether to initiate weighted prediction for the current block. If the weighted prediction is started, the coding and decoding method of the embodiment of the application is adopted, namely the weighted prediction angle of the current block is obtained. If weighted prediction is not started, the implementation manner is not limited in the embodiment of the present application.
Illustratively, when determining to start weighted prediction on a current block, a weighted prediction angle of the current block needs to be obtained, where the weighted prediction angle represents an angular direction pointed by a pixel position inside the current block. For example, based on some weight prediction angle, the angular direction to which the pixel position inside the current block points is directed, which points to some outer peripheral position of the current block.
For example, since the weighted prediction angle represents an angular direction pointed to by a pixel position inside the current block, for each pixel position of the current block, the angular direction pointed to by the pixel position is determined based on the weighted prediction angle, and then a peripheral matching position pointed to by the pixel position is determined from peripheral positions outside the current block according to the angular direction.
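As a hypothetical illustration of this lookup, assume the weight prediction angle has a tangent of the form 1/2^shift and the peripheral positions lie in the row above the current block; the matching position can then be found with a shift and an add. Real codecs use a fixed table of such angles, and the function name below is not from the present application:

```python
# Hypothetical sketch: peripheral matching position for pixel (x, y),
# assuming an angle whose tangent is 1/2**angle_shift and peripheral
# positions in the row above the block.
def peripheral_matching_position(x, y, angle_shift):
    """Index of the peripheral position that pixel (x, y) points to
    along the assumed angular direction."""
    return x + (y >> angle_shift)
```

All pixel positions lying along the same angular direction map to the same peripheral position, which is what lets one list of reference weights at the block boundary drive the weights of every pixel inside the block.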
For each pixel position of the current block, after determining the peripheral matching position pointed by the pixel position, the reference weight value associated with the peripheral matching position may be determined, and the reference weight value associated with the peripheral matching position may be pre-configured or determined by using a certain policy, which is not limited to this, as long as the peripheral matching position has an associated reference weight value.
Then, the target weight value of the pixel position is determined according to the reference weight value associated with the peripheral matching position, for example, the reference weight value associated with the peripheral matching position may be determined as the target weight value of the pixel position.
For example, after obtaining the target weight value of the pixel position, the associated weight value of the pixel position may be determined according to the target weight value of the pixel position. For example, the sum of the target weight value and the associated weight value of each pixel position is a fixed preset value, and therefore, the associated weight value may be the difference between the preset value and the target weight value. Assuming that the preset value is 8 and the target weight value of the pixel position is 0, the associated weight value of the pixel position is 8; if the target weight value of a pixel position is 1, the associated weight value of the pixel position is 7, and so on, as long as the sum of the target weight value and the associated weight value is 8.
For example, for each pixel position of the current block, a first prediction value of the pixel position may be determined according to a first prediction mode, and a second prediction value of the pixel position may be determined according to a second prediction mode, which is not limited to this determination mode.
Assuming that the target weight value is the weight value corresponding to the first prediction mode and the associated weight value is the weight value corresponding to the second prediction mode, the weighted prediction value of the pixel position may be: (the first predicted value of the pixel position × the target weight value of the pixel position + the second predicted value of the pixel position × the associated weight value of the pixel position)/the fixed preset value. Alternatively, if the target weight value is the weight value corresponding to the second prediction mode and the associated weight value is the weight value corresponding to the first prediction mode, the weighted prediction value of the pixel position may be: (the second predicted value of the pixel position × the target weight value of the pixel position + the first predicted value of the pixel position × the associated weight value of the pixel position)/the fixed preset value. For convenience of description, in the following embodiments, the target weight value is the weight value corresponding to the first prediction mode, and the associated weight value is the weight value corresponding to the second prediction mode.
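The formula above can be sketched as follows, using the fixed preset value 8 from the earlier example; the integer division stands in for the normalization, and the rounding detail is an assumption:

```python
# Sketch of the per-pixel blend; PRESET = 8 as in the earlier example.
PRESET = 8  # fixed preset value: target weight + associated weight

def weighted_pixel(pred1, pred2, target_weight):
    """Blend the two prediction values of one pixel position. The
    associated weight is the preset value minus the target weight, so
    the pair always sums to PRESET."""
    associated_weight = PRESET - target_weight
    return (pred1 * target_weight + pred2 * associated_weight) // PRESET
```

With a target weight of 8 the result equals the first prediction value; with 0 it equals the second; intermediate weights blend the two.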
And step 304, determining the weighted prediction value of the current block according to the weighted prediction value of each pixel position of the current block. For example, the weighted prediction value of each pixel position is formed into the weighted prediction value of the current block, and accordingly, the weighted prediction value of the current block is obtained.
According to the above technical scheme, the embodiment of the present application provides an effective way to set weight values: a reasonable target weight value can be set for each pixel position of the current block, so that the prediction value of the current block is closer to the original pixels, thereby improving prediction accuracy, prediction performance, and coding performance.
Example 2: referring to fig. 4, a flow chart of the encoding and decoding method is schematically shown, which can be applied to an encoding end, and includes:
In one possible embodiment, it may be determined whether the feature information of the current block satisfies a certain condition. If so, it may be determined to initiate weighted prediction for the current block; if not, it may be determined that weighted prediction is not to be initiated for the current block.
The characteristic information includes but is not limited to one or any combination of the following: the frame type of the current frame where the current block is located, the size information of the current block, and the switch control information. The switch control information may include, but is not limited to: SPS (sequence level) switching control information, or PPS (picture parameter level) switching control information, or TILE (slice level) switching control information.
For example, if the feature information is the frame type of the current frame where the current block is located, the frame type of the current frame where the current block is located satisfies a specific condition, which may include but is not limited to: and if the frame type of the current frame where the current block is located is a B frame, determining that the frame type meets a specific condition. Or if the frame type of the current frame where the current block is located is an I frame, determining that the frame type meets a specific condition.
For example, if the feature information is size information of the current block, such as a width and a height of the current block, the size information of the current block satisfies a specific condition, which may include but is not limited to: and if the width is greater than or equal to the first numerical value and the height is greater than or equal to the second numerical value, determining that the size information of the current block meets a specific condition. Or, if the width is greater than or equal to the third value, the height is greater than or equal to the fourth value, the width is less than or equal to the fifth value, and the height is less than or equal to the sixth value, it is determined that the size information of the current block satisfies the specific condition. Or, if the product of the width and the height is greater than or equal to a seventh value, determining that the size information of the current block satisfies a specific condition. The above values can be empirically configured, such as 8, 16, 32, 64, 128, etc., without limitation. For example, the first value is 8, the second value is 8, the third value is 8, the fourth value is 8, the fifth value is 64, the sixth value is 64, and the seventh value is 64. Of course, the above is merely an example, and no limitation is made thereto. In summary, if the width is greater than or equal to 8 and the height is greater than or equal to 8, it is determined that the size information of the current block satisfies the specific condition. Or, if the width is greater than or equal to 8, the height is greater than or equal to 8, the width is less than or equal to 64, and the height is less than or equal to 64, determining that the size information of the current block satisfies the specific condition. Or, if the product of the width and the height is greater than or equal to 64, it is determined that the size information of the current block satisfies a certain condition.
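The size condition with the example values (first through sixth values of 8, 8, 8, 8, 64, 64) can be sketched as a simple check; the function name and defaults are illustrative:

```python
# Sketch of the example size condition (8 <= side <= 64 on both sides).
def size_satisfies(width, height, min_side=8, max_side=64):
    """The current block's size information satisfies the specific
    condition only if both sides are at least min_side and at most
    max_side."""
    return min_side <= width <= max_side and min_side <= height <= max_side
```

The product-based variant from the text (width × height ≥ 64) could be checked the same way; which variant applies is a design choice of the codec.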
For example, if the feature information is size information of the current block, such as a width and a height of the current block, the size information of the current block satisfies a specific condition, which may include but is not limited to: the width is not less than a and not more than b, and the height is not less than a and not more than b. a may be less than or equal to 16 and b may be greater than or equal to 16. For example, a equals 8, b equals 64, or b equals 32.
For example, if the characteristic information is switch control information, the switch control information satisfies a specific condition, which may include but is not limited to: and if the switch control information allows the current block to start the weighted prediction, determining that the switch control information meets a specific condition.
For example, if the feature information is a frame type of a current frame where the current block is located and size information of the current block, the frame type satisfies a specific condition, and when the size information satisfies the specific condition, it may be determined that the feature information of the current block satisfies the specific condition. If the feature information is the frame type of the current frame where the current block is located and the switch control information, the frame type meets a specific condition, and when the switch control information meets the specific condition, it can be determined that the feature information of the current block meets the specific condition. If the feature information is the size information and the switch control information of the current block, the size information satisfies a specific condition, and when the switch control information satisfies the specific condition, it may be determined that the feature information of the current block satisfies the specific condition. If the feature information is the frame type of the current frame where the current block is located, the size information of the current block, and the switch control information, the frame type meets the specific condition, the size information meets the specific condition, and when the switch control information meets the specific condition, it can be determined that the feature information of the current block meets the specific condition.
In a possible implementation manner, when determining to start weighted prediction on the current block, the encoding end acquires the weight prediction angle and the weight prediction position of the current block. The weight prediction angle represents the angular direction to which a pixel position inside the current block points. Referring to fig. 5A, based on a certain weight prediction angle, the angular direction pointed to by pixel positions inside the current block (e.g., pixel position 1, pixel position 2, and pixel position 3) is shown; it points to a certain peripheral position outside the current block. Referring to fig. 5B, based on another weight prediction angle, the angular direction pointed to by pixel positions inside the current block (e.g., pixel position 2, pixel position 3, and pixel position 4) is shown; it points to another peripheral position outside the current block.
The weight prediction position (which may also be referred to as a distance parameter) is used to indicate which peripheral position outside the current block is a target peripheral position of the current block. For example, the range of the peripheral position outside the current block may be determined according to the weighted prediction angle, as shown in fig. 5A and 5B. Then, the range of the peripheral position is divided into N equal parts, and the value of N can be arbitrarily set, such as 4, 6, 8, etc., and 8 is taken as an example for explanation. The weight prediction position is used to indicate which peripheral position is a target peripheral position of the current block.
Referring to fig. 5C, after all the peripheral positions are divided into 8 equal parts, 7 weight prediction positions can be obtained. When the weight prediction position is 0, it indicates the peripheral position a0 (i.e., the peripheral position pointed to by the dashed line 0; in practical applications, the dashed line 0 does not exist and is only an example given for ease of understanding; dashed line 0 to dashed line 6 divide all the peripheral positions into 8 equal parts) as the target peripheral position of the current block. When the weight prediction position is 1, the peripheral position a1 is the target peripheral position of the current block. By analogy, when the weight prediction position is 6, the peripheral position a6 is the target peripheral position of the current block.
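The mapping from a weight prediction position to its target peripheral position a0 to a6 can be sketched as follows, assuming the peripheral range has a known length and is divided into 8 equal parts (the indexing convention is an assumption for illustration):

```python
# Sketch: map weight prediction position k (0..parts-2) to the k-th
# interior division point of a peripheral range split into equal parts.
def target_peripheral_position(weight_position, range_length, parts=8):
    """Peripheral index of the division point selected by the weight
    prediction position, after dividing the range into `parts` equal
    parts (illustrative indexing)."""
    return (weight_position + 1) * range_length // parts
```

For a range of length 32 divided into 8 equal parts, positions 0 through 6 map to peripheral indices 4, 8, ..., 28, i.e., the seven interior division points a0 to a6.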
For example, the value of N may be different for different weight prediction angles. For weight prediction angle A, the value of N may be 6, indicating that the range of peripheral positions determined based on weight prediction angle A is divided into 6 equal parts; for weight prediction angle B, the value of N may be 8, indicating that the range of peripheral positions determined based on weight prediction angle B is divided into 8 equal parts.
For example, even when the value of N is the same for different weight prediction angles, the number of supported weight prediction positions may differ. With N being 8 for both weight prediction angle A and weight prediction angle B, the range of peripheral positions determined based on each angle is divided into 8 equal parts; however, the weight prediction positions corresponding to weight prediction angle A may be selected from the 5 positions a1 to a5, while the weight prediction positions corresponding to weight prediction angle B may be selected from the 7 positions a0 to a6.
For example, the range of the peripheral positions is described above as being divided into N equal parts; in practical applications, the range of the peripheral positions may instead be divided into N parts in an uneven manner, which is not limited herein.
For example, after all the peripheral positions are divided into 8 equal parts, 7 weight prediction positions may be obtained. In step 401, the encoding end may obtain one weight prediction position from the 7 weight prediction positions, or may select some of them (e.g., 5 weight prediction positions) and obtain one weight prediction position from those 5.
Illustratively, the encoding end acquires the weight prediction angle and the weight prediction position of the current block by adopting the following modes:
in the first mode, the encoding end and the decoding end agree on the same weight prediction angle as the weight prediction angle of the current block, and agree on the same weight prediction position as the weight prediction position of the current block. For example, the encoding side and the decoding side use the weighted prediction angle a as the weighted prediction angle of the current block, and the encoding side and the decoding side use the weighted prediction position 4 as the weighted prediction position of the current block.
In the second mode, the encoding end constructs a weight prediction angle list, which includes at least one weight prediction angle, such as weight prediction angle A and weight prediction angle B. The encoding end also constructs a weight prediction position list, which includes at least one weight prediction position, such as weight prediction position 0 to weight prediction position 6. The encoding end then sequentially traverses each weight prediction angle in the weight prediction angle list and each weight prediction position in the weight prediction position list, that is, it traverses every combination of weight prediction angle and weight prediction position. Each combination serves as the weight prediction angle and weight prediction position of the current block acquired in step 401, and steps 402-407 are performed based on it to obtain the weighted prediction value of the current block.
For example, when the encoding end goes through the weighted prediction angle a and the weighted prediction position 0, the steps 402-407 are executed based on the weighted prediction angle a and the weighted prediction position 0 to obtain the weighted prediction value of the current block. And when the coding end traverses the weight prediction angle A and the weight prediction position 1, executing the steps 402-407 based on the weight prediction angle A and the weight prediction position 1 to obtain the weighted prediction value of the current block. And when the coding end traverses the weight prediction angle B and the weight prediction position 0, executing the steps 402-407 based on the weight prediction angle B and the weight prediction position 0 to obtain the weighted prediction value of the current block, and so on. The encoding side can obtain the weighted prediction value of the current block based on each combination (the combination of the weighted prediction angle and the weighted prediction position).
After the encoding end obtains the weighted prediction value of the current block based on weight prediction angle A and weight prediction position 0, it determines a rate-distortion cost value according to the weighted prediction value (the determination mode is not limited). After obtaining the weighted prediction value of the current block based on weight prediction angle A and weight prediction position 1, it likewise determines a rate-distortion cost value. By analogy, the encoding end obtains the rate-distortion cost value of each combination and selects the minimum rate-distortion cost value from all the rate-distortion cost values.
Then, the encoding end takes the weight prediction angle and the weight prediction position corresponding to the minimum rate distortion cost value as the target weight prediction angle of the current block and the target weight prediction position of the current block respectively, and finally encodes the index value of the target weight prediction angle in the weight prediction angle list and the index value of the target weight prediction position in the weight prediction position list into the code stream.
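The search in the second mode (traverse every combination, evaluate its rate-distortion cost, keep the minimum) can be sketched as follows, with `rd_cost` standing in for steps 402-407 plus the cost computation; its form is not specified by the present application:

```python
# Sketch of the encoder-side exhaustive search over (angle, position)
# combinations; rd_cost is a caller-supplied stand-in for steps 402-407
# plus the rate-distortion cost computation.
def choose_angle_and_position(angles, positions, rd_cost):
    """Return the (angle, position) pair with the minimum RD cost."""
    best = None
    best_cost = float("inf")
    for angle in angles:
        for position in positions:
            cost = rd_cost(angle, position)
            if cost < best_cost:
                best_cost, best = cost, (angle, position)
    return best
```

The returned pair corresponds to the target weight prediction angle and target weight prediction position whose list index values are then encoded into the code stream.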
Of course, the above manner is only an example, and is not limited as long as the weighted prediction angle and the weighted prediction position of the current block can be obtained, for example, one weighted prediction angle is randomly selected from the weighted prediction angle list as the weighted prediction angle of the current block, and one weighted prediction position is randomly selected from the weighted prediction position list as the weighted prediction position of the current block.
For example, since the weighted prediction angle represents an angular direction pointed to by a pixel position inside the current block, for each pixel position of the current block, the angular direction pointed to by the pixel position is determined based on the weighted prediction angle, and then a peripheral matching position pointed to by the pixel position is determined from peripheral positions outside the current block according to the angular direction.
For example, the peripheral positions outside the current block may include: peripheral positions in a row above the current block, e.g., the n1-th row above the current block, where n1 may be 1, 2, 3, etc., without limitation; or peripheral positions in a column to the left of the current block, e.g., the n2-th column to the left, where n2 may be 1, 2, 3, etc.; or peripheral positions in a row below the current block, e.g., the n3-th row below, where n3 may be 1, 2, 3, etc.; or peripheral positions in a column to the right of the current block, e.g., the n4-th column to the right, where n4 may be 1, 2, 3, etc. Of course, the above are only a few examples, and the peripheral positions are not limited thereto; for example, the peripheral positions may also be located inside the current block, such as the n5-th row inside the current block (n5 may be 1, 2, 3, etc.) or the n6-th column inside the current block (n6 may be 1, 2, 3, etc.).
For convenience of description, in the following embodiments, the peripheral positions in the 1st row above the current block or the 1st column to the left of the current block are taken as examples; the implementation manner is similar for other peripheral positions.
For example, for a range of peripheral positions outside the current block, a range of peripheral positions outside the current block may be specified in advance; alternatively, the range of the peripheral position outside the current block may be determined according to the weighted prediction angle, for example, the peripheral position pointed to by each pixel position inside the current block is determined according to the weighted prediction angle, and the boundary of the peripheral positions pointed to by all the pixel positions may be the range of the peripheral position outside the current block, and the range of the peripheral position is not limited.
For example, the peripheral positions outside the current block may include integer pixel positions; alternatively, they may include non-integer pixel positions, where a non-integer pixel position may be a sub-pixel position, such as a 1/2 sub-pixel position, a 1/4 sub-pixel position, or a 3/4 sub-pixel position, without limitation; alternatively, the peripheral positions may include both integer pixel positions and sub-pixel positions.
Illustratively, two peripheral positions outside the current block may correspond to one integer pixel position; or, four peripheral positions outside the current block may correspond to one integer pixel position; or, one peripheral position outside the current block may correspond to one integer pixel position; or, one peripheral position outside the current block may correspond to two integer pixel positions. Of course, the above are only examples, and the relationship between peripheral positions and integer pixel positions may be configured arbitrarily without limitation.
As shown in fig. 5A and 5B, one peripheral position corresponds to one integer pixel position; as shown in fig. 5D, two peripheral positions correspond to one integer pixel position. Other cases are not described in detail in this embodiment.
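The correspondence between peripheral positions and integer pixel positions can be sketched as follows. This is an illustrative sketch only (the function name and the ratio parameter are assumptions, not taken from the patent text); with a ratio of 2, two peripheral positions share one integer pixel, so odd peripheral indices land on half-pel (sub-pixel) coordinates, matching the fig. 5D case.

```python
def peripheral_to_pixel(peripheral_index: int, ratio: int) -> float:
    """Return the pixel coordinate covered by a peripheral position.

    ratio = 1: one peripheral position per integer pixel (fig. 5A/5B case).
    ratio = 2: two peripheral positions per integer pixel (fig. 5D case),
               so odd indices fall on half-pel (sub-pixel) coordinates.
    """
    return peripheral_index / ratio
```

For instance, with ratio 2, peripheral index 3 maps to coordinate 1.5, a sub-pixel position, while index 4 maps to the integer pixel at coordinate 2.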
In step 403, the encoding end determines a target weight value of the pixel position according to the reference weight value associated with the peripheral matching position.
For example, for each pixel position of the current block, after determining the peripheral matching position pointed by the pixel position, the encoding end may determine a reference weight value associated with the peripheral matching position, where the reference weight value associated with the peripheral matching position may be pre-configured or determined by using a certain policy, and the specific determination manner may be referred to in the following embodiments.
Then, the encoding end determines the target weight value of the pixel position according to the reference weight value associated with the peripheral matching position, for example, the reference weight value associated with the peripheral matching position may be determined as the target weight value of the pixel position.
In a possible implementation, determining the target weight value of the pixel position according to the reference weight value associated with the peripheral matching position may include the following cases. Case one: if the peripheral matching position is an integer pixel position and a reference weight value has been set for that integer pixel position, the target weight value of the pixel position is determined according to the reference weight value of the integer pixel position. Case two: if the peripheral matching position is an integer pixel position but no reference weight value has been set for it, the target weight value of the pixel position is determined according to the reference weight values of the positions adjacent to the integer pixel position; for example, a rounding-up operation may be performed on the reference weight values of the adjacent positions to obtain the target weight value of the pixel position; or, a rounding-down operation may be performed on the reference weight values of the adjacent positions to obtain the target weight value of the pixel position; or, the target weight value of the pixel position may be determined by interpolating the reference weight values of the positions adjacent to the integer pixel position, which is not limited herein. Case three: if the peripheral matching position is a sub-pixel position and a reference weight value has been set for that sub-pixel position, the target weight value of the pixel position is determined according to the reference weight value of the sub-pixel position.
Case four: if the peripheral matching position is a sub-pixel position but no reference weight value has been set for it, the target weight value of the pixel position is determined according to the reference weight values of the positions adjacent to the sub-pixel position; for example, a rounding-up operation may be performed on the reference weight values of the adjacent positions to obtain the target weight value of the pixel position; or, a rounding-down operation may be performed on the reference weight values of the adjacent positions to obtain the target weight value of the pixel position; or, the target weight value of the pixel position may be determined by interpolating the reference weight values of the positions adjacent to the sub-pixel position, which is not limited herein.
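The three options described above for a matching position with no stored reference weight can be sketched as follows. This is a hedged illustration: the names `w_left`, `w_right`, and `frac` (the fractional offset of the matching position between its two stored neighbors) are assumptions introduced here, not terminology from the patent.

```python
import math

def target_weight(w_left: int, w_right: int, frac: float, mode: str) -> int:
    """Derive a target weight from the reference weights of the two
    adjacent positions that do have stored weights."""
    interp = w_left + frac * (w_right - w_left)  # linear interpolation
    if mode == "floor":
        return math.floor(interp)   # the rounding-down option
    if mode == "ceil":
        return math.ceil(interp)    # the rounding-up option
    return round(interp)            # the interpolation option, rounded
```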
For example, a reference weight value may be set for a peripheral position outside the current block, which is referred to in the following embodiments, and the peripheral position outside the current block may be an integer pixel position or a sub-pixel position, for example, the reference weight value may be set for the integer pixel position outside the current block, and/or the reference weight value may be set for the sub-pixel position outside the current block.
If a reference weight value is set for integer pixel positions outside the current block, then the following may be the case:
In the first case, for each pixel position of the current block, the peripheral matching position pointed to by the pixel position is determined according to the weighted prediction angle; if the peripheral matching position is an integer pixel position, a reference weight value has already been set for it, and therefore the reference weight value of the peripheral matching position can be determined as the target weight value of the pixel position.
In the second case, for each pixel position of the current block, the peripheral matching position pointed to by the pixel position is determined according to the weighted prediction angle; if the peripheral matching position is a sub-pixel position, no reference weight value has been set for it, and therefore the target weight value of the pixel position is determined according to the reference weight values of the positions adjacent to the peripheral matching position.
If reference weight values are set for sub-pixel locations outside the current block, then the following may be the case:
In the first case, for each pixel position of the current block, the peripheral matching position pointed to by the pixel position is determined according to the weighted prediction angle; if the peripheral matching position is a sub-pixel position, a reference weight value has already been set for it, and therefore the reference weight value of the peripheral matching position can be determined as the target weight value of the pixel position.
In the second case, for each pixel position of the current block, the peripheral matching position pointed to by the pixel position is determined according to the weighted prediction angle; if the peripheral matching position is an integer pixel position, no reference weight value has been set for it, and therefore the target weight value of the pixel position is determined according to the reference weight values of the positions adjacent to the peripheral matching position.
In step 404, the encoding end determines the associated weight value of the pixel position according to the target weight value of the pixel position.
For example, the sum of the target weight value and the associated weight value of each pixel position is a fixed preset value, that is, the associated weight value may be a difference between the preset value and the target weight value. Assuming that the preset value is 8 and the target weight value of a pixel position is 2, the associated weight value of the pixel position is 6, and so on, as long as the sum of the target weight value and the associated weight value is 8.
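The relationship above can be sketched in one line; the constant name `PRESET_SUM` is an illustrative choice for the fixed preset value of 8 used in the example.

```python
PRESET_SUM = 8  # the fixed preset value from the example above

def associated_weight(target: int) -> int:
    # The target weight and associated weight of a pixel position
    # always sum to the fixed preset value.
    return PRESET_SUM - target
```

With a target weight of 2, this yields an associated weight of 6, matching the example in the text.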
For example, the first prediction mode may be any one of an intra block copy prediction mode, an intra prediction mode, an inter prediction mode, and a palette mode; the second prediction mode may be any one of an intra block copy prediction mode, an intra prediction mode, an inter prediction mode, and a palette mode. For example, the first prediction mode may be an intra block copy prediction mode, and the second prediction mode may be an intra block copy prediction mode; alternatively, the first prediction mode may be an intra block copy prediction mode and the second prediction mode may be an intra prediction mode; alternatively, the first prediction mode may be an intra block copy prediction mode and the second prediction mode may be an inter prediction mode; alternatively, the first prediction mode may be an intra block copy prediction mode and the second prediction mode may be a palette mode; and in the same way, the first prediction mode and the second prediction mode are not limited.
For the process of determining the prediction value from the first prediction mode and the second prediction mode, see the subsequent embodiments.
In step 406, the encoding end determines the weighted predicted value of the pixel position according to the first predicted value of the pixel position, the target weight value of the pixel position, the second predicted value of the pixel position and the associated weight value of the pixel position.
For example, the weighted prediction value of the pixel position may be: (the first predicted value of the pixel position × the target weight value of the pixel position + the second predicted value of the pixel position × the associated weight value of the pixel position)/the fixed preset value.
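The blend above can be sketched as follows. This is an illustrative sketch, not the normative formula: the rounding offset before integer division is a common codec convention assumed here, and the constant and function names are not from the patent text.

```python
PRESET_SUM = 8  # fixed preset value; the two weights sum to this

def weighted_pred(p1: int, w1: int, p2: int) -> int:
    """Blend the first predicted value p1 (target weight w1) with the
    second predicted value p2 (associated weight PRESET_SUM - w1)."""
    w2 = PRESET_SUM - w1
    # Rounded integer division; the +PRESET_SUM//2 offset is an assumption.
    return (p1 * w1 + p2 * w2 + PRESET_SUM // 2) // PRESET_SUM
```

A weight of 8 selects the first prediction entirely, a weight of 0 selects the second, and a weight of 4 gives an even average.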
According to the above technical solution, an effective way of setting weight values is provided in the embodiments of the present application: a reasonable target weight value can be set for each pixel position of the current block, so that prediction accuracy and prediction performance are improved, the predicted value of the current block is closer to the original pixels, and the coding performance is improved.
Example 3: referring to fig. 6, a flow chart of the encoding and decoding method is schematically shown, which can be applied to a decoding end, and includes:
In a possible implementation manner, the encoding end judges whether the characteristic information of the current block meets a specific condition, and if so, determines to start weighted prediction on the current block; if not, it is determined not to initiate weighted prediction for the current block. The decoding end also judges whether the characteristic information of the current block meets a specific condition. If yes, determining to start weighted prediction on the current block; if not, it is determined not to initiate weighted prediction for the current block. In this way, both the encoding end and the decoding end can determine whether to start weighted prediction on the current block based on the characteristic information of the current block, that is, directly obtain the determination result whether to start weighted prediction.
Illustratively, the characteristic information includes, but is not limited to, one or any combination of the following: the frame type of the current frame where the current block is located, the size information of the current block, and the switch control information. The switch control information includes, but is not limited to: SPS switch control information, or PPS switch control information, or TILE switch control information. As to how to determine whether the current block starts weighted prediction based on the feature information, see step 401, except that the execution main body becomes the decoding end, which is not repeated herein.
In another possible implementation, the encoding side determines whether the current block supports weighted prediction according to the feature information of the current block, the determining method refers to step 401, and when the current block supports weighted prediction, it may also determine whether to start weighted prediction on the current block in other manners, such as determining whether to start weighted prediction on the current block by using a rate distortion principle, which is not limited herein.
After determining whether to initiate weighted prediction on the current block, when the encoding end transmits the encoded bit stream of the current block, the encoded bit stream may include syntax indicating whether to initiate weighted prediction, where the syntax indicates whether to initiate weighted prediction on the current block.
The decoding end determines whether the current block supports weighted prediction according to the characteristic information of the current block, the determination mode refers to step 401, and when the current block supports weighted prediction, the decoding end can also analyze syntax whether to start weighted prediction from the coded bit stream and determine whether to start weighted prediction on the current block according to the syntax.
For example, the syntax is used to indicate whether the current block starts weighted prediction, and its syntax element is coded or decoded using context-based adaptive binary arithmetic coding. In one scheme, the coding of the syntax element uses only one context model, whereas the related scheme uses a plurality of context models for coding or decoding (e.g., selected by determining whether the top block/left block of the current block starts the same prediction mode as the current block, whether the size of the current block exceeds a threshold, etc.). In another scheme, the coding of the syntax element uses at most two context models, selected only by determining whether the size of the current block exceeds a certain threshold, whereas the related scheme uses multiple context models (including determining whether the top/left blocks of the current block start weighted prediction, and whether the size of the current block exceeds a threshold) for coding or decoding.
In a possible implementation manner, when determining to start weighted prediction on the current block, the decoding end obtains a weighted prediction angle and a weighted prediction position of the current block, and the relevant explanation of the weighted prediction angle and the weighted prediction position is referred to step 401, and is not repeated herein. The decoding end can acquire the weight prediction angle and the weight prediction position of the current block by adopting the following modes:
In the first mode, the decoding end and the encoding end agree on the same weighted prediction angle as the weighted prediction angle of the current block, and agree on the same weighted prediction position as the weighted prediction position of the current block. For example, both the decoding end and the encoding end use weighted prediction angle A as the weighted prediction angle of the current block, and both use weighted prediction position 4 as the weighted prediction position of the current block.
In the second mode, the decoding end constructs a weighted prediction angle list that is the same as the weighted prediction angle list of the encoding end and includes at least one weighted prediction angle, such as weighted prediction angle A and weighted prediction angle B. The decoding end also constructs a weighted prediction position list that is the same as the weighted prediction position list of the encoding end and includes at least one weighted prediction position, such as weighted prediction position 0 to weighted prediction position 6. After receiving the coded bitstream of the current block, the decoding end parses indication information from the coded bitstream, selects one weighted prediction angle from the weighted prediction angle list as the weighted prediction angle of the current block according to the indication information, and selects one weighted prediction position from the weighted prediction position list as the weighted prediction position of the current block according to the indication information.
The following describes an implementation process of the second method with reference to several specific application scenarios.
Application scenario 1: when the encoding end transmits the encoded bitstream to the decoding end, the encoded bitstream may include indication information 1, where the indication information 1 is used to indicate a weighted prediction angle (i.e., a target weighted prediction angle) of the current block and a weighted prediction position (i.e., a target weighted prediction position) of the current block. For example, when the indication information 1 is 0, the indication information is used to indicate a first weight prediction angle in the weight prediction angle list and indicate a first weight prediction position in the weight prediction position list, when the indication information 1 is 1, the indication information is used to indicate a first weight prediction angle in the weight prediction angle list and indicate a second weight prediction position in the weight prediction position list, and so on, as for the value of the indication information 1, which weight prediction angle and which weight prediction position are indicated, as long as the encoding side and the decoding side make an agreement.
After receiving the coded bit stream, the decoding side parses the indication information 1 from the coded bit stream, and based on the indication information 1, the decoding side can select a weight prediction angle corresponding to the indication information 1 from the weight prediction angle list, where the weight prediction angle is taken as the weight prediction angle of the current block, and the following process will be described with a weight prediction angle a as an example. Based on the indication information 1, the decoding side can select a weighted prediction position corresponding to the indication information 1 from a weighted prediction position list, the weighted prediction position is used as the weighted prediction position of the current block, and the following process takes the weighted prediction position 4 as an example.
Application scenario 2: the encoding end may include indication information 2 and indication information 3 when transmitting the encoded bitstream to the decoding end. The indication information 2 is used to indicate a target weighted prediction angle of the current block, such as an index value 1 of the target weighted prediction angle in the weighted prediction angle list, where the index value 1 indicates that the target weighted prediction angle is the several weighted prediction angles in the weighted prediction angle list. The indication information 3 is used to indicate the target weight prediction position of the current block, such as the index value 2 of the target weight prediction position in the weight prediction position list, and the index value 2 indicates that the target weight prediction position is the several weight prediction positions in the weight prediction position list. The decoding end receives the coded bit stream, analyzes the indication information 2 and the indication information 3 from the coded bit stream, and selects a weight prediction angle corresponding to the index value 1 from the weight prediction angle list based on the indication information 2, wherein the weight prediction angle is used as the weight prediction angle of the current block. Based on the instruction information 3, a weight prediction position corresponding to the index value 2 is selected from the weight prediction position list as the weight prediction position of the current block.
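The decoding side of application scenario 2 amounts to two list lookups. The sketch below assumes placeholder list contents ("angle_A", "pos_4", etc., which are illustrative names, not values defined by the patent); the two index values stand in for indication information 2 and indication information 3 parsed from the bitstream.

```python
# Lists mirroring the encoder's weighted prediction angle/position lists.
ANGLE_LIST = ["angle_A", "angle_B"]             # at least one angle
POSITION_LIST = [f"pos_{i}" for i in range(7)]  # positions 0..6

def parse_angle_and_position(index1: int, index2: int):
    """index1 = index value 1 (indication information 2),
    index2 = index value 2 (indication information 3)."""
    return ANGLE_LIST[index1], POSITION_LIST[index2]
```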
Application scenario 3: the encoding side and the decoding side may agree on a preferred configuration combination, which is not limited, and may be configured according to actual experience, for example, agree on a preferred configuration combination 1 including a weight prediction angle a and a weight prediction position 4, and agree on a preferred configuration combination 2 including a weight prediction angle B and a weight prediction position 4.
After the encoding end determines the target weighted prediction angle and the target weighted prediction position of the current block, it determines whether the target weighted prediction angle and the target weighted prediction position form a preferred configuration combination. If so, when the encoding end sends the coded bitstream to the decoding end, the coded bitstream may include indication information 4 and indication information 5. The indication information 4 is used to indicate whether the current block adopts a preferred configuration combination; if the indication information 4 is a first value (e.g., 0), it indicates that the current block adopts a preferred configuration combination. The indication information 5 is used to indicate which preferred configuration combination the current block adopts; for example, when the indication information 5 is 0, it indicates that the current block adopts preferred configuration combination 1, and when the indication information 5 is 1, it indicates that the current block adopts preferred configuration combination 2.
After receiving the coded bit stream, the decoding end analyzes the indication information 4 and the indication information 5 from the coded bit stream, and determines whether the current block adopts the preferred configuration combination or not based on the indication information 4. And if the indication information 4 is the first value, determining that the current block adopts the preferred configuration combination. When the current block adopts the preferred configuration combination, the decoding end determines which preferred configuration combination the current block adopts based on the indication information 5, for example, when the indication information 5 is 0, the decoding end determines that the current block adopts the preferred configuration combination 1, namely, the weighted prediction angle of the current block is the weighted prediction angle A, and the weighted prediction position of the current block is the weighted prediction position 4. For another example, when the instruction information 5 is 1, it is determined that the current block adopts the preferred arrangement combination 2, that is, the weighted prediction angle of the current block is the weighted prediction angle B, and the weighted prediction position of the current block is the weighted prediction position 4.
For example, if the encoding side and the decoding side only agree on a set of preferred configuration combinations, such as the preferred configuration combination including the weighted prediction angle a and the weighted prediction position 4, the encoded bitstream may include the indication information 4 instead of the indication information 5, where the indication information 4 is used to indicate that the current block adopts the preferred configuration combination. After the decoding end analyzes the indication information 4 from the coded bit stream, if the indication information 4 is a first value, the decoding end determines that the current block adopts a preferred configuration combination, determines that the weight prediction angle of the current block is a weight prediction angle A based on the preferred configuration combination, and determines that the weight prediction position of the current block is a weight prediction position 4.
Application scenario 4: the encoding end and the decoding end can agree on a preferred configuration combination, and after the encoding end determines the target weight prediction angle and the target weight prediction position of the current block, the encoding end determines whether the target weight prediction angle and the target weight prediction position are the preferred configuration combination. If not, when the encoding end sends the encoded bit stream to the decoding end, the encoded bit stream includes indication information 4 and indication information 6. The indication information 4 is used to indicate whether the current block adopts the preferred configuration combination, and if the indication information 4 is the second value (e.g., 1), it indicates that the current block does not adopt the preferred configuration combination. The indication information 6 is used to indicate the target weight prediction angle of the current block and the target weight prediction position of the current block. For example, when the indication information 6 is 0, it is used to indicate the first weight prediction angle in the weight prediction angle list, and indicate the first weight prediction position in the weight prediction position list, and so on.
After receiving the coded bit stream, the decoding end analyzes the indication information 4 and the indication information 6 from the coded bit stream, and determines whether the current block adopts the preferred configuration combination or not based on the indication information 4. And if the indication information 4 is the second value, determining that the current block does not adopt the preferred configuration combination. When the current block does not adopt the preferred arrangement combination, the decoding side can select, based on the indication information 6, a weighted prediction angle corresponding to the indication information 6 from a weighted prediction angle list as the weighted prediction angle of the current block, and based on the indication information 6, the decoding side can select, from a weighted prediction position list as the weighted prediction position of the current block, a weighted prediction position corresponding to the indication information 6.
Application scenario 5: the encoding end and the decoding end can agree on a preferred configuration combination, and after the encoding end determines the target weight prediction angle and the target weight prediction position of the current block, the encoding end determines whether the target weight prediction angle and the target weight prediction position are the preferred configuration combination. If not, when the encoding end sends the encoded bit stream to the decoding end, the encoded bit stream includes indication information 4, indication information 7 and indication information 8. Illustratively, the indication information 4 is used to indicate whether the current block adopts the preferred configuration combination, and if the indication information 4 is the second value, it indicates that the current block does not adopt the preferred configuration combination. The indication information 7 is used to indicate a target weighted prediction angle of the current block, such as an index value 1 of the target weighted prediction angle in the weighted prediction angle list, where the index value 1 indicates that the target weighted prediction angle is the several weighted prediction angles in the weighted prediction angle list. The indication information 8 is used to indicate the target weight prediction position of the current block, such as the index value 2 of the target weight prediction position in the weight prediction position list, and the index value 2 indicates that the target weight prediction position is the several weight prediction positions in the weight prediction position list.
After receiving the coded bit stream, the decoding end analyzes the indication information 4, the indication information 7 and the indication information 8 from the coded bit stream, and determines whether the current block adopts the preferred configuration combination or not based on the indication information 4. And if the indication information 4 is the second value, determining that the current block does not adopt the preferred configuration combination. When the current block does not adopt the preferred arrangement combination, the decoding side selects a weighted prediction angle corresponding to the index value 1 from the weighted prediction angle list based on the indication information 7, and the weighted prediction angle is used as the weighted prediction angle of the current block. The decoding side selects a weight prediction position corresponding to the index value 2 from the weight prediction position list based on the instruction information 8, and the weight prediction position is used as the weight prediction position of the current block.
Of course, the above first and second modes are only examples, and are not limited thereto, as long as the decoding end can obtain the weighted prediction angle (i.e. the target weighted prediction angle) of the current block and the weighted prediction position (i.e. the target weighted prediction position) of the current block.
In step 604, the decoding end determines the associated weight value of the pixel position according to the target weight value of the pixel position.
For example, for step 602 to step 607, the implementation process thereof may refer to step 402 to step 407, except that step 602 to step 607 are processing flows of the decoding end, but not processing flows of the encoding end, and are not described herein again.
According to the above technical solution, an effective way of setting weight values is provided in the embodiments of the present application: a reasonable target weight value can be set for each pixel position of the current block, so that prediction accuracy and prediction performance are improved, the predicted value of the current block is closer to the original pixels, and the coding performance is improved.
Example 4: in the above embodiments 1 to 3, the weighted prediction angle is referred to, and the weighted prediction angle may be any angle, such as any angle within 180 degrees, or any angle within 360 degrees, and the weighted prediction angle is not limited, such as 10 degrees, 20 degrees, 30 degrees, and the like. In one possible embodiment, the weighted prediction angle may be a horizontal angle; alternatively, the weighted prediction angle may be a vertical angle; alternatively, the absolute value of the slope of the weighted prediction angle may be an nth power of 2, where n is an integer, such as a positive integer, 0, a negative integer, and the like. For example, the absolute value of the slope of the weighted prediction angle may be 1 (i.e., the 0 th power of 2), 2 (i.e., the 1 st power of 2), 1/2 (i.e., the-1 st power of 2), 4 (i.e., the 2 nd power of 2), 1/4 (i.e., the-2 nd power of 2), 8 (i.e., the 3 rd power of 2), 1/8 (i.e., the-3 rd power of 2), and so on. Illustratively, referring to fig. 7, 8 weighted prediction angles are shown, the absolute value of the slope of which is 2 to the power of n. In the following embodiments, the shift operation may be performed on tan (weight prediction angle), so when the absolute value of the slope of the weight prediction angle is n-th power of 2, when the shift operation is performed on tan (weight prediction angle), division may be avoided, thereby facilitating the shift implementation.
For example, the number of weighted prediction angles supported by different block sizes may be the same or different.
Example 5: in the above embodiments 1 to 3, for each pixel position of the current block, it is necessary to determine a target weight value of the pixel position according to a reference weight value associated with a peripheral matching position to which the pixel position points. In order to obtain the reference weight value associated with the peripheral matching position, in one possible embodiment, the following manner is adopted: and determining a reference weight value associated with the peripheral matching position according to the coordinate value of the peripheral matching position and the coordinate value of the weight prediction position of the current block.
For example, if the peripheral matching position is a peripheral position on the upper row or the lower row outside the current block, the coordinate value of the peripheral matching position is an abscissa value of the peripheral matching position, and the coordinate value of the weight prediction position is an abscissa value of the weight prediction position. Or, if the peripheral matching position is a peripheral position in a left column or a right column outside the current block, the coordinate value of the peripheral matching position is an ordinate value of the peripheral matching position, and the coordinate value of the weighted prediction position is an ordinate value of the weighted prediction position.
For example, the pixel position of the upper left corner of the current block (e.g., the first pixel position of the upper left corner) may be used as the coordinate origin, and the coordinate value of the peripheral matching position (e.g., an abscissa or ordinate value) and the coordinate value of the weight prediction position (e.g., an abscissa or ordinate value) are both coordinate values relative to this origin. Of course, another pixel position of the current block may also be used as the coordinate origin; the implementation is similar to that with the upper-left pixel position as the origin, and is not described again.
In one possible embodiment, when the reference weight value associated with the peripheral matching position is determined based on the coordinate value of the peripheral matching position and the coordinate value of the weight prediction position, the difference between the two coordinate values may be calculated first. If the difference is smaller than a first value, the reference weight value associated with the peripheral matching position is determined to be the first value; if the difference is larger than a second value, the reference weight value is determined to be the second value; and if the difference is neither smaller than the first value nor larger than the second value, the reference weight value is determined to be the difference itself.
In another possible implementation manner, when the reference weight value associated with the peripheral matching position is determined according to the coordinate value of the peripheral matching position and the coordinate value of the weight prediction position, the reference weight value associated with the peripheral matching position may also be directly determined according to the magnitude relationship between the coordinate value of the peripheral matching position and the coordinate value of the weight prediction position.
For example, if the coordinate value of the peripheral matching position is smaller than the coordinate value of the weight prediction position, the reference weight value associated with the peripheral matching position is determined to be a first value; if the coordinate value of the peripheral matching position is not smaller than the coordinate value of the weight prediction position, the reference weight value is determined to be a second value. Alternatively, if the coordinate value of the peripheral matching position is smaller than the coordinate value of the weight prediction position, the reference weight value is determined to be the second value; if it is not smaller, the reference weight value is determined to be the first value.
For example, the first value and the second value may be configured empirically, with the first value smaller than the second value; neither value is limited herein. For example, the first value is the pre-agreed minimum of the reference weight values, such as 0, and the second value is the pre-agreed maximum of the reference weight values, such as 8, although 0 and 8 are also just examples.
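The three branches above amount to clamping the coordinate difference into the range [first value, second value]; a minimal sketch (Python for illustration; the name Clip3 follows the clamping operator used in the formulas of the later application scenarios, and the default bounds 0 and 8 are the example values from the text):

```python
def clip3(lo, hi, v):
    """Limit v to the inclusive range [lo, hi] (the Clip3 operator)."""
    return max(lo, min(hi, v))

def reference_weight(peripheral_coord, weight_pred_coord, first_val=0, second_val=8):
    """Reference weight of a peripheral matching position: the difference
    between its coordinate and the weight prediction position's coordinate,
    clamped to [first_val, second_val]."""
    return clip3(first_val, second_val, peripheral_coord - weight_pred_coord)
```

For example, with the weight prediction position at coordinate 10, a peripheral coordinate of 3 yields weight 0, 14 yields 4, and 25 yields 8.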
The following describes the process of determining the reference weight value in conjunction with several specific application scenarios. Illustratively, assume that the size of the current block is M × N, where M is the width and N is the height of the current block. X is the log2 logarithm of the tan value of the weighted prediction angle, such as 0 or 1. Y is the index value of the weight prediction position; as shown in fig. 5C, the index values of the weight prediction positions range from 0 to 6: an index value of 0 indicates that the weight prediction position is the peripheral position a0, an index value of 1 indicates the peripheral position a1, and so on. a, b, c, and d are preset constant values.
Application scenario 1: the effective number (which may also be referred to as the reference weight effective length, denoted ValidLength) is determined based on the size of the current block and the weighted prediction angle. The coordinate value of the weight prediction position (denoted FirstPos) is determined based on the size of the current block, the weighted prediction angle, and the weight prediction position.
For example, the effective number may be determined by the following formula: ValidLength = (N + (M >> X)) << 1. In the above formula, N and M are the size of the current block, X is determined based on the weighted prediction angle of the current block, >> X denotes a right shift by X bits, and << 1 denotes a left shift by 1 bit. In the following embodiments, >> always denotes a right shift and << a left shift, which will not be repeated.
For example, the coordinate value of the weight prediction position may be determined by the following formula: FirstPos = (ValidLength >> 1) - a + Y * ((ValidLength - 1) >> 3), where ValidLength is determined based on the size of the current block and the weighted prediction angle, and Y is the index value of the weight prediction position; for example, Y is 4 if the weight prediction position of the current block is weight prediction position 4.
Then, for each pixel position of the current block, the reference weight value associated with the peripheral matching position is determined according to the coordinate value of the peripheral matching position to which the pixel position points and the coordinate value of the weight prediction position; this reference weight value is the target weight value of the pixel position. For example, the target weight value of each pixel position of the current block may be derived by the following formula:
SampleWeight[x][y] = Clip3(0, 8, (y << 1) + ((x << 1) >> X) - FirstPos).
In the above formula, [x][y] denotes the coordinates of a pixel position of the current block, and SampleWeight[x][y] denotes the target weight value of pixel position [x][y]. (y << 1) + ((x << 1) >> X) represents the coordinate value of the peripheral matching position to which pixel position [x][y] points (i.e., the peripheral matching position pointed to based on the weighted prediction angle). (y << 1) + ((x << 1) >> X) - FirstPos is thus the difference between the coordinate value of the peripheral matching position and the coordinate value of the weight prediction position. Clip3(0, 8, (y << 1) + ((x << 1) >> X) - FirstPos) is the reference weight value associated with the peripheral matching position, i.e., the target weight value of pixel position [x][y].
Clip3(0, 8, (y << 1) + ((x << 1) >> X) - FirstPos) indicates that the difference between the coordinate value of the peripheral matching position and the coordinate value of the weight prediction position is limited to between 0 and 8, where 0 is the first value and 8 is the second value. For example, if the difference is less than 0, SampleWeight[x][y] is 0; if the difference is greater than 8, SampleWeight[x][y] is 8; otherwise, SampleWeight[x][y] is the difference itself.
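The formulas of application scenario 1 can be collected into a small sketch (Python for illustration only; the constant a is described in the text only as a preset value, so a = 4 here is a hypothetical choice, as are the block size 8 × 8, X = 1, and Y = 4):

```python
def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def sample_weights(M, N, X, Y, a):
    """Target weights per pixel for application scenario 1:
      ValidLength = (N + (M >> X)) << 1
      FirstPos    = (ValidLength >> 1) - a + Y * ((ValidLength - 1) >> 3)
      weight[x][y] = Clip3(0, 8, (y << 1) + ((x << 1) >> X) - FirstPos)
    """
    valid_length = (N + (M >> X)) << 1
    first_pos = (valid_length >> 1) - a + Y * ((valid_length - 1) >> 3)
    weights = [[clip3(0, 8, (y << 1) + ((x << 1) >> X) - first_pos)
                for y in range(N)] for x in range(M)]
    return weights, first_pos

# Hypothetical 8x8 block, X = 1, weight prediction position 4, a = 4:
w, first_pos = sample_weights(M=8, N=8, X=1, Y=4, a=4)
```

With these assumed values, ValidLength is 24 and FirstPos is 16, so the weight at pixel [x][y] is Clip3(0, 8, 2y + x - 16): 0 in the upper-left region, rising toward the lower-right corner.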
Application scenario 2: in this application scenario, the effective number may be determined by the following formula: ValidLength = (N + (M >> X)) << 1. The coordinate value of the weight prediction position may be determined by: FirstPos = (ValidLength >> 1) - b + Y * ((ValidLength - 1) >> 3) - ((M << 1) >> X). The target weight value of each pixel position of the current block may then be derived by: SampleWeight[x][y] = Clip3(0, 8, (y << 1) - ((x << 1) >> X) - FirstPos).
Application scenario 3: in this application scenario, the effective number may be determined by the following formula: ValidLength = (M + (N >> X)) << 1. The coordinate value of the weight prediction position may be determined by: FirstPos = (ValidLength >> 1) - c + Y * ((ValidLength - 1) >> 3) - ((N << 1) >> X). The target weight value of each pixel position of the current block may then be derived by: SampleWeight[x][y] = Clip3(0, 8, (x << 1) - ((y << 1) >> X) - FirstPos).
Application scenario 4: in this application scenario, the effective number may be determined by the following formula: ValidLength = (M + (N >> X)) << 1. The coordinate value of the weight prediction position may be determined by: FirstPos = (ValidLength >> 1) - d + Y * ((ValidLength - 1) >> 3). The target weight value of each pixel position of the current block may then be derived by: SampleWeight[x][y] = Clip3(0, 8, (x << 1) + ((y << 1) >> X) - FirstPos).
For application scenarios 2, 3, and 4, the implementation principle is similar to that of application scenario 1; only the relevant formulas differ, and the description is not repeated here.
Application scenario 5: different from application scenario 1, in this application scenario two weight prediction positions of the current block need to be obtained. For the encoding end, the two weight prediction positions of the current block are determined based on the rate-distortion cost values corresponding to the weight prediction positions, and the encoded bitstream carries indication information of the two weight prediction positions. For the decoding end, the two weight prediction positions are determined according to the indication information. The manner of obtaining two weight prediction positions is similar to that of obtaining one weight prediction position, and is not repeated. Let the two weight prediction positions be weight prediction position a and weight prediction position b, respectively, where Y1 is the index value of weight prediction position a and Y2 is the index value of weight prediction position b.
The effective number (denoted ValidLength) is determined based on the size of the current block and the weighted prediction angle. For example, the effective number may be determined by the following formula: ValidLength = (N + (M >> X)) << 1. The formula for the effective number may also be replaced by the formulas in application scenarios 2-4, which are not described again.
The coordinate value of weight prediction position a (denoted FirstPos_a) is determined based on the size of the current block, the weighted prediction angle, and weight prediction position a; the coordinate value of weight prediction position b (denoted FirstPos_b) is determined based on the size of the current block, the weighted prediction angle, and weight prediction position b.
For example, similar to application scenario 1, in this application scenario the coordinate value of weight prediction position a may be determined by: FirstPos_a = (ValidLength >> 1) - a + Y1 * ((ValidLength - 1) >> 3). The coordinate value of weight prediction position b may be determined by: FirstPos_b = (ValidLength >> 1) - a + Y2 * ((ValidLength - 1) >> 3).
For example, the formula for determining the coordinate value of the weight prediction position may also be replaced by the formulas in application scenarios 2-4, except that the index value of the weight prediction position is replaced by Y1 or Y2; this is not described again.
Then, for each pixel position of the current block, the reference weight value associated with the peripheral matching position is determined according to the coordinate value of the peripheral matching position to which the pixel position points, the coordinate value FirstPos_a of weight prediction position a, and the coordinate value FirstPos_b of weight prediction position b; this reference weight value is the target weight value of the pixel position.
For example, assuming FirstPos_a is smaller than FirstPos_b, and letting FirstPos_c = (FirstPos_a + FirstPos_b) / 2: if the coordinate value of the peripheral matching position is smaller than FirstPos_c, then SampleWeight[x][y] = Clip3(0, 8, (y << 1) + ((x << 1) >> X) - FirstPos_a); in this case, the reference weight value of the peripheral position increases from 0 to 8. Otherwise, if the coordinate value of the peripheral matching position is not less than FirstPos_c, then SampleWeight[x][y] = Clip3(0, 8, FirstPos_b - ((y << 1) + ((x << 1) >> X))); in this case, the reference weight value of the peripheral position decreases from 8 to 0.
In the above formulas, SampleWeight[x][y] represents the target weight value of pixel position [x][y], and (y << 1) + ((x << 1) >> X) represents the coordinate value of the peripheral matching position to which pixel position [x][y] points.
Based on application scenario 5, the following effects can be achieved: the reference weight values of the peripheral positions of the current block first increase from 0 to 8 and then decrease from 8 to 0; alternatively, they first decrease from 8 to 0 and then increase from 0 to 8. For example, the reference weight values of the peripheral positions of the current block may in turn be [0...0, 1, 2, 3, 4, 5, 6, 7, 8...8, 7, 6, 5, 4, 3, 2, 1, 0...0], or [8...8, 7, 6, 5, 4, 3, 2, 1, 0...0, 1, 2, 3, 4, 5, 6, 7, 8...8], or [0...0, 2, 4, 6, 8...8, 6, 4, 2, 0...0], or [8...8, 6, 4, 2, 0...0, 2, 4, 6, 8...8], and the like; the reference weight values are not limited to these. Illustratively, the blending regions of reference weight values around weight prediction position a and weight prediction position b do not overlap.
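The two-branch rule above can be sketched over a one-dimensional run of peripheral coordinates (Python for illustration only; the values FirstPos_a = 4 and FirstPos_b = 20 are hypothetical, chosen so the ramp 0 → 8 → 0 is visible):

```python
def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def two_position_weight(coord, firstpos_a, firstpos_b):
    """Reference weight that ramps up 0->8 around firstpos_a and back
    down 8->0 around firstpos_b (firstpos_a < firstpos_b assumed)."""
    firstpos_c = (firstpos_a + firstpos_b) // 2
    if coord < firstpos_c:
        return clip3(0, 8, coord - firstpos_a)      # increasing branch
    return clip3(0, 8, firstpos_b - coord)          # decreasing branch

# Hypothetical effective length 24, FirstPos_a = 4, FirstPos_b = 20:
weights = [two_position_weight(c, 4, 20) for c in range(24)]
```

The resulting sequence rises from 0 to 8 and falls back to 0, matching the [0...0, 1, 2, ..., 8, ..., 2, 1, 0...0] pattern described above.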
Example 6: in embodiments 1 to 3 above, for each pixel position of the current block, the target weight value of the pixel position is determined according to the reference weight value associated with the peripheral matching position to which the pixel position points. To obtain the reference weight value associated with the peripheral matching position, in one possible embodiment, the following manner may be adopted: a reference weight value list of the current block is determined, where the list may include a plurality of reference weight values that are configured in advance or configured according to weight configuration parameters. An effective number of reference weight values is selected from the reference weight value list according to a target index, and the reference weight values of the peripheral positions outside the current block are set according to the selected reference weight values. For example, the effective number may be determined based on the size of the current block and the weighted prediction angle; the target index may be determined based on the size of the current block, the weighted prediction angle, and the weight prediction position of the current block.
In summary, since the reference weight value has been set for the peripheral positions outside the current block, that is, each peripheral position has the reference weight value, after the peripheral matching position pointed by the pixel position is determined from the peripheral positions outside the current block, the reference weight value associated with the peripheral matching position, that is, the target weight value of the pixel position, can be determined.
The following describes the above-described process of setting the reference weight values of the peripheral positions with reference to specific implementation steps.
Step S1, determining a reference weight value list of the current block.
In one possible embodiment, a sequence-level reference weight value list may be determined as the reference weight value list of the current block. For example, both the encoding end and the decoding end are configured with a sequence-level reference weight value list A1; all images of the sequence use the reference weight value list A1, that is, every current block of those images shares the same reference weight value list A1 regardless of its weighted prediction angle and weight prediction position. Based on this, the sequence-level reference weight value list A1 may be determined as the reference weight value list of the current block.
In another possible embodiment, a preset reference weight value list may be determined as the reference weight value list of the current block. For example, a reference weight value list is preset at both the encoding end and the decoding end, and is used for all the images of the plurality of sequences, that is, each block of all the images of the plurality of sequences shares the reference weight value list no matter what the weight prediction angle and the weight prediction position are. Obviously, the range of use of the reference weight value list is larger than that of the reference weight value list at the sequence level. Based on this, a preset reference weight value list may be determined as the reference weight value list of the current block.
In another possible implementation, the reference weight value list corresponding to the weighted prediction angle may be determined as the reference weight value list of the current block. For example, both the encoding end and the decoding end are configured with a plurality of reference weight value lists, and multiple weighted prediction angles may share the same list. For example, a reference weight value list A2 and a reference weight value list A3 are configured: weighted prediction angle 1 and weighted prediction angle 2 share the same list A2, while weighted prediction angle 3 uses list A3. Based on this, after the weighted prediction angle of the current block is obtained, if it is weighted prediction angle 1, the reference weight value list A2 corresponding to weighted prediction angle 1 is determined as the reference weight value list of the current block.
In another possible implementation manner, a reference weight value list corresponding to the weighted prediction angle and the weighted prediction position may be determined as the reference weight value list of the current block. For example, the encoding side and the decoding side are both configured with a plurality of reference weight value lists, and the same or different reference weight value lists may be associated with different combinations of weight prediction angles and weight prediction positions.
For example, a reference weight value list A4, a reference weight value list A5, and a reference weight value list A6 are configured: weighted prediction angle 1 with weight prediction positions 0-2 shares list A4, weighted prediction angle 1 with weight prediction positions 3-5 shares list A5, and weighted prediction angle 2 with weight prediction positions 0-5 shares list A6. Based on this, after the weighted prediction angle and weight prediction position of the current block are obtained, if the weighted prediction angle is weighted prediction angle 1 and the weight prediction position is weight prediction position 4, the reference weight value list A5 corresponding to that combination is determined as the reference weight value list of the current block.
In another possible embodiment, the reference weight value list corresponding to the size of the current block and the weighted prediction angle may be determined as the reference weight value list of the current block. For example, both the encoding end and the decoding end are configured with a plurality of reference weight value lists, and different combinations of size and weighted prediction angle may correspond to the same or different lists.
For example, a reference weight value list A7 and a reference weight value list A8 are configured: list A7 is used for weighted prediction angle 1 with size 1, and list A8 is used for weighted prediction angle 1 with size 2 and for weighted prediction angle 2 with size 1. Based on this, if the weighted prediction angle is weighted prediction angle 1 and the size of the current block is size 1, the reference weight value list A7 corresponding to this combination is determined as the reference weight value list of the current block.
For example, each effective number may correspond to a reference weight value list, such as effective number s1 corresponding to reference weight value list A7 and effective number s2 corresponding to reference weight value list A8. Based on this, the effective number may be determined from the weighted prediction angle of the current block and the size of the current block, and the reference weight value list corresponding to that effective number may be determined as the reference weight value list of the current block. For example, the effective number may be determined as follows: ValidLength = (N + (M >> X)) << 1, where N and M are the height and width of the current block, respectively, and X is the log2 logarithm of the tan value of the weighted prediction angle of the current block, such as 0 or 1.
In addition, the effective numbers may be classified after being quantized, for example, the effective numbers 30, 31, 32 may be quantized to 32, so as to correspond to the same reference weight value list.
In summary, a reference weight value list of the current block may be determined, and the reference weight value list may include a plurality of reference weight values, and the plurality of reference weight values in the reference weight value list are configured in advance or according to the weight configuration parameter.
Regarding the reference weight value list of the current block, the number of reference weight values in the list may be a set fixed value, which may be configured empirically and is not limited herein. Alternatively, the number of reference weight values may be related to the size of the current frame in which the current block is located (e.g., the width or height of the current frame): for example, the number of reference weight values may be larger than or equal to the width of the current frame, or larger than or equal to the height of the current frame. This is not limited, and the number of reference weight values may be selected according to actual needs.
For example, the plurality of reference weight values in the reference weight value list may be non-uniform, that is, the reference weight values in the list are not all identical.
In one possible implementation, the plurality of reference weight values in the reference weight value list may be monotonically increasing or monotonically decreasing. Alternatively, they may first monotonically increase and then monotonically decrease, or first monotonically decrease and then monotonically increase. Alternatively, the plurality of reference weight values may comprise a plurality of first values followed by a plurality of second values, or a plurality of second values followed by a plurality of first values. The reference weight value list is described below with reference to several specific cases.
Case 1: the plurality of reference weight values in the reference weight value list may be monotonically increasing or monotonically decreasing. For example, the reference weight value list is [8...8, 8, 7, 6, 5, 4, 3, 2, 1, 0, 0...0], that is, the reference weight values in the list monotonically decrease. For another example, the reference weight value list is [0...0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 8...8], that is, the reference weight values in the list monotonically increase. Of course, the above are merely examples, and the reference weight value list is not limited thereto.
For example, the reference weight values in the reference weight value list may be configured in advance, or configured according to the weight configuration parameter. The weight configuration parameters may include a weight conversion rate, which may be an empirically configured value, and a start position of the weight conversion, which may also be an empirically configured value.
The plurality of reference weight values in the reference weight value list may be monotonically increasing or monotonically decreasing. For example, if the maximum of the reference weight values is M1 and the minimum is M2, the reference weight values in the list monotonically decrease from the maximum M1 to the minimum M2, or monotonically increase from the minimum M2 to the maximum M1. Assuming M1 is 8 and M2 is 0, the reference weight values may monotonically decrease from 8 to 0, or monotonically increase from 0 to 8.
For example, for the process of pre-configuring the reference weight values in the reference weight value list, a plurality of reference weight values in the reference weight value list may be arbitrarily configured, as long as the plurality of reference weight values monotonically increases or monotonically decreases.
For example, for the process of configuring the reference weight values in the reference weight value list according to the weight configuration parameters, the weight transformation rate and the start position of the weight transformation may be obtained first, and then the plurality of reference weight values in the list may be determined according to the weight transformation rate and the start position of the weight transformation. The weight transformation rate and the start position of the weight transformation may be preset values; neither is limited herein, and both may be configured empirically.
For example, the reference weight values in the reference weight value list may be determined as follows: y = Clip3(minimum, maximum, a * (x - s)), where x represents the position index in the reference weight value list (for example, x = 1 represents the 1st position in the list) and y represents the corresponding reference weight value (for example, the 1st reference weight value in the list). a denotes the weight transformation rate, and s denotes the start position of the weight transformation. Clip3 limits the reference weight value to between the minimum and maximum values, both of which can be configured empirically; for convenience of description, the minimum value 0 and the maximum value 8 are used as an example.
a represents the weight transformation rate and can be configured empirically; for example, a can be any integer other than 0, such as -4, -3, -2, -1, 1, 2, 3, 4, and so on. For convenience of description, a = 1 is taken as an example. If a is 1, the reference weight values pass through 0, 1, 2, 3, 4, 5, 6, 7, 8 when going from 0 to 8, or through 8, 7, 6, 5, 4, 3, 2, 1, 0 when going from 8 to 0.
s represents the starting position of the weight transformation, which may be configured empirically, e.g., s may be half the total number of reference weight values in the reference weight value list; alternatively, s may be slightly less than half the total number of reference weight values, such as half the total number of reference weight values minus 4; alternatively, s may be slightly greater than half the total number of reference weight values, such as half the total number of reference weight values plus 4. Of course, the above are only a few examples of the value of s, and the value of s is not limited.
In summary, when the reference weight values in the reference weight value list are configured according to the weight configuration parameters, the following manners may be adopted: ReferenceWeightsWhole[x] = Clip3(0, 8, x - Z); or ReferenceWeightsWhole[x] = Clip3(0, 8, Z - x); or ReferenceWeightsWhole[x] = Clip3(0, 4, x - Z); or ReferenceWeightsWhole[x] = Clip3(0, 4, Z - x). Of course, the above manners are just a few examples, and the implementation is not limited to them.
In the above formulas, the value of x ranges from 0 to WholeLength - 1: when x is 1, ReferenceWeightsWhole[x] represents the 1st reference weight value in the reference weight value list; when x is 2, the 2nd reference weight value; and so on. Illustratively, WholeLength is determined based on the width of the current frame if the peripheral positions outside the current block are in the upper or lower row, and based on the height of the current frame if the peripheral positions are in the left or right column.
In the formula a * (x - s): if a is 1, then a * (x - s) = x - s, i.e., x - Z in Clip3(0, 8, x - Z) is equivalent to x - s, where Z indicates the start position of the weight transformation. If a is -1, then a * (x - s) = s - x, i.e., Z - x in Clip3(0, 8, Z - x) is equivalent to s - x, where Z again indicates the start position of the weight transformation. For other values of a, the implementation is similar, as long as the reference weight values in the list satisfy y = Clip3(minimum, maximum, a * (x - s)). Clip3(0, 8, ...) limits the reference weight value to between 0 and 8, and Clip3(0, 4, ...) limits it to between 0 and 4.
In the above formulas, Z represents the starting position of the weight transformation and may be configured empirically. For example, assume x ranges from 0 to 511 and Z is 255. Substituting Z into ReferenceWeightsWhole[x] = Clip3(0, 8, x - Z) yields ReferenceWeightsWhole[x] for every value of x from 0 to 511, i.e., 512 reference weight values, which constitute the reference weight value list. Specifically, when x is 0 to 255 the reference weight value is 0, when x is 256 it is 1, and so on; when x is 262 it is 7, and when x is 263 to 511 it is 8. In summary, when a is 1, Clip3(0, 8, x - Z) makes the reference weight values monotonically increasing. Similarly, substituting Z into the other formulas also yields 512 reference weight values forming a reference weight value list; for example, when a is -1, Clip3(0, 8, Z - x) makes the reference weight values monotonically decreasing. Clip3(0, 4, …) limits the reference weight values to between 0 and 4, and the description is not repeated.
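The Clip3-based construction above can be sketched concretely as follows (assuming, as in the example, 512 list positions and a weight transformation starting position Z = 255; `clip3` is a small helper defined here, not a library function):

```python
def clip3(lo, hi, v):
    # Clamp v into [lo, hi], mirroring the Clip3 operator used in the formulas.
    return max(lo, min(hi, v))

WHOLE_LENGTH = 512  # assumed total number of reference weight values
Z = 255             # assumed starting position of the weight transformation

# a = 1: ReferenceWeightsWhole[x] = Clip3(0, 8, x - Z) -> monotonically increasing
increasing = [clip3(0, 8, x - Z) for x in range(WHOLE_LENGTH)]

# a = -1: ReferenceWeightsWhole[x] = Clip3(0, 8, Z - x) -> monotonically decreasing
decreasing = [clip3(0, 8, Z - x) for x in range(WHOLE_LENGTH)]

print(increasing[255], increasing[256], increasing[262], increasing[263])
```

Under these assumptions the printed values are 0 1 7 8, matching the example in the text (x = 0 to 255 gives 0, x = 256 gives 1, x = 262 gives 7, x = 263 to 511 gives 8).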
In summary, a reference weight value list of the current block may be obtained, and the reference weight value list may include a plurality of reference weight values, and the plurality of reference weight values in the reference weight value list may be monotonically increasing or monotonically decreasing. In one possible implementation, for the reference weight value list, the reference weight value list may further include a reference weight value of the target area, a reference weight value of a first neighboring area of the target area, and a reference weight value of a second neighboring area of the target area.
The target region includes one or more reference weight values determined based on the starting position of the weight transformation. For example, one reference weight value is determined based on the starting position of the weight transformation and used as the target region: if the starting position s of the weight transformation is 255, the 259th reference weight value may be used as the target region, or the 258th reference weight value, or the 260th reference weight value. For another example, a plurality of reference weight values are determined based on the starting position of the weight transformation and used as the target region, such as the 256th to 262nd reference weight values.
For example, the target region may include a reference weight value of 4, e.g., the 259th reference weight value is 4. Thus, if the target region includes one reference weight value, it may include the 259th reference weight value; if the target region includes a plurality of reference weight values, it may include the 256th to 262nd reference weight values, or the 258th to 260th reference weight values. This is not limited, as long as the 259th reference weight value is in the target region.
In summary, the target region may include one reference weight value, or the target region may include a plurality of reference weight values. If the target region includes a plurality of reference weight values, they are monotonically increasing or monotonically decreasing. The monotonic increase may be a strictly monotonic increase (i.e., the reference weight values of the target region are strictly increasing); the monotonic decrease may be a strictly monotonic decrease (i.e., strictly decreasing). For example, the reference weight values of the target region monotonically increase from 1 to 7, or monotonically decrease from 7 to 1.
For example, the reference weight values of the first neighboring region are all a first reference weight value, and the reference weight values of the second neighboring region are monotonically increasing or monotonically decreasing. For instance, the reference weight values of the first neighboring region are all 0, the target region includes one reference weight value equal to 1, and the reference weight values of the second neighboring region monotonically increase from 2 to 8.
Alternatively, the reference weight values of the first neighboring region are all a second reference weight value, the reference weight values of the second neighboring region are all a third reference weight value, and the second reference weight value differs from the third reference weight value. For example, the reference weight values of the first neighboring region are all 0, the target region includes a plurality of reference weight values monotonically increasing from 1 to 7, and the reference weight values of the second neighboring region are all 8; obviously, the reference weight values of the first neighboring region differ from those of the second neighboring region.
Or, the reference weight value of the first adjacent region is monotonically increased or monotonically decreased, and the reference weight value of the second adjacent region is monotonically increased or monotonically decreased; for example, the reference weight value of the first neighboring region monotonically increases, and the reference weight value of the second neighboring region also monotonically increases; for another example, the reference weight value of the first neighboring region monotonically decreases, and the reference weight value of the second neighboring region monotonically decreases. For example, the reference weight value of the first neighboring region monotonically increases from 0-3, the target region includes one reference weight value, the reference weight value is 4, and the reference weight value of the second neighboring region monotonically increases from 5-8.
Case 2: the plurality of reference weight values in the reference weight value list first monotonically increase and then monotonically decrease, or first monotonically decrease and then monotonically increase. For example, the reference weight value list is [8 8 … 8 8 7 6 5 4 3 2 1 0 0 … 0 0 1 2 3 4 5 6 7 8 8 … 8 8], that is, the reference weight values monotonically decrease and then monotonically increase. For another example, the reference weight value list is [0 0 … 0 0 1 2 3 4 5 6 7 8 8 … 8 8 7 6 5 4 3 2 1 0 0 … 0 0], that is, the reference weight values monotonically increase and then monotonically decrease. Of course, these are merely examples, and the reference weight value list is not limited to them.
For example, the reference weight values in the reference weight value list may be configured in advance, or configured according to the weight configuration parameter. The weight configuration parameters may include a weight conversion rate, which may be an empirically configured value, and a start position of the weight conversion, which may also be an empirically configured value.
For example, assuming that the maximum value of the reference weight values is M1 and the minimum value of the reference weight values is M2, the reference weight values in the reference weight value list monotonically decrease from the maximum value M1 to the minimum value M2 and then monotonically increase from the minimum value M2 to the maximum value M1. Alternatively, it monotonically increases from the minimum value M2 to the maximum value M1, and then monotonically decreases from the maximum value M1 to the minimum value M2. Assuming that M1 is 8 and M2 is 0, the reference weight values may monotonically decrease from 8 to 0 and then monotonically increase from 0 to 8; alternatively, the plurality of reference weight values monotonically increases from 0 to 8 and monotonically decreases from 8 to 0.
For example, in the process of pre-configuring the reference weight values in the reference weight value list, a plurality of reference weight values in the reference weight value list may be configured arbitrarily, as long as the plurality of reference weight values are monotonically increased and then monotonically decreased, or the plurality of reference weight values are monotonically decreased and then monotonically increased, and the plurality of reference weight values are not limited.
For example, for the process of configuring the reference weight values in the reference weight value list according to the weight configuration parameter, a first weight transformation rate, a second weight transformation rate, a starting position of the first weight transformation, and a starting position of the second weight transformation may be obtained first, and then a plurality of reference weight values in the reference weight value list may be determined according to the first weight transformation rate, the second weight transformation rate, the starting position of the first weight transformation, and the starting position of the second weight transformation. The first weight transformation rate, the second weight transformation rate, the initial position of the first weight transformation and the initial position of the second weight transformation can be preset values, and the first weight transformation rate, the second weight transformation rate, the initial position of the first weight transformation and the initial position of the second weight transformation are not limited.
For example, the reference weight values in the reference weight value list may be determined as follows: when x lies in [0, k], y = Clip3(minimum, maximum, a1 × (x - s1)); when x lies in [k + 1, t], y = Clip3(minimum, maximum, a2 × (x - s2)). Here x denotes a position index in the reference weight value list; if x is 1, it denotes the 1st position in the list, and y denotes the 1st reference weight value. k is a value configured empirically and is not limited; for example, k may be half of the total number of reference weight values in the list, or another value, as long as k is less than t, where t is the total number of reference weight values in the list. a1 denotes the first weight transformation rate and a2 denotes the second weight transformation rate; s1 denotes the starting position of the first weight transformation, and s2 denotes the starting position of the second weight transformation.
Clip3 is used to limit the reference weight value between the minimum value and the maximum value, both of which can be configured empirically, and for convenience of description, the minimum value is 0 and the maximum value is 8 as an example.
a1 and a2 both represent weight transformation rates and may be configured empirically; e.g., a1 may be a non-zero integer such as -4, -3, -2, -1, 2, 3, 4, etc., and a2 may likewise be a non-zero integer such as -4, -3, -2, -1, 2, 3, 4, etc. For example, when a1 is a positive integer, a2 may be a negative integer, and when a1 is a negative integer, a2 may be a positive integer. In particular, a1 may equal -a2, i.e., the two rates of change are consistent, which is reflected in the setting of the reference weight values as gradients of equal width. For convenience of description, take a1 = 1 and a2 = -1: the reference weight values go from 0 to 8 through 0, 1, 2, 3, 4, 5, 6, 7, 8 and then from 8 to 0 through 8, 7, 6, 5, 4, 3, 2, 1, 0; alternatively, they go from 8 to 0 through 8, 7, 6, 5, 4, 3, 2, 1, 0 and then from 0 to 8 through 0, 1, 2, 3, 4, 5, 6, 7, 8. Taking a1 = 2 and a2 = -2, the reference weight values go from 0 to 8 through 0, 2, 4, 6, 8 and then from 8 to 0 through 8, 6, 4, 2, 0; alternatively, they go from 8 to 0 through 8, 6, 4, 2, 0 and then from 0 to 8 through 0, 2, 4, 6, 8.
s1 and s2 each represent a starting position of weight transformation and may be configured empirically. For example, s1 is the starting position of the weight transformation for the reference weight values of the interval [0, k]; s1 may be half of k, slightly less than half of k (such as half of k minus 4), or slightly greater than half of k (such as half of k plus 4). Of course, these are only a few examples, and the value of s1 is not limited. s2 is the starting position of the weight transformation for the reference weight values of the interval [k + 1, t]; s2 may be half of q (where q is the difference between t and k + 1), slightly less than half of q (such as half of q minus 4), or slightly greater than half of q (such as half of q plus 4). Again, these are only a few examples, and the value of s2 is not limited.
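The piecewise formula of Case 2 can be sketched as follows. All parameter values here are hypothetical (t = 512, k = 255, a1 = 1, a2 = -1), and s1 and s2 are taken as absolute positions in the list, which is one of several plausible readings of the formula:

```python
def clip3(lo, hi, v):
    # Clamp v into [lo, hi], mirroring the Clip3 operator in the text.
    return max(lo, min(hi, v))

# Hypothetical parameters: split point k, total length t, and the two
# weight transformation starting positions taken as absolute list indices.
t, k = 512, 255
a1, s1 = 1, 127    # increasing ramp inside [0, k]
a2, s2 = -1, 384   # decreasing ramp inside [k + 1, t - 1]

weights = []
for x in range(t):
    if x <= k:
        weights.append(clip3(0, 8, a1 * (x - s1)))  # rises 0 -> 8 around s1
    else:
        weights.append(clip3(0, 8, a2 * (x - s2)))  # falls 8 -> 0 around s2
```

With these assumed values the list rises from 0 to 8 around position 127 and falls from 8 back to 0 around position 384, i.e., the reference weight values first monotonically increase and then monotonically decrease.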
In summary, a reference weight value list of the current block may be obtained, and the reference weight value list may include a plurality of reference weight values, and the plurality of reference weight values in the reference weight value list may be monotonically increased and then monotonically decreased, or the plurality of reference weight values may be monotonically decreased and then monotonically increased. In one possible implementation, for the reference weight value list, the reference weight value list may further include a reference weight value of a first target area, a reference weight value of a second target area, a reference weight value of a first neighboring area adjacent to only the first target area, a reference weight value of a second neighboring area adjacent to both the first target area and the second target area, and a reference weight value of a third neighboring area adjacent to only the second target area.
The first target region includes one or more reference weight values determined based on a start position of the first weight transform. For example, based on the start position of the first weight transformation, a reference weight value is determined, and this reference weight value is taken as the first target region. Or, based on the start position of the first weight transformation, determining a plurality of reference weight values, and taking the plurality of reference weight values as the first target area. If the first target area includes a plurality of reference weight values, the plurality of reference weight values of the first target area are monotonically increased or monotonically decreased. The monotonic increase may be a strictly monotonic increase (the plurality of reference weight values of the first target region strictly monotonic increase); the monotonic decrease may be a strictly monotonic decrease (the plurality of reference weight values of the first target region being strictly monotonic decrease).
The second target region includes one or more reference weight values determined based on a start position of the second weight transform. For example, a reference weight value is determined based on the start position of the second weight transform, and the reference weight value is taken as the second target region. Or, based on the start position of the second weight transformation, determining a plurality of reference weight values, and regarding the plurality of reference weight values as the second target region. If the second target area includes a plurality of reference weight values, the plurality of reference weight values of the second target area are monotonically increased or monotonically decreased. The monotonic increase may be a strictly monotonic increase (the plurality of reference weight values of the second target region being strictly monotonic increases); the monotonic decrease may be a strictly monotonic decrease (the plurality of reference weight values of the second target region being strictly monotonic decrease).
If the reference weight values of the first target region are monotonically increasing (e.g., strictly monotonically increasing), the reference weight values of the second target region are monotonically decreasing (e.g., strictly monotonically decreasing). Alternatively, if the reference weight values of the first target region decrease monotonically (e.g., strictly monotonically), the reference weight values of the second target region increase monotonically (e.g., strictly monotonically).
For example, the reference weight values of the first neighboring region are all first reference weight values; the reference weight values of the second adjacent area are second reference weight values; the reference weight values of the third neighboring region are all third reference weight values. The first reference weight value and the third reference weight value may be the same, the first reference weight value and the second reference weight value may be different, and the third reference weight value and the second reference weight value may be different. For example, the reference weight values of the first neighboring region are all 0, the reference weight values of the second neighboring region are all 8, and the reference weight values of the third neighboring region are all 0; or, the reference weight values of the first neighboring region are all 8, the reference weight values of the second neighboring region are all 0, and the reference weight values of the third neighboring region are all 8.
Or, the reference weight values of the first adjacent region are all first reference weight values; the reference weight value of the second adjacent area is monotonically decreased; the reference weight value of the third neighboring area monotonically increases. For example, the reference weight values of the first neighboring region are all 8, the first target region includes a reference weight value of 7, the reference weight values of the second neighboring region monotonically decrease from 6 to 0, the second target region includes a reference weight value of 1, and the reference weight values of the third neighboring region monotonically increase from 2 to 8.
Or, the reference weight value of the first neighboring region monotonically decreases; the reference weight value of the second adjacent area is monotonically decreased and then monotonically increased; the reference weight value of the third neighboring area monotonically increases. For example, the reference weight value of the first neighboring region monotonically decreases from 8 to 5, the first target region includes a reference weight value of 4, the reference weight value of the second neighboring region monotonically decreases from 3 to 0 and then monotonically increases from 0 to 3, the second target region includes a reference weight value of 4, and the reference weight value of the third neighboring region monotonically increases from 5 to 8.
Or, the reference weight values of the first neighboring region monotonically decrease; the reference weight values of the second neighboring region monotonically increase; and the reference weight values of the third neighboring region are all a third reference weight value. For example, the reference weight values of the first neighboring region monotonically decrease from 8 to 1, the first target region includes a reference weight value of 0, the reference weight values of the second neighboring region monotonically increase from 0 to 7, the second target region includes a reference weight value of 8, and the reference weight values of the third neighboring region are all 8.
Or, the reference weight values of the first adjacent region are all first reference weight values; the reference weight value of the second neighboring area monotonically increases; the reference weight value of the third neighboring area monotonically decreases. For example, the reference weight values of the first neighboring region are all 0, the first target region includes a reference weight value of 1, the reference weight values of the second neighboring region monotonically increase from 1 to 8, the second target region includes a reference weight value of 7, and the reference weight values of the third neighboring region monotonically decrease from 7 to 0.
Or, the reference weight value of the first neighboring region monotonically increases; the reference weight value of the second adjacent area is increased monotonically and then decreased monotonically; the reference weight value of the third neighboring area monotonically decreases. For example, the reference weight value of the first neighboring region monotonically increases from 0 to 3, the first target region includes a reference weight value of 4, the reference weight value of the second neighboring region monotonically increases from 5 to 8 and then monotonically decreases from 8 to 5, the second target region includes a reference weight value of 4, and the reference weight value of the third neighboring region monotonically decreases from 4 to 0.
Or, the reference weight value of the first neighboring region monotonically increases; the reference weight value of the second adjacent area is monotonically decreased; the reference weight values of the third neighboring region are all third reference weight values. For example, the reference weight value of the first neighboring region monotonically increases from 0 to 7, the first target region includes a reference weight value of 8, the reference weight value of the second neighboring region monotonically decreases from 7 to 0, the second target region includes a reference weight value of 0, and the reference weight values of the third neighboring regions are all 0.
Of course, the above are only a few examples, and no limitation is made to this, as long as the multiple reference weight values in the reference weight value list satisfy the following requirements: increasing from 0 to 8, and then decreasing from 8 to 0; alternatively, the value is decreased from 8 to 0 and then increased from 0 to 8.
Case 3: the plurality of reference weight values in the reference weight value list may consist of a plurality of first values followed by a plurality of second values, or of a plurality of second values followed by a plurality of first values. For example, the reference weight value list may be [8 8 … 8 8 0 0 … 0 0], or the reference weight value list may be [0 0 … 0 0 8 8 … 8 8].
For example, the reference weight values in the reference weight value list may be configured in advance, or configured according to a weight configuration parameter, the weight configuration parameter may include a start position of the weight transformation, and the start position of the weight transformation may be a value configured according to experience. For the process of pre-configuring the reference weight values in the reference weight value list, a plurality of reference weight values in the reference weight value list may be configured arbitrarily as long as the plurality of reference weight values include only the first numerical value and the second numerical value.
For the process of configuring the reference weight values in the reference weight value list according to the weight configuration parameters, the initial position of the weight transformation may be obtained first, and then the plurality of reference weight values in the reference weight value list may be determined according to the initial position of the weight transformation. For example, the start position of the weight transform represents the s-th reference weight value in the reference weight value list, and thus, all reference weight values before (excluding) the s-th reference weight value are a first value (e.g., 8), and all reference weight values after (including) the s-th reference weight value are a second value (e.g., 0). Alternatively, all reference weight values before (excluding) the s-th reference weight value are a second numerical value (e.g., 0), and all reference weight values after (including) the s-th reference weight value are a first numerical value (e.g., 8).
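A minimal sketch of this Case 3 construction, using a 0-based index for the starting position s and the two constant values 8 and 0 from the example above (the function name is illustrative):

```python
def case3_weights(length, s, first=8, second=0):
    # All positions before index s take the first value; positions from s
    # onward take the second value, giving the step-shaped lists above.
    return [first if x < s else second for x in range(length)]

w = case3_weights(512, 255)            # [8, 8, ..., 8, 0, 0, ..., 0]
w_rev = case3_weights(512, 255, 0, 8)  # [0, 0, ..., 0, 8, 8, ..., 8]
```

Swapping the two constants, as in `w_rev`, produces the alternative ordering in which the second value precedes the first.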
Based on the above several cases, the reference weight value list of the current block can be obtained, for convenience of description, in the following embodiments, the reference weight value list in case 1 is taken as an example for description, and the implementation process of the reference weight value lists in other cases is similar.
Step S2, the effective number is determined based on the size of the current block and the weighted prediction angle of the current block.
For example, the effective number means that an effective number of peripheral positions exist outside the current block: each pixel position inside the current block points only to one of these peripheral positions, so reference weight values need to be set only for the effective number of peripheral positions in order to obtain the target weight value of each pixel position inside the current block.
See subsequent embodiments regarding determining the effective number based on the size of the current block and the weighted prediction angle of the current block.
Step S3, determining a target index based on the size, weighted prediction angle and weighted prediction position of the current block.
For example, the target index refers to a position in the reference weight value list; when the target index is 259, it represents the 259th reference weight value in the reference weight value list.
Regarding determining the target index based on the size, weighted prediction angle and weighted prediction position of the current block, see the subsequent embodiments.
Step S4, selecting a valid number of reference weight values from the reference weight value list according to the target index.
For example, assume the target index is q1 and the effective number is r. If the target index marks the first reference weight value to be selected from the reference weight value list, the q1-th to q2-th reference weight values are selected, where the difference between q2 and q1 is r, so that r reference weight values are selected from the list. Alternatively, if the target index marks the last reference weight value to be selected, the q3-th to q1-th reference weight values are selected, where the difference between q1 and q3 is r, so that r reference weight values are selected from the list.
Of course, the above is only an example; the target index may also mark a middle position among the values to be selected from the reference weight value list, and the implementation is similar and not repeated here. In the following, the target index is treated as the first reference weight value to be selected, i.e., the q1-th to q2-th reference weight values are selected.
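The selection in step S4 under the "target index marks the first selected value" convention can be sketched as a simple slice (the list contents and indices here are illustrative, using 0-based indexing for simplicity):

```python
def select_weights(ref_list, target_index, valid_number):
    # Take valid_number consecutive reference weight values starting at
    # target_index (0-based here for simplicity).
    return ref_list[target_index:target_index + valid_number]

# Toy monotonically increasing reference weight value list.
ref_list = [max(0, min(8, x - 15)) for x in range(40)]
selected = select_weights(ref_list, 10, 8)
```

The "last selected value" and "middle position" conventions mentioned in the text would simply shift the slice so that the target index falls at the end or the middle of the selected range.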
Step S5, setting a reference weight value of a peripheral position outside the current block according to the effective number of reference weight values.
For example, the number of peripheral locations outside the current block is an effective number, and an effective number of reference weight values are selected from the reference weight value list, and thus, the number of peripheral locations is the same as the number of selected reference weight values, and the effective number of reference weight values in the reference weight value list may be set as the reference weight values of the peripheral locations outside the current block.
For example, for the 1 st reference weight value of the significant number of reference weight values, the reference weight value is set to the reference weight value of the 1 st peripheral position outside the current block, for the 2 nd reference weight value of the significant number of reference weight values, the reference weight value is set to the reference weight value of the 2 nd peripheral position outside the current block, and so on.
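Step S5 then assigns the selected values one-to-one to the peripheral positions. A toy sketch (the dict is merely a stand-in for whatever indexing structure a real codec would use; the values are illustrative):

```python
# Hypothetical r = 12 selected reference weight values.
selected = [0, 0, 1, 2, 3, 4, 5, 6, 7, 8, 8, 8]

# The i-th peripheral position (1-based, as in the text) gets the i-th value.
peripheral_weight = {pos: w for pos, w in enumerate(selected, start=1)}
```

The mapping is purely positional: the 1st selected value goes to the 1st peripheral position outside the current block, the 2nd to the 2nd, and so on.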
In one possible implementation, assuming r reference weight values are selected from the reference weight value list, the r reference weight values are truncated from the list and set as the reference weight values of r peripheral positions outside the current block. For example, the reference weight value list is [0000000000000001234567888888888888888]. Referring to fig. 8A, the reference weight values [001234567888888888888] are truncated from the list and set as the reference weight values of the peripheral positions. Referring to fig. 8B, the reference weight values [000000123456788888] are truncated from the list and set as the reference weight values of the peripheral positions. Referring to fig. 8C, the reference weight values [000001234567888888888888888] are truncated from the list and set as the reference weight values of the peripheral positions. Referring to fig. 8D, the reference weight values [001234567888888888] are truncated from the list and set as the reference weight values of the peripheral positions.
In another possible implementation, assuming r reference weight values are selected from the reference weight value list, the r reference weight values need not be truncated from the list; instead, the reference weight values in the list are shifted so that the r reference weight values serve as the reference weight values of r peripheral positions outside the current block. For example, the reference weight value list is [0000000000000001234567888888888888888]. As shown in fig. 8E, by shifting the reference weight values in the list so that the reference weight values [001234567888888888888] correspond to the r peripheral positions outside the current block, those reference weight values can be set as the reference weight values of the r peripheral positions; in fig. 8E, other reference weight values in the list remain unselected. Referring to fig. 8F, by shifting the reference weight values in the list so that the reference weight values [000000123456788888] correspond to the r peripheral positions outside the current block, those values can be set as the reference weight values of the r peripheral positions. Referring to fig. 8G, by shifting the reference weight values in the list so that the reference weight values [000001234567888888888888888] correspond to the r peripheral positions outside the current block, those values can be set as the reference weight values of the r peripheral positions. Referring to fig. 8H, by shifting the reference weight values in the list so that the reference weight values [001234567888888888] correspond to the r peripheral positions outside the current block, those values can be set as the reference weight values of the r peripheral positions.
The process from step S1 to step S5 is described below with reference to several specific application scenarios. Illustratively, assume the size of the current block is M × N, where M is the width and N is the height of the current block; X is the base-2 logarithm of the tangent of the weighted prediction angle, such as 0 or 1; Y is the index value of the weighted prediction position; and a, b, c, d, e and f are preset constant values.
For example, the reference weight value list may be set as ReferenceWeightsWhole[x], where the value of x ranges from 0 to WholeLength-1; when x is 0, ReferenceWeightsWhole[x] represents the 1st reference weight value in the reference weight value list, and so on. WholeLength = (MAX_SIZE << e) - f, where MAX_SIZE is the maximum block size for which weighted prediction is allowed. The value of ReferenceWeightsWhole[x] = Clip3(0, 8, x - (HalfLength - 4)), where HalfLength - 4 may be Z in the above embodiment, that is, the starting position of the weight transformation, and HalfLength = WholeLength >> 1. Certainly, the value HalfLength - 4 can be updated to HalfLength, or HalfLength - 2, or HalfLength + 4, etc., which is not limited and can be set arbitrarily according to actual needs; in the following, HalfLength - 4 is taken as an example for explanation.
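The construction of the unified reference weight value list above can be sketched as follows. This is a minimal illustration only: MAX_SIZE = 64, e = 1 and f = 0 are assumed example values, not values fixed by this embodiment.

```python
# Minimal sketch of ReferenceWeightsWhole[x] = Clip3(0, 8, x - (HalfLength - 4)).
# max_size, e and f are assumed example values, not mandated by the text.

def clip3(lo, hi, v):
    # Clamp v into [lo, hi], as the Clip3 operator does
    return max(lo, min(hi, v))

def build_reference_weights_whole(max_size=64, e=1, f=0):
    whole_length = (max_size << e) - f   # WholeLength = (MAX_SIZE << e) - f
    half_length = whole_length >> 1      # HalfLength = WholeLength >> 1
    z = half_length - 4                  # starting position of the weight transformation
    return [clip3(0, 8, x - z) for x in range(whole_length)]

weights = build_reference_weights_whole()
# Zeros before index HalfLength-4, then a ramp 0..8, then eights.
```

With these example parameters the list is 128 entries long and the 0-to-8 ramp begins at index 60, matching the Clip3 formula above.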
Application scenario 1: an effective number (which may also be referred to as a reference weight effective length, and may be denoted as ValidLength) is determined based on the size of the current block and the weighted prediction angle of the current block. The target index (which may be denoted as FirstIndex) is determined based on the size of the current block, the weighted prediction angle of the current block, and the weighted prediction position of the current block.
For example, the effective number may be determined by the following formula: ValidLength = (N + (M >> X)) << 1, where N and M are the height and width of the current block, and X is determined based on the weighted prediction angle of the current block. Of course, the above formula is merely an example.
The target index may be determined by the following formula: FirstIndex = (HalfLength - 4) - ((ValidLength >> 1) - a + Y * ((ValidLength - 1) >> 3)), where ValidLength is determined based on the size and the weighted prediction angle of the current block, Y is the index value of the weighted prediction position (if the weighted prediction position of the current block is weighted prediction position 4, then Y is 4), and HalfLength - 4 is the starting position of the weight transformation. Of course, the above formula is merely an example, and is not limited thereto.
Then, an effective number of reference weight values may be selected from the reference weight value list according to the target index, and the reference weight values of the peripheral positions outside the current block may be set according to the effective number of reference weight values. For each pixel position of the current block, the target weight value of the pixel position is determined according to the reference weight value of the peripheral matching position pointed to by the pixel position. For example, the target weight value of each pixel position of the current block may be derived by the following formula:
SampleWeight[x][y]=ReferenceWeightsWhole[(y<<1)+((x<<1)>>X)+FirstIndex]。
In the above formula, [x][y] denotes the coordinates of a pixel position of the current block, and SampleWeight[x][y] denotes the target weight value of the pixel position [x][y]. (y << 1) + ((x << 1) >> X) gives the index, among all the peripheral positions, of the peripheral matching position pointed to by the pixel position [x][y] (i.e., the peripheral matching position pointed to based on the weighted prediction angle), and is denoted as p. ReferenceWeightsWhole[p + FirstIndex] represents the (p + FirstIndex)-th reference weight value in the reference weight value list, that is, the reference weight value associated with the peripheral matching position, which is the target weight value of the pixel position [x][y].
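Application scenario 1 can be sketched end to end as follows. All concrete values here (M = N = 8, X = 1, Y = 4, a = 4, and the list parameters) are assumed examples for illustration, not values mandated by the text.

```python
# Hypothetical sketch of application scenario 1; M, N, X, Y, a and the
# list parameters are assumed example values.

def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def sample_weights_scenario1(M, N, X, Y, a, max_size=64, e=1, f=0):
    whole_length = (max_size << e) - f
    half_length = whole_length >> 1
    ref_whole = [clip3(0, 8, i - (half_length - 4)) for i in range(whole_length)]
    valid_length = (N + (M >> X)) << 1                            # ValidLength
    first_index = (half_length - 4) - (
        (valid_length >> 1) - a + Y * ((valid_length - 1) >> 3))  # FirstIndex
    # SampleWeight[x][y] = ReferenceWeightsWhole[(y<<1) + ((x<<1)>>X) + FirstIndex]
    return [[ref_whole[(y << 1) + ((x << 1) >> X) + first_index]
             for x in range(M)] for y in range(N)]

w = sample_weights_scenario1(M=8, N=8, X=1, Y=4, a=4)
```

With these assumptions, ValidLength is 24 and FirstIndex is 44, so every lookup lands inside the list and every target weight value is in [0, 8].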
Application scenario 2: the effective number can be determined by the following formula: ValidLength = (N + (M >> X)) << 1. The target index is determined by the following formula: FirstIndex = (HalfLength - 4) - ((ValidLength >> 1) - b + Y * ((ValidLength - 1) >> 3) - ((M << 1) >> X)). The target weight value of each pixel position of the current block may be derived by the following formula: SampleWeight[x][y] = ReferenceWeightsWhole[(y << 1) - ((x << 1) >> X) + FirstIndex].
Application scenario 3: the effective number can be determined by the following formula: ValidLength = (M + (N >> X)) << 1. The target index is determined by the following formula: FirstIndex = (HalfLength - 4) - ((ValidLength >> 1) - c + Y * ((ValidLength - 1) >> 3) - ((N << 1) >> X)). The target weight value of each pixel position of the current block may be derived by the following formula: SampleWeight[x][y] = ReferenceWeightsWhole[(x << 1) - ((y << 1) >> X) + FirstIndex].
Application scenario 4: the effective number can be determined by the following formula: ValidLength = (M + (N >> X)) << 1. The target index may be determined by the following formula: FirstIndex = (HalfLength - 4) - ((ValidLength >> 1) - d + Y * ((ValidLength - 1) >> 3)). The target weight value of each pixel position of the current block may be derived by the following formula: SampleWeight[x][y] = ReferenceWeightsWhole[(x << 1) + ((y << 1) >> X) + FirstIndex].
For example, for application scenarios 2, 3, and 4, the implementation principle is similar to that of application scenario 1, except that the relevant formulas differ; the description is not repeated here.
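To make the difference in formulas concrete, application scenario 4 can be sketched as follows; note the lookup index is (x << 1) + ((y << 1) >> X) + FirstIndex rather than (y << 1) + ((x << 1) >> X) + FirstIndex. All concrete values (M = N = 8, X = 1, Y = 4, d = 4, and the list parameters) are assumed examples.

```python
# Hypothetical sketch of application scenario 4; M, N, X, Y, d and the
# list parameters are assumed example values.

def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def sample_weights_scenario4(M, N, X, Y, d, max_size=64, e=1, f=0):
    whole_length = (max_size << e) - f
    half_length = whole_length >> 1
    ref_whole = [clip3(0, 8, i - (half_length - 4)) for i in range(whole_length)]
    valid_length = (M + (N >> X)) << 1                            # ValidLength
    first_index = (half_length - 4) - (
        (valid_length >> 1) - d + Y * ((valid_length - 1) >> 3))  # FirstIndex
    # SampleWeight[x][y] = ReferenceWeightsWhole[(x<<1) + ((y<<1)>>X) + FirstIndex]
    return [[ref_whole[(x << 1) + ((y << 1) >> X) + first_index]
             for x in range(M)] for y in range(N)]

w4 = sample_weights_scenario4(M=8, N=8, X=1, Y=4, d=4)
```

Swapping the roles of x and y in the index effectively transposes the weight gradient relative to scenario 1.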
Application scenario 5: the effective number can be determined by the following formula: ValidLength = (N + (M >> X)) << 1. The target index is determined by the following formula: FirstIndex = (HalfLength - 4) - ((ValidLength >> 1) - a + Y * ((ValidLength - 1) >> 3)). Then, after the effective number and the target index are obtained, an effective number of reference weight values may be selected from the reference weight value list according to the target index, and these reference weight values constitute the reference weight list FinalReference of the current block; that is, the reference weight list FinalReference of the current block may be set based on the unified reference weight list ReferenceWeightsWhole, and only the effective number of reference weight values are included in the reference weight list FinalReference of the current block.
For example, the reference weight list FinalReference of the current block may be determined by the following formula: FinalReference[x] = ReferenceWeightsWhole[x + FirstIndex], where x ranges from 0 to ValidLength-1. Illustratively, x = 0 represents the 1st reference weight value in the reference weight list FinalReference, and the reference weight value FinalReference[x] is the (x + FirstIndex)-th reference weight value in the reference weight list ReferenceWeightsWhole, and so on.
Then, the reference weight values of the peripheral positions outside the current block are set according to the reference weight list FinalReference of the current block. For each pixel position of the current block, the target weight value of the pixel position is determined according to the reference weight value of the peripheral matching position pointed to by the pixel position. For example, the target weight value of each pixel position of the current block is derived by the following formula:
SampleWeight[x][y]=FinalReference[(y<<1)+((x<<1)>>X)];
in the above formula, [x][y] denotes the coordinates of a pixel position of the current block, and SampleWeight[x][y] denotes the target weight value of the pixel position [x][y]. (y << 1) + ((x << 1) >> X) gives the index, among all the peripheral positions, of the peripheral matching position pointed to by the pixel position [x][y] (i.e., the peripheral matching position pointed to based on the weighted prediction angle), denoted as p; FinalReference[p] indicates the reference weight value of that peripheral position, that is, for the p-th peripheral position, the p-th reference weight value in FinalReference is selected. This reference weight value is the reference weight value associated with the peripheral matching position, i.e., the target weight value of the pixel position [x][y].
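Application scenario 5 can be sketched as follows: FinalReference is an effective-number slice of ReferenceWeightsWhole, and the sample-weight lookup no longer adds FirstIndex. All concrete values (M = N = 8, X = 1, Y = 4, a = 4, and the list parameters) are assumed examples.

```python
# Hypothetical sketch of application scenario 5; M, N, X, Y, a and the
# list parameters are assumed example values.

def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def sample_weights_scenario5(M, N, X, Y, a, max_size=64, e=1, f=0):
    whole_length = (max_size << e) - f
    half_length = whole_length >> 1
    ref_whole = [clip3(0, 8, i - (half_length - 4)) for i in range(whole_length)]
    valid_length = (N + (M >> X)) << 1
    first_index = (half_length - 4) - (
        (valid_length >> 1) - a + Y * ((valid_length - 1) >> 3))
    # FinalReference[x] = ReferenceWeightsWhole[x + FirstIndex], x in 0..ValidLength-1
    final_reference = ref_whole[first_index:first_index + valid_length]
    # SampleWeight[x][y] = FinalReference[(y<<1) + ((x<<1)>>X)]
    return [[final_reference[(y << 1) + ((x << 1) >> X)]
             for x in range(M)] for y in range(N)]

w5 = sample_weights_scenario5(M=8, N=8, X=1, Y=4, a=4)
```

By construction this yields the same target weight values as the direct lookup ReferenceWeightsWhole[p + FirstIndex] in application scenario 1; only the intermediate list FinalReference differs.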
Application scenario 6: the effective number is determined by the following formula: ValidLength = (N + (M >> X)) << 1. The target index is determined by the following formula: FirstIndex = (HalfLength - 4) - ((ValidLength >> 1) - b + Y * ((ValidLength - 1) >> 3) - ((M << 1) >> X)). The reference weight list FinalReference of the current block is determined by: FinalReference[x] = ReferenceWeightsWhole[x + FirstIndex]. The target weight value of each pixel position of the current block is derived by: SampleWeight[x][y] = FinalReference[(y << 1) - ((x << 1) >> X)].
Application scenario 7: the effective number is determined by the following formula: ValidLength = (M + (N >> X)) << 1. The target index is determined by the following formula: FirstIndex = (HalfLength - 4) - ((ValidLength >> 1) - c + Y * ((ValidLength - 1) >> 3) - ((N << 1) >> X)). The reference weight list FinalReference of the current block is determined by: FinalReference[x] = ReferenceWeightsWhole[x + FirstIndex]. The target weight value of each pixel position of the current block is derived by: SampleWeight[x][y] = FinalReference[(x << 1) - ((y << 1) >> X)].
Application scenario 8: the effective number is determined by the following formula: ValidLength = (M + (N >> X)) << 1. The target index is determined by the following formula: FirstIndex = (HalfLength - 4) - ((ValidLength >> 1) - d + Y * ((ValidLength - 1) >> 3)). The reference weight list FinalReference of the current block is determined by: FinalReference[x] = ReferenceWeightsWhole[x + FirstIndex]. The target weight value of each pixel position of the current block is derived by: SampleWeight[x][y] = FinalReference[(x << 1) + ((y << 1) >> X)].
For example, for application scenarios 6, 7, and 8, the implementation principle is similar to that of application scenario 5, except that the relevant formulas differ; the description is not repeated here.
In the above embodiment 6, ValidLength is related to the weighted prediction angle and the block size. To simplify the scheme, some parameters may be fixed for optimization; for example, the weighted prediction angle may be set to a fixed parameter value, so that ValidLength is related only to the block size. In other embodiments, ValidLength has a similar determination method, and the description is not repeated here.
In the above embodiment 6, FirstIndex is related to the weighted prediction angle, the block size, and the weighted prediction position. To simplify the scheme, some parameters may be fixed for optimization; for example, the weighted prediction angle may be set to a fixed parameter value, so that FirstIndex is related only to the block size and the weighted prediction position. Alternatively, the weighted prediction position may be set to a fixed parameter value, so that FirstIndex is related only to the block size and the weighted prediction angle. Alternatively, both the weighted prediction angle and the weighted prediction position may be set to fixed parameter values, which may be the same or different, so that FirstIndex is related to the block size only. In other embodiments, FirstIndex (or FirstPos) is determined in a similar manner, and the detailed description thereof is omitted.
Embodiment 7: in the above embodiments 1 to 3, for each pixel position of the current block, it is necessary to determine the target weight value of the pixel position according to the reference weight value associated with the peripheral matching position pointed to by the pixel position. In order to obtain the reference weight value associated with the peripheral matching position, in one possible embodiment, the following manner may be adopted: the reference weight values are directly set for the peripheral positions outside the current block, that is, an effective number of reference weight values are obtained (rather than selected from the reference weight value list), and the reference weight values of the peripheral positions outside the current block are set according to the effective number of reference weight values. For example, the effective number may be determined based on the size of the current block and the weighted prediction angle of the current block. The effective number of reference weight values may be preconfigured or configured according to weight configuration parameters.
In summary, since the reference weight value has been set for the peripheral positions outside the current block, that is, each peripheral position has the reference weight value, after the peripheral matching position pointed by the pixel position is determined from the peripheral positions outside the current block, the reference weight value associated with the peripheral matching position, that is, the target weight value of the pixel position, can be determined.
The following describes the above-described process of setting the reference weight values of the peripheral positions with reference to specific implementation steps.
Step P1, an effective number of reference weight values is obtained.
Illustratively, the number of peripheral positions outside the current block is the effective number, and in step P1, an effective number of reference weight values need to be obtained. For example, the effective number may be determined as follows: ValidLength = (N + (M >> X)) << 1, where N and M are the height and width of the current block, respectively, and X is the log2 logarithm of the tan value of the weighted prediction angle of the current block, such as 0 or 1.
In one possible implementation, the effective number of reference weight values may be monotonically increasing or monotonically decreasing. Alternatively, the effective number of reference weight values may first monotonically increase and then monotonically decrease, or first monotonically decrease and then monotonically increase. Alternatively, the effective number of reference weight values may consist of a first number of reference weight values followed by a second number of reference weight values, or a second number of reference weight values followed by a first number of reference weight values. This is explained below with reference to several specific cases.
Case 1: the effective number of reference weight values may be monotonically increasing or monotonically decreasing. For example, the effective number of reference weight values is [8888.. 8876543210000.. 00], i.e., monotonically decreasing. As another example, the effective number of reference weight values is [0000.. 0012345678888.. 88], i.e., monotonically increasing. Of course, the above is merely an example, and no limitation is made thereto.
For example, the reference weight values may be configured in advance, or configured according to weight configuration parameters. The weight configuration parameters may include a weight transformation rate and a start position of the weight transformation; the weight transformation rate may be an empirically configured value. The start position of the weight transformation may be an empirically set value, or may be determined from the weighted prediction position, or may be determined from the weighted prediction angle and the weighted prediction position.
In order from first to last, the effective number of reference weight values may be monotonically increasing or monotonically decreasing. For example, if the maximum value of the reference weight values is M1 and the minimum value is M2, the effective number of reference weight values monotonically decrease from the maximum value M1 to the minimum value M2, or monotonically increase from the minimum value M2 to the maximum value M1. Assuming that M1 is 8 and M2 is 0, the reference weight values may monotonically decrease from 8 to 0, or monotonically increase from 0 to 8.
For example, for the process of configuring multiple reference weight values in advance, multiple reference weight values may be arbitrarily configured, as long as the multiple reference weight values monotonically increase or monotonically decrease. Alternatively, for the process of configuring a plurality of reference weight values according to the weight configuration parameter, the weight transformation rate and the start position of the weight transformation may be obtained first, and then the plurality of reference weight values may be determined according to the weight transformation rate and the start position of the weight transformation. The starting position of the weight transformation is determined by the weight prediction position of the current block; alternatively, the start position of the weight transform is determined by the weight prediction angle and the weight prediction position of the current block.
For example, the reference weight values may be determined as follows: y = Clip3(minimum value, maximum value, a × (x - s)), where x represents the index of the peripheral position, that is, x ranges from 1 to the effective number; if x is 1, it represents the 1st peripheral position, and y then represents the reference weight value of the 1st peripheral position. a denotes the weight transformation rate, and s denotes the start position of the weight transformation.
Clip3 is used to limit the reference weight value between the minimum value and the maximum value, both of which can be configured empirically, and for convenience of description, the minimum value is 0 and the maximum value is 8 as an example.
a represents the weight transformation rate and can be configured empirically; for example, a can be an integer other than 0, such as -4, -3, -2, -1, 1, 2, 3, 4, etc. For convenience of description, a = 1 is taken as an example. If a is 1, the reference weight values go through 0, 1, 2, 3, 4, 5, 6, 7, 8 from 0 to 8, or go through 8, 7, 6, 5, 4, 3, 2, 1, 0 from 8 to 0.
For example, when a is a positive integer, a may be positively correlated with the number of peripheral positions, i.e., the value of a is larger when there are more peripheral positions outside the current block. When a is a negative integer, a may be negatively correlated with the number of peripheral positions, i.e., the value of a is smaller when there are more peripheral positions outside the current block. Of course, the above is only an example of the value of a, and the value is not limited thereto.
s denotes the start position of the weight transformation, and s can be determined from the weighted prediction position, e.g., s = f(weighted prediction position), i.e., s is a function of the weighted prediction position. For example, after the range of the peripheral positions outside the current block is determined, the effective number of peripheral positions may be determined, and all the peripheral positions are divided into N equal parts, where the value of N may be set arbitrarily, such as 4, 6, 8, etc.; the weighted prediction position indicates which peripheral position outside the current block serves as the target peripheral region of the current block, and the peripheral position corresponding to the weighted prediction position is the start position of the weight transformation. Alternatively, s may be determined from the weighted prediction angle and the weighted prediction position, e.g., s = f(weighted prediction angle, weighted prediction position), i.e., s is a function of the weighted prediction angle and the weighted prediction position. For example, the range of the peripheral positions outside the current block may be determined according to the weighted prediction angle; after this range is determined, the effective number of peripheral positions may be determined and all the peripheral positions divided into N equal parts; the weighted prediction position indicates which peripheral position outside the current block serves as the target peripheral region of the current block, and the peripheral position corresponding to the weighted prediction position is the start position of the weight transformation.
In summary, in y = Clip3(minimum value, maximum value, a × (x - s)), both the weight transformation rate a and the start position s of the weight transformation are known values, and for each peripheral position outside the current block, the reference weight value of the peripheral position can be determined by this functional relationship. For example, assuming that the weight transformation rate a is 2 and the start position s of the weight transformation is 2, the functional relationship is y = Clip3(minimum value, maximum value, 2 × (x - 2)), and a reference weight value y can be obtained for each peripheral position x outside the current block.
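The functional relationship above, with the example values a = 2 and s = 2, can be sketched as follows; the index range 1..12 stands in for an assumed effective number and is not fixed by the text.

```python
# Sketch of y = Clip3(0, 8, a * (x - s)) with the example values a = 2,
# s = 2; the index range 1..12 is an assumed effective number.

def clip3(lo, hi, v):
    return max(lo, min(hi, v))

a, s = 2, 2
ref = [clip3(0, 8, a * (x - s)) for x in range(1, 13)]
# The first positions clip to 0, then the weights rise by a = 2 per
# position until they clip at 8.
```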
In summary, an effective number of reference weight values for the current block may be obtained, which may be monotonically increasing or monotonically decreasing. In one possible implementation, the reference weight values include a reference weight value of the target area, a reference weight value of a first neighboring area of the target area, and a reference weight value of a second neighboring area of the target area.
Illustratively, the target region includes one or more reference weight values determined based on a starting position of the weight transform. For example, a reference weight value is determined based on the start position of the weight transformation, and the reference weight value is set as the target region. For another example, based on the start position of the weight transformation, a plurality of reference weight values are determined, and the plurality of reference weight values are set as the target region.
If the target area comprises a plurality of reference weight values, the plurality of reference weight values of the target area are monotonically increased or monotonically decreased. The monotonic increase may be a strictly monotonic increase (i.e., a plurality of reference weight values of the target region are strictly monotonic increases); the monotonic decrease may be a strictly monotonic decrease (i.e., the plurality of reference weight values of the target region are strictly monotonic decrease). For example, the plurality of reference weight values for the target region monotonically increases from 1-7, or the plurality of reference weight values for the target region monotonically decreases from 7-1.
For example, the reference weight values of the first neighboring regions are all the first reference weight values, and the reference weight values of the second neighboring regions are monotonically increasing or monotonically decreasing. For example, the reference weight values of the first neighboring region are all 0, the target region includes one reference weight value, the reference weight value is 1, and the reference weight value of the second neighboring region monotonically increases from 2 to 8.
Alternatively, the reference weight values of the first adjacent region are all second reference weight values, the reference weight values of the second adjacent region are all third reference weight values, and the second reference weight values are different from the third reference weight values. For example, the reference weight values of the first neighboring region are all 0, the target region includes a plurality of reference weight values that monotonically increase from 1 to 7, and the reference weight values of the second neighboring region are all 8; obviously, the reference weight values of the first neighboring region are different from those of the second neighboring region.
Or, the reference weight value of the first adjacent region is monotonically increased or monotonically decreased, and the reference weight value of the second adjacent region is monotonically increased or monotonically decreased; for example, the reference weight value of the first neighboring region monotonically increases, and the reference weight value of the second neighboring region also monotonically increases; for another example, the reference weight value of the first neighboring region monotonically decreases, and the reference weight value of the second neighboring region monotonically decreases. For example, the reference weight value of the first neighboring region monotonically increases from 0-3, the target region includes one reference weight value, the reference weight value is 4, and the reference weight value of the second neighboring region monotonically increases from 5-8.
Case 2: for the effective number of reference weight values, the reference weight values are monotonically increased and then monotonically decreased, or are monotonically decreased and then monotonically increased. For example, an effective number of reference weight values are [88.. 88765432100.. 00123456788 … 88], i.e., monotonically decreasing and then monotonically increasing. For another example, the effective number of reference weight values is [00.. 00123456788.. 88765432100 … 00] which is monotonically increasing and then monotonically decreasing. Of course, the above is merely an example, and there is no limitation on the effective number of reference weight values.
For example, the effective number of reference weight values may be configured in advance, or configured according to weight configuration parameters. The weight configuration parameters include a weight transformation rate and a start position of the weight transformation; the weight transformation rate may be an empirically configured value. The start position of the weight transformation may be an empirically set value, or may be determined from the weighted prediction position, or may be determined from the weighted prediction angle and the weighted prediction position.
For example, assuming that the maximum value of the reference weight values is M1 and the minimum value is M2, the effective number of reference weight values monotonically decrease from the maximum value M1 to the minimum value M2 and then monotonically increase from the minimum value M2 to the maximum value M1; alternatively, they monotonically increase from the minimum value M2 to the maximum value M1 and then monotonically decrease from the maximum value M1 to the minimum value M2. Assuming that M1 is 8 and M2 is 0, the effective number of reference weight values may monotonically decrease from 8 to 0 and then monotonically increase from 0 to 8; alternatively, they may monotonically increase from 0 to 8 and then monotonically decrease from 8 to 0.
For example, for the process of configuring the plurality of reference weight values in advance, the plurality of reference weight values may be configured arbitrarily, as long as they first monotonically increase and then monotonically decrease, or first monotonically decrease and then monotonically increase. For the process of configuring the plurality of reference weight values according to the weight configuration parameters, a first weight transformation rate, a second weight transformation rate, a start position of the first weight transformation, and a start position of the second weight transformation may be obtained, and the plurality of reference weight values may be determined according to the first weight transformation rate, the second weight transformation rate, the start position of the first weight transformation, and the start position of the second weight transformation.
For example, the plurality of reference weight values may be determined as follows: when x lies in [0, k], y = Clip3(minimum value, maximum value, a1 × (x - s1)); when x lies in [k + 1, t], y = Clip3(minimum value, maximum value, a2 × (x - s2)). Here x represents the position index of the peripheral position, and y represents the reference weight value of the peripheral position at index x. k is an empirically configured number, which is not limited; for example, k can be half the effective number, or some other number, as long as k is less than t, where t is the total number of peripheral positions, i.e., the effective number. a1 denotes the first weight transformation rate, and a2 denotes the second weight transformation rate. s1 denotes the start position of the first weight transformation, and s2 denotes the start position of the second weight transformation.
Clip3 is used to limit the reference weight value between the minimum value and the maximum value, both of which can be configured empirically, and for convenience of description, the minimum value is 0 and the maximum value is 8 as an example.
a1 and a2 both represent weight transformation rates and can be configured empirically; e.g., a1 is an integer other than 0, such as -4, -3, -2, -1, 1, 2, 3, 4, etc., and a2 is an integer other than 0, such as -4, -3, -2, -1, 1, 2, 3, 4, etc.
s1 and s2 each indicate the start position of a weight transformation and may be configured empirically: s1 is the start position of the weight transformation for the reference weight values of the interval [0, k], and s2 is the start position of the weight transformation for the reference weight values of the interval [k + 1, t].
s1 may be determined from the weighted prediction position, e.g., s1 = f(weighted prediction position), i.e., s1 is a function of the weighted prediction position. For example, after the range of the peripheral positions outside the current block is determined, the range [0, k] is determined from all the peripheral positions, and all the peripheral positions in the range [0, k] are divided into N equal parts, where the value of N can be set arbitrarily; the weighted prediction position indicates which peripheral position in the range [0, k] serves as the target peripheral region of the current block, and the peripheral position corresponding to the weighted prediction position is the start position s1 of the weight transformation. Alternatively, s1 may be determined from the weighted prediction angle and the weighted prediction position, e.g., s1 = f(weighted prediction angle, weighted prediction position), i.e., s1 is a function of the weighted prediction angle and the weighted prediction position. For example, the range of the peripheral positions outside the current block may be determined according to the weighted prediction angle, the range [0, k] may be determined from all the peripheral positions and divided into N equal parts, and the weighted prediction position indicates which peripheral position in the range [0, k] serves as the target peripheral region of the current block, thereby obtaining the start position s1 of the weight transformation.
s2 can be determined from the weighted prediction position, or from the weighted prediction angle and the weighted prediction position; the implementation process of s2 is the same as that of s1, except that the range is changed to [k + 1, t], and details are not repeated here.
Of course, the above is only an example of determining the starting positions s1 and s2 of the weight transform, and is not limited thereto.
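The two-segment construction of case 2 can be sketched as follows: a first segment with a negative transformation rate produces the decreasing part, and a second segment with a positive rate produces the increasing part. All parameter values (t, k, a1, s1, a2, s2) are assumed examples, not values fixed by the text.

```python
# Hypothetical sketch of case 2: a decrease-then-increase reference weight
# profile built from y = Clip3(0, 8, a1*(x - s1)) on [0, k] and
# y = Clip3(0, 8, a2*(x - s2)) on [k+1, t]. All parameters are assumed
# example values.

def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def two_segment_weights(t, k, a1, s1, a2, s2):
    ref = []
    for x in range(t + 1):                       # position index x = 0..t
        if x <= k:
            ref.append(clip3(0, 8, a1 * (x - s1)))   # first segment
        else:
            ref.append(clip3(0, 8, a2 * (x - s2)))   # second segment
    return ref

# a1 < 0 makes the first segment decrease; a2 > 0 makes the second increase.
ref = two_segment_weights(t=23, k=11, a1=-1, s1=8, a2=1, s2=16)
```

With these assumptions the profile falls from 8 to 0, stays at 0 around the middle, and then rises again, matching the [88.. 8876543210 0.. 0 12345678 .. 88] shape described above.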
In summary, a plurality of reference weight values may be obtained, where the plurality of reference weight values may be monotonically increased and then monotonically decreased, or the plurality of reference weight values may be monotonically decreased and then monotonically increased. In one possible implementation, the plurality of reference weight values may further include a reference weight value of a first target area, a reference weight value of a second target area, a reference weight value of a first neighboring area adjacent to only the first target area, a reference weight value of a second neighboring area adjacent to both the first target area and the second target area, and a reference weight value of a third neighboring area adjacent to only the second target area.
The first target region includes one or more reference weight values determined based on the start position of the first weight transform. For example, one reference weight value is determined based on the start position of the first weight transform and taken as the first target region; or, a plurality of reference weight values is determined based on the start position of the first weight transform and taken as the first target region. If the first target region includes a plurality of reference weight values, those reference weight values monotonically increase or monotonically decrease. The monotonic increase may be a strictly monotonic increase (the reference weight values of the first target region strictly monotonically increase); the monotonic decrease may be a strictly monotonic decrease (the reference weight values of the first target region strictly monotonically decrease).
The second target region includes one or more reference weight values determined based on the start position of the second weight transform. For example, one reference weight value is determined based on the start position of the second weight transform and taken as the second target region; or, a plurality of reference weight values is determined based on the start position of the second weight transform and taken as the second target region. If the second target region includes a plurality of reference weight values, those reference weight values monotonically increase or monotonically decrease. The monotonic increase may be a strictly monotonic increase (the reference weight values of the second target region strictly monotonically increase); the monotonic decrease may be a strictly monotonic decrease (the reference weight values of the second target region strictly monotonically decrease).
If the reference weight values of the first target region are monotonically increasing (e.g., strictly monotonically increasing), the reference weight values of the second target region are monotonically decreasing (e.g., strictly monotonically decreasing). Alternatively, if the reference weight values of the first target region decrease monotonically (e.g., strictly monotonically), the reference weight values of the second target region increase monotonically (e.g., strictly monotonically).
For example, the reference weight values of the first neighboring region are all a first reference weight value; the reference weight values of the second neighboring region are all a second reference weight value; and the reference weight values of the third neighboring region are all a third reference weight value. The first reference weight value and the third reference weight value may be the same, while the first reference weight value may differ from the second reference weight value, and the third reference weight value may differ from the second reference weight value. For example, the reference weight values of the first neighboring region are all 0, the reference weight values of the second neighboring region are all 8, and the reference weight values of the third neighboring region are all 0; or, the reference weight values of the first neighboring region are all 8, the reference weight values of the second neighboring region are all 0, and the reference weight values of the third neighboring region are all 8.
Or, the reference weight values of the first neighboring region are all the first reference weight value, the reference weight values of the second neighboring region monotonically decrease, and the reference weight values of the third neighboring region monotonically increase. Or, the reference weight values of the first neighboring region monotonically decrease, the reference weight values of the second neighboring region first monotonically decrease and then monotonically increase, and the reference weight values of the third neighboring region monotonically increase. Or, the reference weight values of the first neighboring region monotonically decrease, the reference weight values of the second neighboring region monotonically increase, and the reference weight values of the third neighboring region are all the third reference weight value. Or, the reference weight values of the first neighboring region are all the first reference weight value, the reference weight values of the second neighboring region monotonically increase, and the reference weight values of the third neighboring region monotonically decrease. Or, the reference weight values of the first neighboring region monotonically increase, the reference weight values of the second neighboring region first monotonically increase and then monotonically decrease, and the reference weight values of the third neighboring region monotonically decrease. Or, the reference weight values of the first neighboring region monotonically increase, the reference weight values of the second neighboring region monotonically decrease, and the reference weight values of the third neighboring region are all the third reference weight value.
Of course, the above are just a few examples, and no limitation is made thereto, as long as the multiple reference weight values satisfy the following requirement: they increase from 0 to 8 and then decrease from 8 to 0; or, they decrease from 8 to 0 and then increase from 0 to 8.
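A minimal sketch of such a weight array, assuming a ramp of slope 1 between the two transform start positions (the maximum weight of 8 and the clipping form follow the Clip3-based formulas used later in this document; the function names are illustrative):

```python
def clip3(lo, hi, v):
    # Clamp v to the range [lo, hi].
    return max(lo, min(hi, v))

def reference_weights_up_down(valid_length, s1, s2):
    """Weights rise from 0 to 8 starting at s1, then fall from 8 back to 0
    toward s2; choosing s2 - s1 >= 16 keeps a plateau of 8 between the two
    ramps, so the array increases from 0 to 8 and then decreases to 0."""
    return [min(clip3(0, 8, x - s1), clip3(0, 8, s2 - x))
            for x in range(valid_length)]
```

With valid_length = 24, s1 = 2, and s2 = 20, the array starts at 0, ramps up to 8, plateaus, and ramps back down to 0, matching the "increase from 0 to 8, then decrease from 8 to 0" requirement above.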
Case 3: the effective number of reference weight values may consist of a first value followed by a second value, or of the second value followed by the first value. For example, the effective number of reference weight values may be [8 8 ... 8 8 0 0 ... 0 0] or [0 0 ... 0 0 8 8 ... 8 8]. The effective number of reference weight values may be configured in advance, or configured according to a weight configuration parameter, where the weight configuration parameter may include the start position of the weight transform. When configuring the reference weight values according to the weight configuration parameter, the start position of the weight transform is obtained, and the plurality of reference weight values is determined according to this start position. For example, the start position of the weight transform indicates the s-th reference weight value; thus, all reference weight values before the s-th reference weight value (exclusive) are a first value (e.g., 8), and all reference weight values from the s-th reference weight value on (inclusive) are a second value (e.g., 0). Alternatively, all reference weight values before the s-th reference weight value are the second value, and all reference weight values from the s-th reference weight value on are the first value.
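The step-shaped weight configuration of case 3 can be sketched directly; the function name and argument layout are illustrative:

```python
def step_reference_weights(valid_length, s, first_value=8, second_value=0):
    """Case 3 sketch: every reference weight before the s-th position takes
    the first value, and every reference weight from position s on takes the
    second value, giving a pattern such as [8 8 ... 8 0 0 ... 0]."""
    return [first_value if x < s else second_value for x in range(valid_length)]
```

Swapping the two values (first_value=0, second_value=8) yields the other admissible pattern, [0 0 ... 0 8 8 ... 8].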
Based on the above several cases, an effective number of reference weight values can be obtained, and for convenience of description, in the following embodiments, the reference weight value in case 1 is taken as an example for description, and the implementation processes of the reference weight values in other cases are similar.
Step P2, setting reference weight values of peripheral positions outside the current block according to the effective number of reference weight values.
For example, the number of peripheral locations outside the current block is an effective number, and the number of reference weight values is an effective number, and thus, the effective number of reference weight values may be set as the reference weight values of the peripheral locations outside the current block. For example, the 1 st reference weight value is set as the reference weight value of the 1 st peripheral position outside the current block, the 2 nd reference weight value is set as the reference weight value of the 2 nd peripheral position outside the current block, and so on.
The following describes embodiments of the above process with reference to several specific application scenarios. Illustratively, assume the size of the current block is M × N, where M is the width and N is the height of the current block; X is the base-2 logarithm of the tangent of the weight prediction angle (e.g., 0 or 1); Y is the index value of the weight prediction position; and a, b, c and d are preset constants.
Application scenario 1: an effective number (which may also be referred to as a reference weight effective length, denoted ValidLength) is determined based on the size of the current block and the weight prediction angle of the current block, and a parameter FirstPos is obtained. For example, the effective number may be determined by the following formula: ValidLength = (N + (M >> X)) << 1. The parameter FirstPos is determined by the following formula: FirstPos = (ValidLength >> 1) - a + Y * ((ValidLength - 1) >> 3). The reference weight value for each peripheral position of the current block may be derived by the following formula: ReferenceWeights[x] = Clip3(0, 8, x - FirstPos). Illustratively, x may range from 0 to ValidLength - 1, or from 1 to ValidLength. The target weight value for each pixel position (x, y) of the current block may be derived by the following formula: SampleWeight[x][y] = ReferenceWeights[(y << 1) + ((x << 1) >> X)].
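The formulas of application scenario 1 can be transcribed into a short sketch (variable names follow the text; the Clip3 helper and the 0-based range of x are as stated above):

```python
def clip3(lo, hi, v):
    # Clamp v to the range [lo, hi].
    return max(lo, min(hi, v))

def awp_weights_scenario1(M, N, X, Y, a):
    """Application scenario 1: derive ValidLength, FirstPos, the reference
    weights of the peripheral positions, and the per-pixel target weights.
    M x N is the block size, X the base-2 log of the tangent of the weight
    prediction angle, Y the weight prediction position index, a a constant.
    """
    valid_length = (N + (M >> X)) << 1
    first_pos = (valid_length >> 1) - a + Y * ((valid_length - 1) >> 3)
    reference_weights = [clip3(0, 8, x - first_pos) for x in range(valid_length)]
    # SampleWeight[x][y] = ReferenceWeights[(y << 1) + ((x << 1) >> X)]
    sample_weight = [[reference_weights[(y << 1) + ((x << 1) >> X)]
                      for y in range(N)] for x in range(M)]
    return valid_length, first_pos, reference_weights, sample_weight
```

For an 8 × 8 block with X = 1, Y = 0, and a = 1, this gives ValidLength = 24 and FirstPos = 11; pixel (0, 0) gets weight 0 and pixel (7, 7) gets weight 8.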
Application scenario 2: the effective number may be determined by the following formula: ValidLength = (N + (M >> X)) << 1. The parameter FirstPos is determined by the following formula: FirstPos = (ValidLength >> 1) - b + Y * ((ValidLength - 1) >> 3) - ((M << 1) >> X). The reference weight value for each peripheral position of the current block may be derived by the following formula: ReferenceWeights[x] = Clip3(0, 8, x - FirstPos). The target weight value for each pixel position (x, y) of the current block may be derived by the following formula: SampleWeight[x][y] = ReferenceWeights[(y << 1) - ((x << 1) >> X)].
Application scenario 3: the effective number may be determined by the following formula: ValidLength = (M + (N >> X)) << 1. The parameter FirstPos is determined by the following formula: FirstPos = (ValidLength >> 1) - c + Y * ((ValidLength - 1) >> 3) - ((N << 1) >> X). The reference weight value for each peripheral position of the current block may be derived by the following formula: ReferenceWeights[x] = Clip3(0, 8, x - FirstPos). The target weight value for each pixel position (x, y) of the current block may be derived by the following formula: SampleWeight[x][y] = ReferenceWeights[(x << 1) - ((y << 1) >> X)].
Application scenario 4: the effective number may be determined by the following formula: ValidLength = (M + (N >> X)) << 1. The parameter FirstPos is determined by the following formula: FirstPos = (ValidLength >> 1) - d + Y * ((ValidLength - 1) >> 3). The reference weight value for each peripheral position of the current block may be derived by the following formula: ReferenceWeights[x] = Clip3(0, 8, x - FirstPos). The target weight value for each pixel position (x, y) of the current block may be derived by the following formula: SampleWeight[x][y] = ReferenceWeights[(x << 1) + ((y << 1) >> X)].
Example 8: in the above-described embodiments 1 to 3, it is necessary to determine the first prediction value of the pixel position according to the first prediction mode and the second prediction value of the pixel position according to the second prediction mode, and a process thereof will be described below.
In case 1, the first prediction mode is an inter prediction mode and the second prediction mode is an inter prediction mode. A motion information candidate list is constructed, where the motion information candidate list includes at least two pieces of candidate motion information. One piece of candidate motion information is selected from the motion information candidate list as the first target motion information of the current block, and another piece of candidate motion information is selected from the motion information candidate list as the second target motion information of the current block. For each pixel position of the current block, a first predicted value of the pixel position is determined according to the first target motion information, and a second predicted value of the pixel position is determined according to the second target motion information.
For example, for both the encoding end and the decoding end, a motion information candidate list may be constructed, and the motion information candidate list of the encoding end is the same as that of the decoding end; the construction of the motion information candidate list is not limited here.
For example, the candidate motion information in the motion information candidate list is single hypothesis motion information, for example, only unidirectional motion information, not bidirectional motion information, is used for the candidate motion information in the motion information candidate list. Obviously, the motion information candidate list may be a uni-directional motion information candidate list, since the candidate motion information is single hypothesis motion information.
When constructing the motion information candidate list, the spatial motion information (e.g., spatial motion information vector) may be added first, and then the temporal motion information (e.g., temporal motion vector) may be added. And/or, when constructing the motion information candidate list, adding unidirectional motion information (such as unidirectional motion vector) first and then adding bidirectional motion information (such as bidirectional motion vector).
When adding bidirectional motion information to the motion information candidate list, the bidirectional motion information may first be split into two pieces of unidirectional motion information, and the two pieces of unidirectional motion information are then added to the motion information candidate list in turn. Alternatively, when adding bidirectional motion information to the motion information candidate list, the bidirectional motion information may first be clipped to one piece of unidirectional motion information, and that piece of unidirectional motion information is added to the motion information candidate list. Illustratively, clipping the bidirectional motion information to one piece of unidirectional motion information includes: directly taking the unidirectional motion information in List0 (reference frame list 0); or directly taking the unidirectional motion information in List1 (reference frame list 1); or determining whether to take the unidirectional motion information from List0 or List1 according to the order of addition.
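The splitting and clipping alternatives can be sketched as follows; the dict-based layout of a piece of motion information is an assumption for illustration only:

```python
def split_bidirectional(motion_info):
    """Split bidirectional motion information into its List0 and List1
    unidirectional halves, added to the candidate list in that order."""
    uni = []
    if 'list0' in motion_info:
        uni.append({'list0': motion_info['list0']})
    if 'list1' in motion_info:
        uni.append({'list1': motion_info['list1']})
    return uni

def clip_to_unidirectional(motion_info, prefer='list0'):
    """Keep only one direction, e.g. the List0 half when present;
    fall back to the other list otherwise."""
    if prefer in motion_info:
        return {prefer: motion_info[prefer]}
    other = 'list1' if prefer == 'list0' else 'list0'
    return {other: motion_info[other]}
```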
Of course, the above is only an example of the motion information candidate list, and the motion information in the motion information candidate list is not limited.
For the encoding end, based on the rate distortion principle, one candidate motion information may be selected from the motion information candidate list as the first target motion information of the current block, and another candidate motion information may be selected from the motion information candidate list as the second target motion information of the current block, where the first target motion information is different from the second target motion information, and this is not limited.
In a possible implementation, when the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream may carry indication information a and indication information b. The indication information a indicates an index value 1 of the first target motion information of the current block, where index value 1 indicates which candidate motion information in the motion information candidate list the first target motion information is. The indication information b indicates an index value 2 of the second target motion information of the current block, where index value 2 indicates which candidate motion information in the motion information candidate list the second target motion information is. Illustratively, index value 1 and index value 2 may be different.
After receiving the encoded bitstream, the decoding end parses the indication information a and the indication information b from the encoded bitstream. Based on the indication information a, the decoding end selects the candidate motion information corresponding to index value 1 from the motion information candidate list as the first target motion information of the current block. Based on the indication information b, the decoding end selects the candidate motion information corresponding to index value 2 from the motion information candidate list as the second target motion information of the current block.
In another possible implementation, when the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream may carry indication information a and indication information c. The indication information a indicates an index value 1 of the first target motion information of the current block, where index value 1 indicates which candidate motion information in the motion information candidate list the first target motion information is. The indication information c indicates the difference between index value 2 and index value 1, where index value 2 indicates which candidate motion information in the motion information candidate list the second target motion information is. Illustratively, index value 1 and index value 2 may be different.
After receiving the encoded bitstream, the decoding end parses the indication information a and the indication information c from the encoded bitstream. Based on the indication information a, the decoding end selects the candidate motion information corresponding to index value 1 from the motion information candidate list as the first target motion information of the current block. Based on the indication information c, the decoding end first determines index value 2 from the difference between index value 2 and index value 1, and then selects the candidate motion information corresponding to index value 2 from the motion information candidate list as the second target motion information of the current block.
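The index recovery in this implementation can be sketched in a few lines (the list contents are placeholders):

```python
def select_target_motion(candidates, index1, diff):
    """Recover index 2 from index 1 and the signalled difference
    (indication information c), then pick both target motion candidates
    from the single motion information candidate list."""
    index2 = index1 + diff  # diff = index2 - index1, carried in the bitstream
    return candidates[index1], candidates[index2]
```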
For the process that the encoding end/decoding end determines the first predicted value of the pixel position according to the first target motion information and determines the second predicted value of the pixel position according to the second target motion information, reference may be made to the conventional implementation manner, which is not limited thereto.
For example, when determining the first predicted value of a pixel position according to the first target motion information, an inter weighted prediction mode may be used. For example, an initial predicted value of the pixel position is determined using the first target motion information, and the initial predicted value is then multiplied by a preset factor to obtain an adjusted predicted value. If the adjusted predicted value is larger than a maximum predicted value, the maximum predicted value is used as the first predicted value; if the adjusted predicted value is smaller than a minimum predicted value, the minimum predicted value is used as the first predicted value; if the adjusted predicted value is neither smaller than the minimum predicted value nor larger than the maximum predicted value, the adjusted predicted value itself is used as the first predicted value. Of course, the above manner is merely an example and is not limited thereto.
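The scale-and-clamp step above can be sketched as follows; the preset factor and the prediction-value bounds are assumptions (for 8-bit samples the range would be [0, 255]):

```python
def clip3(lo, hi, v):
    # Clamp v to the range [lo, hi].
    return max(lo, min(hi, v))

def inter_weighted_prediction(initial_value, factor, min_value=0, max_value=255):
    """Multiply the initial predicted value by a preset factor, then clamp
    the adjusted value to [min_value, max_value] as described above."""
    return clip3(min_value, max_value, initial_value * factor)
```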
Similarly, when the second predicted value of the pixel position is determined according to the second target motion information, the second predicted value of the pixel position may also be obtained by using an inter-frame weighted prediction mode.
In case 2, the first prediction mode is an inter prediction mode and the second prediction mode is an inter prediction mode. A first motion information candidate list and a second motion information candidate list are constructed, where the first motion information candidate list includes at least one piece of candidate motion information and the second motion information candidate list includes at least one piece of candidate motion information. One piece of candidate motion information is selected from the first motion information candidate list as the first target motion information of the current block, and one piece of candidate motion information is selected from the second motion information candidate list as the second target motion information of the current block. For each pixel position of the current block, a first predicted value of the pixel position is determined according to the first target motion information, and a second predicted value of the pixel position is determined according to the second target motion information.
For example, for both the encoding side and the decoding side, a first motion information candidate list and a second motion information candidate list may be constructed, where the first motion information candidate list at the encoding side is the same as the first motion information candidate list at the decoding side, and the second motion information candidate list at the encoding side is the same as the second motion information candidate list at the decoding side.
The candidate motion information in the first motion information candidate list is single hypothesis motion information, that is, only unidirectional motion information, not bidirectional motion information, is used for the candidate motion information in the first motion information candidate list. Obviously, the first motion information candidate list may be a uni-directional motion information candidate list, since the candidate motion information is all single hypothesis motion information.
The candidate motion information in the second motion information candidate list is single hypothesis motion information, that is, only unidirectional motion information, not bidirectional motion information, is used for the candidate motion information in the second motion information candidate list. Obviously, the second motion information candidate list may be a uni-directional motion information candidate list, since the candidate motion information is all single hypothesis motion information.
Illustratively, the reference frame of candidate motion information in the first motion information candidate list is from one reference frame list of the current block, and the reference frame of candidate motion information in the second motion information candidate list is from another reference frame list of the current block. For example, the reference frame of candidate motion information in the first motion information candidate List is from the List0 (reference frame List 0) of the current block, and the reference frame of candidate motion information in the second motion information candidate List is from the List1 (reference frame List 1) of the current block. Alternatively, the reference frame of the candidate motion information in the first motion information candidate List is from the List1 of the current block, and the reference frame of the candidate motion information in the second motion information candidate List is from the List0 of the current block.
When constructing the first motion information candidate list, spatial motion information (e.g., spatial motion information vector) may be added first, and then temporal motion information (e.g., temporal motion vector) may be added. And/or, unidirectional motion information (e.g., unidirectional motion vectors) may be added first, followed by bidirectional motion information (e.g., bidirectional motion vectors) when constructing the first motion information candidate list.
When constructing the second motion information candidate list, spatial motion information (e.g., spatial motion information vector) may be added first, and then temporal motion information (e.g., temporal motion vector) may be added. And/or, unidirectional motion information (e.g., unidirectional motion vectors) may be added first, followed by bidirectional motion information (e.g., bidirectional motion vectors) when constructing the second motion information candidate list.
For example, the unidirectional motion information of the List0 may be added first and then the bidirectional motion information (e.g., the unidirectional motion information of the List0 in the bidirectional motion information) may be added to the first motion information candidate List. The unidirectional motion information of the List1 may be added first and then the bidirectional motion information (e.g., the unidirectional motion information of the List1 in the bidirectional motion information) may be added to the second motion information candidate List. Alternatively, the unidirectional motion information of the List1 may be added first to the first motion information candidate List, and then the bidirectional motion information (e.g., the unidirectional motion information of the List1 in the bidirectional motion information) may be added later. The unidirectional motion information of the List0 may be added first and then the bidirectional motion information (e.g., the unidirectional motion information of the List0 in the bidirectional motion information) may be added to the second motion information candidate List.
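The ordering rules for the two candidate lists can be sketched as follows (the dict layout per candidate is an assumption; this shows the first variant, List0 feeding the first list and List1 the second):

```python
def build_candidate_lists(neighbour_motion):
    """First list: List0 unidirectional candidates, then the List0 halves of
    bidirectional candidates; second list: the same pattern with List1."""
    first, second = [], []
    # Pass 1: unidirectional candidates are added first.
    for mi in neighbour_motion:
        if 'list0' in mi and 'list1' not in mi:
            first.append(mi['list0'])
        if 'list1' in mi and 'list0' not in mi:
            second.append(mi['list1'])
    # Pass 2: the matching halves of bidirectional candidates follow.
    for mi in neighbour_motion:
        if 'list0' in mi and 'list1' in mi:
            first.append(mi['list0'])
            second.append(mi['list1'])
    return first, second
```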
For the encoding end, based on the rate distortion principle, one candidate motion information may be selected from the first motion information candidate list as the first target motion information of the current block, and one candidate motion information may be selected from the second motion information candidate list as the second target motion information of the current block, where the first target motion information is different from the second target motion information, and this is not limited.
When the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream carries indication information a and indication information b. The indication information a indicates an index value 1 of the first target motion information of the current block, where index value 1 indicates which candidate motion information in the first motion information candidate list the first target motion information is. The indication information b indicates an index value 2 of the second target motion information of the current block, where index value 2 indicates which candidate motion information in the second motion information candidate list the second target motion information is. After receiving the encoded bitstream, the decoding end parses the indication information a and the indication information b from the encoded bitstream. Based on the indication information a, the candidate motion information corresponding to index value 1 is selected from the first motion information candidate list as the first target motion information of the current block. Based on the indication information b, the candidate motion information corresponding to index value 2 is selected from the second motion information candidate list as the second target motion information of the current block.
For the process that the encoding end/decoding end determines the first predicted value of the pixel position according to the first target motion information and determines the second predicted value of the pixel position according to the second target motion information, reference may be made to the conventional implementation manner, which is not limited thereto.
In case 3, the first prediction mode is an inter prediction mode and the second prediction mode is an intra prediction mode. For both the encoding end and the decoding end, a motion information candidate list may be constructed, and the motion information candidate list of the encoding end is the same as that of the decoding end; the construction of the motion information candidate list is not limited here.
Illustratively, the candidate motion information in the motion information candidate list includes single hypothesis motion information and/or multi-hypothesis motion information. For example, the candidate motion information in the motion information candidate list may be unidirectional motion information or bidirectional motion information; that is, the motion information candidate list supports both unidirectional motion information and bidirectional motion information.
When constructing the motion information candidate list, the spatial motion information (e.g., spatial motion information vector) may be added first, and then the temporal motion information (e.g., temporal motion vector) may be added. And/or, when constructing the motion information candidate list, adding unidirectional motion information (such as unidirectional motion vector) first and then adding bidirectional motion information (such as bidirectional motion vector).
Different from the first case and the second case, since bidirectional motion information is supported, bidirectional motion information can be added to the motion information candidate list directly, without splitting it into two pieces of unidirectional motion information or clipping it to one piece of unidirectional motion information.
For the encoding end, one piece of candidate motion information may be selected from the motion information candidate list as the target motion information of the current block based on the rate distortion principle. When the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream may carry indication information a, where the indication information a indicates an index value 1 of the target motion information of the current block, and index value 1 indicates which candidate motion information in the motion information candidate list the target motion information is. After receiving the encoded bitstream, the decoding end parses the indication information a from the encoded bitstream and, based on the indication information a, selects the candidate motion information corresponding to index value 1 from the motion information candidate list as the target motion information of the current block.
For example, determining the target intra mode of the current block may include, but is not limited to, the following:
In the first way, a designated intra mode is determined as the target intra mode of the current block. For example, the encoding end determines a designated intra mode (e.g., Planar mode, DC mode, horizontal angle mode, or vertical angle mode) as the target intra mode of the current block, and the decoding end also determines the designated intra mode as the target intra mode of the current block. For example, the encoding end determines the Planar mode as the target intra mode of the current block, and the decoding end also determines the Planar mode as the target intra mode of the current block.
In the second way, an intra prediction mode candidate list is constructed, where the intra prediction mode candidate list includes at least one candidate intra mode; one candidate intra mode is selected from the intra prediction mode candidate list as the target intra mode of the current block.
For example, for both the encoding side and the decoding side, the intra prediction mode candidate list is constructed for the current block, and the intra prediction mode candidate list of the encoding side and the intra prediction mode candidate list of the decoding side may be the same. Candidate intra modes in the intra prediction mode candidate list include, but are not limited to: planar mode, DC mode, vertical angle mode, horizontal angle mode, etc.
For the encoding end, one candidate intra mode may be selected from the intra prediction mode candidate list as the target intra mode of the current block based on a rate-distortion principle. When the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream may carry indication information b, where the indication information b is used to indicate an index value 2 of the target intra mode of the current block, and the index value 2 indicates which candidate intra mode in the intra prediction mode candidate list is the target intra mode. After receiving the encoded bitstream, the decoding end parses the indication information b from the encoded bitstream, and selects the candidate intra mode corresponding to the index value 2 from the intra prediction mode candidate list based on the indication information b; this candidate intra mode is used as the target intra mode of the current block.
In a third manner, the target intra mode of a neighboring block of the current block (such as the upper neighboring block of the current block or the left neighboring block of the current block) is determined as the target intra mode of the current block. For example, if the neighboring block of the current block is predicted by using a target intra mode, the encoding end may determine the target intra mode of the neighboring block as the target intra mode of the current block, and the decoding end may also determine the target intra mode of the neighboring block as the target intra mode of the current block.
In a fourth manner, the target intra mode of the current block is determined according to the weight prediction angle and the relative position relationship between the intra region and the inter region.
For example, if the weighted prediction angle is a weighted prediction angle from top left to bottom right and the intra region is located at the bottom left of the inter region, the horizontal angle mode may be determined as the target intra mode of the current block. If the weighted prediction angle is a weighted prediction angle from top left to bottom right and the intra region is located at the top right of the inter region, the vertical angle mode may be determined as the target intra mode of the current block. Otherwise, the Planar mode may be determined as the target intra mode of the current block.
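The decision rule of the fourth manner can be sketched as a small function. The mode-name constants and the string encoding of the relative position are illustrative assumptions; only the branching structure follows the text above.

```python
# Sketch of the fourth manner: pick the target intra mode from the
# weight prediction angle and the position of the intra region relative
# to the inter region (hypothetical encodings for illustration).

PLANAR, HORIZONTAL, VERTICAL = "Planar", "Horizontal", "Vertical"

def target_intra_mode(angle_top_left_to_bottom_right, intra_position):
    """intra_position: location of the intra region relative to the
    inter region, e.g. 'bottom-left' or 'top-right'."""
    if angle_top_left_to_bottom_right and intra_position == "bottom-left":
        return HORIZONTAL
    if angle_top_left_to_bottom_right and intra_position == "top-right":
        return VERTICAL
    return PLANAR
```

The intuition is that the intra predictor is chosen to propagate samples along the direction in which the intra region borders already-predicted content.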
Of course, the above manner is only an example, and there is no limitation as long as the target intra mode of the current block can be obtained.
For the process that the encoding end/decoding end determines the first predicted value of the pixel position according to the target motion information and determines the second predicted value of the pixel position according to the target intra mode, reference may be made to the conventional implementation manner, which is not limited thereto.
For example, for both the encoding end and the decoding end, a motion information candidate list may be constructed, and the motion information candidate list of the encoding end is the same as the motion information candidate list of the decoding end; the motion information candidate list is not limited herein.
As for the construction method of the motion information candidate list, see case 1 or case 3, and as for the method of the encoding end/decoding end selecting the target motion information from the motion information candidate list, see case 3, repeated description is omitted here.
For example, for both the encoding end and the decoding end, a block vector candidate list may be constructed for the current block, and the block vector candidate list at the encoding end is the same as the block vector candidate list at the decoding end, and the block vector candidate list is not limited.
For example, the candidate block vectors in the block vector candidate list are all single-hypothesis block vectors; that is, only unidirectional candidate block vectors are used in the block vector candidate list. Since the candidate block vectors are all single-hypothesis block vectors, the block vector candidate list may be a unidirectional block vector candidate list.
Exemplarily, the candidate block vectors in the block vector candidate list may include, but are not limited to: a block vector of a spatial neighboring block of the current block, a historical block vector in the HMVP (History-based Motion Vector Prediction) list corresponding to the current block, a default block vector, and the like, which are not limited herein. After building the block vector candidate list for the current block, one candidate block vector may be selected from the block vector candidate list as the target block vector of the current block.
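The list construction above can be sketched as follows. The insertion order (spatial, then HMVP, then default), the duplicate pruning, and the maximum list size are illustrative assumptions; the text only specifies which sources the candidates may come from.

```python
# Hedged sketch of building a unidirectional block vector candidate
# list from spatial neighbours, HMVP history and a default vector.

def build_bv_candidate_list(spatial_bvs, hmvp_bvs, default_bv=(0, 0),
                            max_size=6):
    """Each block vector is a (bvx, bvy) tuple; both ends run the same
    procedure so the resulting lists are identical."""
    candidates = []
    for bv in list(spatial_bvs) + list(hmvp_bvs) + [default_bv]:
        if bv not in candidates:          # simple duplicate pruning
            candidates.append(bv)
        if len(candidates) == max_size:
            break
    return candidates
```

The same sketch applies to the first and second block vector candidate lists of case 13, each built from its own sources.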
For the encoding end, one candidate block vector may be selected from the block vector candidate list as the target block vector of the current block based on the rate-distortion principle; details of this process are not repeated. When the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream may carry indication information b, where the indication information b is used to indicate an index value 2 of the target block vector of the current block, and the index value 2 indicates which candidate block vector in the block vector candidate list is the target block vector. After receiving the encoded bitstream, the decoding end parses the indication information b from the encoded bitstream, and selects the candidate block vector corresponding to the index value 2 from the block vector candidate list based on the indication information b; this candidate block vector is used as the target block vector of the current block.
For the process that the encoding end/decoding end determines the first predicted value of the pixel position according to the target motion information and determines the second predicted value of the pixel position according to the target block vector, reference may be made to the conventional implementation, which is not limited thereto.
The construction method of the motion information candidate list may be as in case 1 or case 3, and the method for selecting the target motion information from the motion information candidate list by the encoding end/decoding end may be as in case 3, which is not repeated herein. For the manner in which the encoding end/decoding end determines the second prediction value of the pixel position according to the palette mode, reference may be made to a conventional implementation, which is not limited herein.
In case 6, the first prediction mode is an intra prediction mode and the second prediction mode is an inter prediction mode. A target intra mode (i.e., a target intra prediction mode among the intra prediction modes) of the current block is determined. A motion information candidate list, which may include at least one candidate motion information, is constructed, and one candidate motion information is selected from the motion information candidate list as the target motion information of the current block. For each pixel position of the current block, a first predicted value of the pixel position is determined according to the target intra mode, and a second predicted value of the pixel position is determined according to the target motion information.
For the manner of determining the target intra mode of the current block, see case 3. For example, the designated intra mode is determined as the target intra mode of the current block; or an intra prediction mode candidate list including at least one candidate intra mode is constructed, and one candidate intra mode is selected from the intra prediction mode candidate list as the target intra mode of the current block; or the target intra mode of a neighboring block of the current block is determined as the target intra mode of the current block; or the target intra mode of the current block is determined according to the weight prediction angle and the relative position relationship between the intra region and the inter region, which is not described in detail herein.
For an exemplary process of constructing the motion information candidate list and selecting one candidate motion information from the motion information candidate list as the target motion information of the current block, see the case 3, which is not repeated herein.
For example, the first target intra mode and the second target intra mode may be different.
Determining the first target intra mode and the second target intra mode for the current block may include, but is not limited to:
The first mode is to determine the designated first intra mode as a first target intra mode of the current block, and to determine the designated second intra mode as a second target intra mode of the current block. For example, the encoding end determines a designated first intra mode (e.g., Planar mode, DC mode, horizontal angle mode, or vertical angle mode) as the first target intra mode of the current block, and the decoding end also determines the designated first intra mode as the first target intra mode of the current block. The encoding end determines a designated second intra mode (e.g., Planar mode, DC mode, horizontal angle mode, or vertical angle mode) as the second target intra mode of the current block, and the decoding end also determines the designated second intra mode as the second target intra mode of the current block.
For example, the encoding end determines the Planar mode as the first target intra mode of the current block by default according to a protocol convention, and determines the DC mode as the second target intra mode of the current block by default according to the protocol convention. The decoding end likewise determines, according to the protocol convention, the Planar mode as the first target intra mode of the current block and the DC mode as the second target intra mode of the current block. Of course, the above manner is only an example, and other conventions may be made, which are not limited herein.
In a second mode, an intra prediction mode candidate list is constructed, wherein the intra prediction mode candidate list comprises at least two candidate intra modes; one candidate intra mode is selected from the intra prediction mode candidate list as a first target intra mode of the current block, and another candidate intra mode is selected from the intra prediction mode candidate list as a second target intra mode of the current block.
For example, for both the encoding side and the decoding side, the intra prediction mode candidate list is constructed for the current block, and the intra prediction mode candidate list of the encoding side and the intra prediction mode candidate list of the decoding side may be the same. Candidate intra modes in the intra prediction mode candidate list include, but are not limited to: planar mode, DC mode, vertical angle mode, horizontal angle mode, etc.
For the encoding end, based on a rate-distortion principle, one candidate intra mode may be selected from the intra prediction mode candidate list as the first target intra mode of the current block, and another candidate intra mode may be selected from the intra prediction mode candidate list as the second target intra mode of the current block. In a possible implementation, when the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream may carry indication information a and indication information b, where the indication information a is used to indicate an index value 1 of the first target intra mode of the current block, and the index value 1 indicates which candidate intra mode in the intra prediction mode candidate list is the first target intra mode. The indication information b is used to indicate an index value 2 of the second target intra mode of the current block, and the index value 2 indicates which candidate intra mode in the intra prediction mode candidate list is the second target intra mode.
After receiving the encoded bitstream, the decoding end parses the indication information a and the indication information b from the encoded bitstream, and selects the candidate intra mode corresponding to the index value 1 from the intra prediction mode candidate list based on the indication information a; this candidate intra mode is used as the first target intra mode of the current block. Based on the indication information b, the decoding end selects the candidate intra mode corresponding to the index value 2 from the intra prediction mode candidate list, and this candidate intra mode is used as the second target intra mode of the current block.
In another possible implementation, when the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream may carry indication information a and indication information c, where the indication information a is used to indicate an index value 1 of the first target intra mode of the current block, and the index value 1 indicates which candidate intra mode in the intra prediction mode candidate list is the first target intra mode. The indication information c is used to indicate a difference value between an index value 2 and the index value 1, and the index value 2 indicates which candidate intra mode in the intra prediction mode candidate list is the second target intra mode. Illustratively, the index value 1 and the index value 2 are different.
After receiving the encoded bitstream, the decoding end parses the indication information a and the indication information c from the encoded bitstream. Based on the indication information a, the decoding end selects the candidate intra mode corresponding to the index value 1 from the intra prediction mode candidate list; this candidate intra mode is the first target intra mode of the current block. Based on the indication information c, the decoding end first determines the index value 2 according to the difference value between the index value 2 and the index value 1, and then selects the candidate intra mode corresponding to the index value 2 from the intra prediction mode candidate list; this candidate intra mode is used as the second target intra mode of the current block.
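The difference-based signaling of this second implementation can be sketched as follows. The signed-difference convention and the list contents are assumptions for illustration; the text only requires that the decoder recover index value 2 from index value 1 and the difference, and that the two indices differ.

```python
# Sketch of recovering two target intra modes when the bitstream
# carries index 1 and the difference (index 2 - index 1).

def decode_two_modes(candidate_list, index1, diff):
    index2 = index1 + diff
    assert index1 != index2, "the two target intra modes use distinct indices"
    return candidate_list[index1], candidate_list[index2]

modes = ["Planar", "DC", "Vertical", "Horizontal"]
first, second = decode_two_modes(modes, 1, 2)  # index 2 = 1 + 2 = 3
```

Signaling a difference instead of a second full index can save bits when the two indices are typically close.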
The third mode is to construct a first intra prediction mode candidate list and a second intra prediction mode candidate list, wherein the first intra prediction mode candidate list comprises at least one candidate intra mode, and the second intra prediction mode candidate list comprises at least one candidate intra mode. One candidate intra mode is selected from the first intra prediction mode candidate list as a first target intra mode for the current block, and one candidate intra mode is selected from the second intra prediction mode candidate list as a second target intra mode for the current block.
For both the encoding end and the decoding end, a first intra prediction mode candidate list and a second intra prediction mode candidate list are constructed for the current block; the first intra prediction mode candidate list of the encoding end is the same as the first intra prediction mode candidate list of the decoding end, and the second intra prediction mode candidate list of the encoding end is the same as the second intra prediction mode candidate list of the decoding end.
For the encoding end, based on a rate-distortion principle, one candidate intra mode may be selected from the first intra prediction mode candidate list as the first target intra mode of the current block, and one candidate intra mode may be selected from the second intra prediction mode candidate list as the second target intra mode of the current block. When the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream may carry indication information a and indication information b, where the indication information a is used to indicate an index value 1 of the first target intra mode of the current block, and the index value 1 indicates which candidate intra mode in the first intra prediction mode candidate list is the first target intra mode. The indication information b is used to indicate an index value 2 of the second target intra mode of the current block, and the index value 2 indicates which candidate intra mode in the second intra prediction mode candidate list is the second target intra mode.
After receiving the encoded bitstream, the decoding end parses the indication information a and the indication information b from the encoded bitstream, and selects the candidate intra mode corresponding to the index value 1 from the first intra prediction mode candidate list based on the indication information a; this candidate intra mode is used as the first target intra mode of the current block. Based on the indication information b, the candidate intra mode corresponding to the index value 2 is selected from the second intra prediction mode candidate list as the second target intra mode of the current block.
Fourth, if the target intra mode of a first neighboring block (e.g., an upper neighboring block of the current block) of the current block is different from the target intra mode of a second neighboring block (e.g., a left neighboring block of the current block) of the current block, the target intra mode of the first neighboring block may be determined as the first target intra mode of the current block, and the target intra mode of the second neighboring block may be determined as the second target intra mode of the current block. If the target intra mode of the first neighboring block is the same as the target intra mode of the second neighboring block, the target intra mode of the first neighboring block may be determined as a first target intra mode of the current block, and another intra mode different from the first target intra mode may be determined as a second target intra mode of the current block.
For example, for the encoding end or the decoding end, if the target intra mode of the first neighboring block is intra mode a and the target intra mode of the second neighboring block is intra mode B, when intra mode a is different from intra mode B, intra mode a is determined as the first target intra mode of the current block, and intra mode B is determined as the second target intra mode of the current block. When the intra mode A is the same as the intra mode B, the intra mode A is determined as a first target intra mode of the current block, and another intra mode C different from the intra mode A is determined as a second target intra mode of the current block. For example, the intra mode C may be a target intra mode of a neighboring block of the second neighboring block; for example, if the intra mode a is the Planar mode, the intra mode C is the DC mode, and the intra mode C is not limited to this as long as it is different from the intra mode a.
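The fourth mode above can be sketched as a small derivation function. The Planar-to-DC fallback mirrors the example in the text; as stated there, any intra mode C different from intra mode A would serve, so the fallback choice here is an illustrative assumption.

```python
# Sketch of deriving the two target intra modes from the upper and left
# neighbouring blocks, with a substitute mode when the neighbours agree.

def derive_two_modes(mode_a, mode_b, fallback="DC"):
    """mode_a / mode_b: target intra modes of the first (upper) and
    second (left) neighbouring blocks."""
    if mode_a != mode_b:
        return mode_a, mode_b
    # Neighbours agree: pick another mode C different from mode A.
    mode_c = fallback if mode_a != fallback else "Planar"
    return mode_a, mode_c
```

Because both ends read the same neighbouring blocks, this derivation needs no index signaling in the bitstream.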
Of course, the above-mentioned first, second, third and fourth modes are only a few examples, and are not limited thereto, as long as the first target intra mode of the current block and the second target intra mode of the current block can be obtained.
In case 8, the first prediction mode is an intra prediction mode and the second prediction mode is an intra block copy prediction mode (i.e., IBC mode). A target intra mode (i.e., a target intra prediction mode among the intra prediction modes) of the current block is determined. A block vector candidate list, which may include at least one candidate block vector, is constructed, and one candidate block vector is selected from the block vector candidate list as the target block vector of the current block. For each pixel position of the current block, a first predicted value of the pixel position is determined according to the target intra mode, and a second predicted value of the pixel position is determined according to the target block vector.
For the way of determining the target intra mode of the current block, it may include: determining the designated intra-mode as a target intra-mode for the current block; or, constructing an intra prediction mode candidate list, the intra prediction mode candidate list including at least one candidate intra mode, and selecting one candidate intra mode from the intra prediction mode candidate list as the target intra mode of the current block; or, determining the target intra-mode of a neighboring block of the current block as the target intra-mode of the current block. The three ways can be referred to as case 3, and the description is not repeated here. For the process of constructing the block vector candidate list and selecting the target block vector of the current block from the block vector candidate list, see the case 4, which is not repeated herein.
For example, the manner of determining the target intra mode of the current block may further include: determining the target intra mode of the current block according to the weight prediction angle and the relative position relationship between the intra region and the IBC region. For example, if the weighted prediction angle is a weighted prediction angle from top left to bottom right and the intra region is located at the bottom left of the IBC region, the horizontal angle mode is determined as the target intra mode of the current block. If the weighted prediction angle is a weighted prediction angle from top left to bottom right and the intra region is located at the top right of the IBC region, the vertical angle mode is determined as the target intra mode of the current block. Otherwise, the Planar mode is determined as the target intra mode of the current block.
In case 9, the first prediction mode is an intra prediction mode and the second prediction mode is a palette mode. A target intra mode (i.e., a target intra prediction mode among the intra prediction modes) of the current block is determined; for each pixel position of the current block, a first prediction value of the pixel position is determined according to the target intra mode, and a second prediction value of the pixel position is determined according to the palette mode.
For an exemplary manner of determining the target intra mode of the current block, see case 3, and the encoding end/the decoding end determines the second prediction value of the pixel position according to the palette mode, see the conventional implementation manner, which is not limited in this regard.
In case 10, the first prediction mode is an intra block copy prediction mode (i.e., IBC mode) and the second prediction mode is an inter prediction mode. A block vector candidate list, which may include at least one candidate block vector, is constructed, and one candidate block vector is selected from the block vector candidate list as the target block vector of the current block. A motion information candidate list, which may include at least one candidate motion information, is constructed, and one candidate motion information is selected from the motion information candidate list as the target motion information of the current block. For each pixel position of the current block, a first predicted value of the pixel position may be determined according to the target block vector, and a second predicted value of the pixel position may be determined according to the target motion information.
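As a toy illustration of the IBC part of this case, the first predicted value can be formed by copying samples from the already-reconstructed area of the same picture that the target block vector points to. The 2-D list layout is an assumption, and bounds checking (the referenced area must already be reconstructed and inside the valid search range) is omitted for brevity.

```python
# Toy sketch of forming the IBC predictor: copy a width x height block
# from the reconstructed picture at the offset given by the block vector.

def ibc_predict(reconstructed, x0, y0, bv, width, height):
    """reconstructed: 2-D list of samples; (x0, y0): top-left of the
    current block; bv = (bvx, bvy): the target block vector."""
    bvx, bvy = bv
    return [[reconstructed[y0 + bvy + y][x0 + bvx + x]
             for x in range(width)]
            for y in range(height)]
```

The second predicted value of this case would come from ordinary motion compensation against a reference picture, which is not sketched here.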
For example, for both the encoding end and the decoding end, a block vector candidate list may be constructed for the current block, and the block vector candidate list at the encoding end is the same as the block vector candidate list at the decoding end, and the block vector candidate list is not limited.
As for the construction method of the block vector candidate list, see case 4, and as for the method of selecting the target block vector from the block vector candidate list by the encoding end/the decoding end, see case 4, repeated description is omitted here.
For example, for both the encoding end and the decoding end, a motion information candidate list may be constructed, and the motion information candidate list of the encoding end is the same as the motion information candidate list of the decoding end; the motion information candidate list is not limited herein.
As for the construction method of the motion information candidate list, see case 1 or case 3, and as for the method of the encoding end/decoding end selecting the target motion information from the motion information candidate list, see case 3, repeated description is omitted here.
In case 11, the first prediction mode is an intra block copy prediction mode (i.e., IBC mode) and the second prediction mode is an intra prediction mode. A block vector candidate list, which may include at least one candidate block vector, is constructed, and one candidate block vector is selected from the block vector candidate list as the target block vector of the current block. A target intra mode (i.e., a target intra prediction mode among the intra prediction modes) of the current block is determined. For each pixel position of the current block, a first predicted value of the pixel position is determined according to the target block vector, and a second predicted value of the pixel position is determined according to the target intra mode.
The method for determining the target intra mode of the current block refers to cases 3 and 8, and for the process of constructing the block vector candidate list and selecting the target block vector of the current block from the block vector candidate list, see case 4, which is not repeated herein.
In case 12, where the first prediction mode is an intra block copy prediction mode (i.e., IBC mode) and the second prediction mode is an intra block copy prediction mode (i.e., IBC mode), a block vector candidate list may be constructed, which may include at least two candidate block vectors. One candidate block vector is selected from the block vector candidate list as a first target block vector of the current block, and another candidate block vector is selected from the block vector candidate list as a second target block vector of the current block, the first target block vector being different from the second target block vector. For each pixel position of the current block, determining a first predicted value of the pixel position according to the first target block vector; a second predictor value for the pixel position is determined based on the second target block vector.
For example, for both the encoding end and the decoding end, a block vector candidate list may be constructed for the current block, and the block vector candidate list at the encoding end is the same as the block vector candidate list at the decoding end, and the block vector candidate list is not limited.
For example, the candidate block vectors in the block vector candidate list are all single-hypothesis block vectors; that is, only unidirectional candidate block vectors are used in the block vector candidate list. Since the candidate block vectors are all single-hypothesis block vectors, the block vector candidate list may be a unidirectional block vector candidate list.
Exemplary candidate block vectors in the block vector candidate list may include, but are not limited to: the block vector of the spatial neighboring block of the current block, the historical block vector in the HMVP list corresponding to the current block, the default block vector, and the like, which are not limited herein.
For the encoding end, based on the rate-distortion principle, one candidate block vector may be selected from the block vector candidate list as the first target block vector of the current block, and another candidate block vector may be selected from the block vector candidate list as the second target block vector of the current block, which is not described again here. In a possible implementation, when the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream may carry indication information a and indication information b, where the indication information a is used to indicate an index value 1 of the first target block vector of the current block, and the index value 1 indicates which candidate block vector in the block vector candidate list is the first target block vector. The indication information b is used to indicate an index value 2 of the second target block vector of the current block, and the index value 2 indicates which candidate block vector in the block vector candidate list is the second target block vector.
After receiving the encoded bitstream, the decoding end parses the indication information a and the indication information b from the encoded bitstream, and selects the candidate block vector corresponding to the index value 1 from the block vector candidate list based on the indication information a; this candidate block vector is used as the first target block vector of the current block. Based on the indication information b, the decoding end selects the candidate block vector corresponding to the index value 2 from the block vector candidate list, and this candidate block vector is used as the second target block vector of the current block.
In another possible implementation, when the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream may carry indication information a and indication information c, where the indication information a is used to indicate an index value 1 of the first target block vector of the current block, and the index value 1 indicates which candidate block vector in the block vector candidate list is the first target block vector. The indication information c is used to indicate a difference value between an index value 2 and the index value 1, and the index value 2 indicates which candidate block vector in the block vector candidate list is the second target block vector. After receiving the encoded bitstream, the decoding end parses the indication information a and the indication information c from the encoded bitstream. Based on the indication information a, the decoding end selects the candidate block vector corresponding to the index value 1 from the block vector candidate list; this candidate block vector is used as the first target block vector of the current block. Based on the indication information c, the decoding end first determines the index value 2 according to the difference value between the index value 2 and the index value 1, and then selects the candidate block vector corresponding to the index value 2 from the block vector candidate list; this candidate block vector is used as the second target block vector of the current block.
In case 13, the first prediction mode is an intra block copy prediction mode, the second prediction mode is an intra block copy prediction mode, and a first block vector candidate list and a second block vector candidate list of the current block are constructed, where the first block vector candidate list includes at least one candidate block vector, and the second block vector candidate list includes at least one candidate block vector. One candidate block vector is selected from the first block vector candidate list as a first target block vector of the current block, and one candidate block vector is selected from the second block vector candidate list as a second target block vector of the current block. For each pixel position of the current block, determining a first predicted value of the pixel position according to the first target block vector; a second predictor value for the pixel position is determined based on the second target block vector.
For example, for both the encoding end and the decoding end, a first block vector candidate list and a second block vector candidate list may be constructed for the current block, and the first block vector candidate list at the encoding end is the same as the first block vector candidate list at the decoding end, and the second block vector candidate list at the encoding end is the same as the second block vector candidate list at the decoding end.
For example, the candidate block vectors in the first block vector candidate list are all single-hypothesis block vectors, that is, they are all unidirectional candidate block vectors. Since the candidate block vectors are all single-hypothesis block vectors, the first block vector candidate list may be a unidirectional block vector candidate list.
For example, the candidate block vectors in the second block vector candidate list are all single-hypothesis block vectors, that is, they are all unidirectional candidate block vectors. Since the candidate block vectors are all single-hypothesis block vectors, the second block vector candidate list may be a unidirectional block vector candidate list.
For example, the candidate block vectors in the first block vector candidate list may include, but are not limited to: the block vector of the spatial neighboring block of the current block, the historical block vector in the HMVP list corresponding to the current block, the default block vector, and the like, which are not limited herein.
For example, the candidate block vectors in the second block vector candidate list may include, but are not limited to: the block vector of the spatial neighboring block of the current block, the historical block vector in the HMVP list corresponding to the current block, the default block vector, and the like, which are not limited herein.
For the encoding end, based on the rate-distortion principle, one candidate block vector may be selected from the first block vector candidate list as the first target block vector of the current block, and one candidate block vector may be selected from the second block vector candidate list as the second target block vector of the current block. When the encoding end sends the encoded bitstream to the decoding end, the encoded bitstream carries indication information a and indication information b, where the indication information a is used to indicate an index value 1 of the first target block vector of the current block, and the index value 1 indicates which candidate block vector in the first block vector candidate list the first target block vector is. The indication information b is used to indicate an index value 2 of the second target block vector of the current block, and the index value 2 indicates which candidate block vector in the second block vector candidate list the second target block vector is. After receiving the encoded bitstream, the decoding end parses the indication information a and the indication information b from the encoded bitstream, and based on the indication information a, selects the candidate block vector corresponding to the index value 1 from the first block vector candidate list as the first target block vector of the current block. Based on the indication information b, the candidate block vector corresponding to the index value 2 is selected from the second block vector candidate list as the second target block vector of the current block.
In case 14, the first prediction mode is an intra block copy prediction mode, the second prediction mode is a palette mode, a block vector candidate list may be constructed, the block vector candidate list may include at least one candidate block vector, and one candidate block vector may be selected from the block vector candidate list as a target block vector of the current block. And aiming at each pixel position of the current block, determining a first predicted value of the pixel position according to the target block vector, and determining a second predicted value of the pixel position according to the palette mode.
For example, regarding the construction method of the block vector candidate list and the method for the encoding end/decoding end to select the target block vector from the block vector candidate list, reference may be made to case 4, and details are not repeated here. For the encoding end/decoding end to determine the second prediction value of the pixel position according to the palette mode, reference may be made to a conventional implementation, which is not limited herein.
In case 15, the first prediction mode is a palette mode, the second prediction mode is an inter prediction mode, and for each pixel position of the current block, the first prediction value of the pixel position is determined according to the palette mode, and the second prediction value of the pixel position is determined according to the target motion information of the inter prediction mode, which is not described again. Or, the first prediction mode is a palette mode, the second prediction mode is an intra-frame prediction mode, and for each pixel position of the current block, the first prediction value of the pixel position is determined according to the palette mode, and the second prediction value of the pixel position is determined according to a target intra-frame mode of the intra-frame prediction mode, which is not described again. Or, the first prediction mode is a palette mode, the second prediction mode is an intra-frame block copy prediction mode, and for each pixel position of the current block, the first prediction value of the pixel position is determined according to the palette mode, and the second prediction value of the pixel position is determined according to a target block vector of the intra-frame block copy prediction mode, which is not described again.
In each of the above cases, the order of the indication information of the prediction information of the first prediction mode and the indication information of the prediction information of the second prediction mode may be interchanged, as long as the encoding end and the decoding end agree. When the two prediction modes share the same prediction mode candidate list, the indication information of the prediction information of the first prediction mode and the indication information of the prediction information of the second prediction mode cannot be equal. Therefore, assuming that two index values need to be encoded, where the index value a is 1 and the index value b is 3: if the index value a is encoded first, the index value b only needs to be encoded as 2 (i.e., 3-1); if the index value b is encoded first, it needs to be encoded as 3.
In summary, encoding the indication information with the smaller index value first can reduce the encoding cost of the other, larger index value. In the candidate prediction mode list construction manner, the first prediction mode is likely to come from the left side; the encoding end and the decoding end can both adjust the order according to this prior, so that the indication information of the prediction information of the left-side adjacent area is encoded first.
The following description is made in conjunction with the case one, and the implementation process in other cases is similar to that in the case one, and is not described herein again. In case one, the prediction mode candidate list may be a motion information candidate list, the prediction information of the first prediction mode may be first target motion information, and the prediction information of the second prediction mode may be second target motion information.
In the coded bitstream, indication information of the first target motion information, such as an index value a, may be coded first, and indication information of the second target motion information, such as an index value b, may be coded later. Alternatively, the indication information of the second target motion information, such as the index value b, may be encoded first, and then the indication information of the first target motion information, such as the index value a, may be encoded later. For example, if the value of the index value a is 1 and the value of the index value b is 3, the index value a may be encoded first and then the index value b may be encoded. For another example, if the index value b is 1 and the index value a is 3, the index value b may be encoded first and then the index value a may be encoded.
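As an illustrative sketch (the helper name is hypothetical, not from the application), the saving described above from encoding the smaller index value first and signaling the other as a difference can be expressed as:

```python
def signal_two_indices(index_a, index_b):
    """Sketch only: when both target items come from the same candidate
    list, the two index values cannot be equal, so the second one can be
    signaled as a difference. Encoding the smaller index first keeps both
    signaled values small."""
    first, second = sorted((index_a, index_b))
    # signal `first` directly and `second` as the difference;
    # since second > first, the difference is at least 1
    return first, second - first

# Example from the text: index value a = 1, index value b = 3.
# Encoding a first lets b be signaled as 3 - 1 = 2 instead of 3.
```

The decoding end reconstructs the second index by adding the signaled difference to the first index, matching the behavior described for indication information c above.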
Example 9: in the above embodiments 1-3 and 8, how to store the target prediction information of the current block, which relates to the first prediction mode of the current block and the second prediction mode of the current block, can be implemented as follows:
in one possible embodiment, the prediction information of the first prediction mode is stored as target prediction information of the current block; or, storing the prediction information of the second prediction mode as target prediction information of the current block; or, determining the storage mode of the current block according to the weighted prediction angle and the weighted prediction position, if the storage mode is to store the prediction information of the first prediction mode, storing the prediction information of the first prediction mode as the target prediction information of the current block, and if the storage mode is to store the prediction information of the second prediction mode, storing the prediction information of the second prediction mode as the target prediction information of the current block.
In another possible embodiment, the current block may be divided into at least one prediction information storage unit, and the size of each prediction information storage unit may be configured arbitrarily, for example, a 1 × 1 prediction information storage unit, a 2 × 2 prediction information storage unit, a 4 × 4 prediction information storage unit, and the like. A target prediction information storage unit is selected from the prediction information storage units of the current block; the target prediction information storage unit may be any one of them, e.g., the target prediction information storage unit is located at the lower-right corner of the current block, i.e., it contains the pixel position at the lower-right corner of the current block. Then, the storage manner of the target prediction information storage unit is determined according to the weighted prediction angle and the weighted prediction position, and the storage manner of the target prediction information storage unit is taken as the storage manner of the current block. Based on the storage manner of the current block, the prediction information of the first prediction mode is stored as the target prediction information of the current block, or the prediction information of the second prediction mode is stored as the target prediction information of the current block. For example, if the storage manner is to store the prediction information of the first prediction mode, the prediction information of the first prediction mode is stored as the target prediction information of the current block, and if the storage manner is to store the prediction information of the second prediction mode, the prediction information of the second prediction mode is stored as the target prediction information of the current block.
In summary, for all prediction information storage units (e.g., of size 4 × 4) of the current block, after the storage manner of the target prediction information storage unit is determined according to the weighted prediction angle and the weighted prediction position, whether to store the prediction information of the first prediction mode or the prediction information of the second prediction mode as the target prediction information of all the prediction information storage units is determined based on the storage manner of the target prediction information storage unit.
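A minimal sketch of selecting such a lower-right target prediction information storage unit (the function name and the 4 × 4 unit size are assumptions for illustration; the text allows other unit sizes):

```python
def bottom_right_unit(M, N, unit=4):
    """Return the top-left coordinates of the prediction information
    storage unit that contains the bottom-right pixel of an M x N
    current block. Sketch only; the unit size is configurable."""
    return ((M - 1) // unit) * unit, ((N - 1) // unit) * unit
```

The storage manner decided for this single unit is then applied to every prediction information storage unit of the current block, as described above.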
In another possible embodiment, the current block may be divided into at least one prediction information storage unit, and the size of each prediction information storage unit may be configured arbitrarily, such as 1 × 1 prediction information storage unit, 2 × 2 prediction information storage unit, 4 × 4 prediction information storage unit, and so on. For each prediction information storage unit of the current block, a storage manner of the prediction information storage unit may be determined according to the weighted prediction angle and the weighted prediction position, and based on the storage manner of the prediction information storage unit, prediction information of a first prediction mode may be stored as target prediction information of the prediction information storage unit, or prediction information of a second prediction mode may be stored as target prediction information of the prediction information storage unit. For example, if the storage mode is to store the prediction information of the first prediction mode, the prediction information of the first prediction mode is stored as the target prediction information of the prediction information storage unit, and if the storage mode is to store the prediction information of the second prediction mode, the prediction information of the second prediction mode is stored as the target prediction information of the prediction information storage unit.
In summary, for each prediction information storage unit (e.g., of size 4 × 4) of the current block, whether to store the prediction information of the first prediction mode or the prediction information of the second prediction mode as the target prediction information of the prediction information storage unit is determined according to the relative position relationship between the prediction information storage unit and the "division result" given by the weighted prediction angle and the weighted prediction position.
In another possible embodiment, when the first prediction mode and the second prediction mode are both inter prediction modes and the prediction information of the first prediction mode (i.e., the first target motion information) and the prediction information of the second prediction mode (i.e., the second target motion information) are from different reference frame lists, the prediction information of the first prediction mode and the prediction information of the second prediction mode may be combined into new prediction information (e.g., the first target motion information of the first prediction mode and the second target motion information of the second prediction mode are both unidirectional motion information, and the two unidirectional motion information are combined into bidirectional motion information), and the new prediction information may be stored as target prediction information of the current block (or each prediction information storage unit of the current block).
For example, referring to embodiment 8, if the first prediction mode is the inter prediction mode, the prediction information of the first prediction mode is the target motion information; if the first prediction mode is an intra-frame prediction mode, the prediction information of the first prediction mode is a target intra-frame mode; if the first prediction mode is an intra block copy prediction mode, the prediction information of the first prediction mode is a target block vector.
For example, referring to embodiment 8, if the second prediction mode is the inter prediction mode, the prediction information of the second prediction mode is the target motion information; if the second prediction mode is the intra-frame prediction mode, the prediction information of the second prediction mode is the target intra-frame mode; and if the second prediction mode is the intra block copy prediction mode, the prediction information of the second prediction mode is the target block vector.
The storage process of the target prediction information is described below with reference to several specific application scenarios. Assuming that the size of the current block is M × N, M is the width of the current block, N is the height of the current block, X is the log2 logarithm of the tan value of the weighted prediction angle, such as 0 or 1, Y is the index value of the weighted prediction position, such as 0 to 6, etc., and a, b, c, d are preset constant values.
Application scenario 1: the effective number ValidLength is determined by the following formula: ValidLength = ( N + ( M >> X ) ) << 1. The parameter FirstPos is determined by the following formula: FirstPos = ( ValidLength >> 1 ) - a + Y * ( ( ValidLength - 1 ) >> 3 ).
For example, the coordinates of the center position or the upper-left corner position of the prediction information storage unit of the current block are denoted ( x, y ). Based on this, if ( y << 1 ) + ( ( x << 1 ) >> X ) is greater than or equal to FirstPos, the prediction information storage unit may store the prediction information of the first prediction mode; otherwise, the prediction information storage unit may store the prediction information of the second prediction mode.
Application scenario 2: the effective number ValidLength is determined by the following formula: ValidLength = ( N + ( M >> X ) ) << 1. The parameter FirstPos is determined by the following formula: FirstPos = ( ValidLength >> 1 ) - b + Y * ( ( ValidLength - 1 ) >> 3 ) - ( ( M << 1 ) >> X ). Denote the coordinates of the center position or the upper-left corner position of the prediction information storage unit of the current block as ( x, y ). Based on this, if ( y << 1 ) - ( ( x << 1 ) >> X ) is greater than or equal to FirstPos, the prediction information storage unit may store the prediction information of the first prediction mode; otherwise, the prediction information storage unit may store the prediction information of the second prediction mode.
Application scenario 3: the effective number ValidLength is determined by the following formula: ValidLength = ( M + ( N >> X ) ) << 1. The parameter FirstPos is determined by the following formula: FirstPos = ( ValidLength >> 1 ) - c + Y * ( ( ValidLength - 1 ) >> 3 ) - ( ( N << 1 ) >> X ). Denote the coordinates of the center position or the upper-left corner position of the prediction information storage unit of the current block as ( x, y ). Based on this, if ( x << 1 ) - ( ( y << 1 ) >> X ) is greater than or equal to FirstPos, the prediction information storage unit may store the prediction information of the first prediction mode; otherwise, the prediction information storage unit may store the prediction information of the second prediction mode.
Application scenario 4: the effective number ValidLength is determined by the following formula: ValidLength = ( M + ( N >> X ) ) << 1. The parameter FirstPos is determined by the following formula: FirstPos = ( ValidLength >> 1 ) - d + Y * ( ( ValidLength - 1 ) >> 3 ).
For example, the coordinates of the center position or the upper-left corner position of the prediction information storage unit of the current block are denoted ( x, y ). Based on this, if ( x << 1 ) + ( ( y << 1 ) >> X ) is greater than or equal to FirstPos, the prediction information storage unit may store the prediction information of the first prediction mode; otherwise, the prediction information storage unit may store the prediction information of the second prediction mode.
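The decision of application scenario 4 can be sketched as follows (parameter names follow the text; the function name and the sample value of the preset constant d are assumptions):

```python
def storage_mode_scenario4(x, y, M, N, X, Y, d):
    """Sketch of application scenario 4: compute ValidLength and FirstPos,
    then decide which prediction mode's information the storage unit whose
    center / upper-left coordinates are (x, y) keeps."""
    valid_length = (M + (N >> X)) << 1
    first_pos = (valid_length >> 1) - d + Y * ((valid_length - 1) >> 3)
    if (x << 1) + ((y << 1) >> X) >= first_pos:
        return "first"   # store the prediction information of the first mode
    return "second"      # store the prediction information of the second mode
```

For instance, for a 16 × 16 block with X = 0, Y = 0 and d = 4, ValidLength is 64 and FirstPos evaluates to 28, so storage units on one side of that boundary keep the first prediction mode's information and units on the other side keep the second's.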
Illustratively, the storage manner of each prediction information storage unit is determined based on the application scenarios 1 to 4, and based on the storage manner of the prediction information storage unit, the prediction information of the first prediction mode may be stored as the target prediction information of the prediction information storage unit, or the prediction information of the second prediction mode may be stored as the target prediction information of the prediction information storage unit.
For example, the storage manner of the target prediction information storage unit is determined based on application scenarios 1 to 4 (i.e., the prediction information storage unit in the application scenario is the target prediction information storage unit), and based on the storage manner of the target prediction information storage unit, the prediction information of the first prediction mode may be stored as the target prediction information of the current block, or the prediction information of the second prediction mode may be stored as the target prediction information of the current block.
Application scenario 5: when the first prediction mode and the second prediction mode are both inter prediction modes, and the prediction information of the first prediction mode and the prediction information of the second prediction mode are from different reference frame lists, the following storage manner may be adopted:
the effective number ValidLength is determined by the following formula: ValidLength = ( M + ( N >> X ) ) << 1. The parameter FirstPos is determined by the following formula: FirstPos = ( ValidLength >> 1 ) - d + Y * ( ( ValidLength - 1 ) >> 3 ).
For example, the coordinates of the center position or the upper-left corner position of the prediction information storage unit of the current block are denoted ( x, y ). Based on this, if ( x << 1 ) + ( ( y << 1 ) >> X ) is greater than FirstPos, the prediction information storage unit may store the prediction information of the first prediction mode; if ( x << 1 ) + ( ( y << 1 ) >> X ) is less than FirstPos, the prediction information storage unit may store the prediction information of the second prediction mode; if ( x << 1 ) + ( ( y << 1 ) >> X ) is equal to FirstPos, the prediction information storage unit may store the new prediction information (i.e., the new prediction information into which the prediction information of the first prediction mode and the prediction information of the second prediction mode are combined).
Application scenario 6: when the first prediction mode and the second prediction mode are both inter prediction modes, and the prediction information of the first prediction mode and the prediction information of the second prediction mode are from different reference frame lists, the following storage manner may be adopted:
the effective number ValidLength is determined by the following formula: ValidLength = ( M + ( N >> X ) ) << 1. The parameter FirstPos is determined by the following formula: FirstPos = ( ValidLength >> 1 ) - d + Y * ( ( ValidLength - 1 ) >> 3 ).
Illustratively, the coordinates of the center position or the upper-left corner position of the prediction information storage unit of the current block are denoted ( x, y ). Based on this, if ( x << 1 ) + ( ( y << 1 ) >> X ) is greater than f1, the prediction information storage unit may store the prediction information of the first prediction mode; if ( x << 1 ) + ( ( y << 1 ) >> X ) is less than f2, the prediction information storage unit may store the prediction information of the second prediction mode; if ( x << 1 ) + ( ( y << 1 ) >> X ) is not greater than f1 and not less than f2, the prediction information storage unit may store the new prediction information (i.e., the new prediction information into which the prediction information of the first prediction mode and the prediction information of the second prediction mode are combined).
Illustratively, f1 may be larger than FirstPos, f2 may be smaller than FirstPos, and values of f1 and f2 are not limited. For example, f1 can be FirstPos +2, f2 can be FirstPos-2, and f1 and f2 can be arbitrarily set.
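Application scenario 6 can be sketched similarly, with a three-way decision around f1 and f2 (here using the example thresholds FirstPos + 2 and FirstPos - 2 from the text; the function name and the default delta are assumptions):

```python
def storage_mode_scenario6(x, y, M, N, X, Y, d, delta=2):
    """Sketch of application scenario 6: units near the dividing line
    store combined (bidirectional) prediction information, units clearly
    on either side store one mode's unidirectional information."""
    valid_length = (M + (N >> X)) << 1
    first_pos = (valid_length >> 1) - d + Y * ((valid_length - 1) >> 3)
    f1, f2 = first_pos + delta, first_pos - delta  # example thresholds
    pos = (x << 1) + ((y << 1) >> X)
    if pos > f1:
        return "first"     # prediction information of the first mode
    if pos < f2:
        return "second"    # prediction information of the second mode
    return "combined"      # the two unidirectional items combined
```

With M = N = 16, X = 0, Y = 0 and d = 4, FirstPos is 28, so f1 = 30 and f2 = 26, and units whose position value falls in [26, 30] store the combined prediction information.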
Illustratively, based on the application scenarios 5 and 6, the storage manner of each prediction information storage unit is determined, and based on the storage manner of the prediction information storage unit, the prediction information of the first prediction mode may be stored as the target prediction information of the prediction information storage unit, or the prediction information of the second prediction mode may be stored as the target prediction information of the prediction information storage unit, or the new prediction information may be stored as the target prediction information of the prediction information storage unit.
Based on the application scenarios 5 and 6, the storage manner of the target prediction information storage unit (i.e., the prediction information storage unit in the application scenario is the target prediction information storage unit) is determined, and based on the storage manner of the target prediction information storage unit, the prediction information of the first prediction mode may be stored as the target prediction information of the current block, or the prediction information of the second prediction mode may be stored as the target prediction information of the current block, or the new prediction information may be stored as the target prediction information of the current block.
Example 10: Sub-Block Transform (SBT): SBT is a sub-block based transform; it is also called a sub-block based inter transform because it is applied only to residual blocks obtained by inter prediction. A complete residual block is divided into two sub-blocks, one of which is transform coded while the other is forced to zero and is not transform coded. The non-rectangular division pattern (NRPM) is a technique implemented by the above-described embodiments 1 to 9 of the present application. Based on these two techniques, the following implementations are possible:
1. For the limitation of the SBT enabling condition: when the current block is already an inter block predicted based on geometric sub-blocks, the current block does not enable SBT, that is, nrpm_cu_flag and sbt_flag of the current block cannot both be true. The size constraints of SBT at least include that the current block is not an inter block predicted based on geometric sub-blocks. For the current block, the conditions under which cu_sbt_flag is present in the syntax include, but are not limited to, that nrpm_cu_flag of the current block is false. With this limitation, the encoding complexity can be reduced, the visual boundary effect caused by SBT can be reduced, and the subjective quality can be improved.
2. For the SBT enabling condition: if the current block is a block predicted by triangular partition or geometric partition, the current block does not use SBT. In other words, the enabling condition of SBT for the current block includes two conditions: one is that the current block is not a block predicted by triangular partition, and the other is that the current block is not a block predicted by geometric partition.
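A condensed sketch of restrictions 1 and 2 above (flag names follow the text; other size-related SBT conditions are omitted for brevity):

```python
def cu_sbt_flag_present(is_inter_block, nrpm_cu_flag, is_triangle_partition):
    """Sketch only: cu_sbt_flag is present (SBT may be enabled) only when
    the current block is an inter block that is predicted neither by
    geometric partition (NRPM) nor by triangular partition."""
    return is_inter_block and not nrpm_cu_flag and not is_triangle_partition
```

This makes explicit that nrpm_cu_flag and the SBT flag cannot both be true for the same block.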
3. For the setting of the enabling condition of NRPM, the conditions under which the current block can use the NRPM mode include, but are not limited to: the product of the width and the height of the current block is greater than or equal to 64.
4. For the setting of the enabling condition of NRPM, the conditions under which the current block can use the NRPM mode include, but are not limited to: the width of the current block is 4 and the height is greater than or equal to 16.
5. For the setting of the enabling condition of NRPM, the conditions under which the current block can use the NRPM mode include, but are not limited to: the height of the current block is 4 and the width is greater than or equal to 16.
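Items 3-5 describe alternative settings of the NRPM enabling condition; as a sketch (function names are hypothetical), each can be written as a separate size check:

```python
def nrpm_allowed_item3(width, height):
    # item 3: product of width and height is at least 64
    return width * height >= 64

def nrpm_allowed_item4(width, height):
    # item 4: width is 4 and height is at least 16
    return width == 4 and height >= 16

def nrpm_allowed_item5(width, height):
    # item 5: height is 4 and width is at least 16
    return height == 4 and width >= 16
```

How these alternatives are combined (or which one is selected) is left open by the text, so they are kept as separate predicates here.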
6. For the nrpm_cu_flag syntax: this syntax is used to indicate whether the current block selects geometric partition prediction. The syntax element adopts context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding, and only one context model is used for coding or decoding the syntax element, whereas in the related scheme a plurality of context models are adopted for coding or decoding (including determining whether the above block/left block of the current block uses the geometric partition mode, whether the size of the current block exceeds a certain threshold, etc.).
7. For the nrpm_cu_flag syntax: this syntax is used to indicate whether the current block selects geometric partition prediction. The syntax element adopts context-based adaptive binary arithmetic coding or context-based adaptive binary arithmetic decoding, and at most 2 context models are used for coding or decoding the syntax element, with only a judgment of whether the size of the current block exceeds a certain threshold; in the related scheme, a plurality of context models are adopted for coding or decoding (including determining whether the above block/left block of the current block uses the geometric partition mode and whether the size of the current block exceeds a certain threshold).
Example 11: based on the same application concept as the method, an embodiment of the present application further provides a coding and decoding apparatus, which is applied to an encoding end or a decoding end, and as shown in fig. 9A, is a structural diagram of the apparatus, including: an obtaining module 911, configured to obtain a weighted prediction angle of a current block when determining to start weighted prediction on the current block; a first determining module 912, configured to, for each pixel position of the current block, determine, according to the weight prediction angle, a peripheral matching position to which the pixel position points from peripheral positions outside the current block, determine, according to a reference weight value associated with the peripheral matching position, a target weight value of the pixel position, and determine, according to the target weight value of the pixel position, an associated weight value of the pixel position; a second determining module 913, configured to determine a first predicted value of the pixel position according to the first prediction mode of the current block, determine a second predicted value of the pixel position according to the second prediction mode of the current block, and determine a weighted predicted value of the pixel position according to the first predicted value, the target weight value, the second predicted value and the associated weight value; and determining the weighted prediction value of the current block according to the weighted prediction value of each pixel position of the current block.
The first determining module 912 is further configured to: acquiring a weight prediction position of the current block; determining a reference weight value list of the current block, wherein the reference weight value list comprises a plurality of reference weight values, and the plurality of reference weight values in the reference weight value list are configured in advance or according to weight configuration parameters; selecting an effective number of reference weight values from the reference weight value list according to a target index; setting reference weight values of peripheral positions outside the current block according to the effective number of reference weight values; the effective number is determined based on a size of the current block and the weighted prediction angle; the target index is determined based on a size of the current block, the weighted prediction angle, and a weighted prediction position of the current block.
The first determining module 912 is specifically configured to, when determining the reference weight value list of the current block: determine a sequence-level reference weight value list as the reference weight value list of the current block; or, determine a preset reference weight value list as the reference weight value list of the current block; or, determine a reference weight value list corresponding to the weighted prediction angle as the reference weight value list of the current block; or, determine a reference weight value list corresponding to the weighted prediction angle and the weighted prediction position as the reference weight value list of the current block; or, determine a reference weight value list corresponding to the size of the current block and the weighted prediction angle as the reference weight value list of the current block.
The weight configuration parameters comprise a weight transformation rate and a weight transformation start position.
The reference weight values in the reference weight value list are monotonically increasing or monotonically decreasing.
The reference weight value list comprises reference weight values of a target region, reference weight values of a first adjacent region of the target region, and reference weight values of a second adjacent region of the target region. The reference weight values of the first adjacent region are all equal to a first reference weight value, and the reference weight values of the second adjacent region monotonically increase or monotonically decrease; or, the reference weight values of the first adjacent region are all equal to a second reference weight value, the reference weight values of the second adjacent region are all equal to a third reference weight value, and the second reference weight value is different from the third reference weight value; or, the reference weight values of the first adjacent region monotonically increase, and the reference weight values of the second adjacent region monotonically increase; or, the reference weight values of the first adjacent region monotonically decrease, and the reference weight values of the second adjacent region monotonically decrease.
In one possible embodiment, the target region comprises one reference weight value; or, the target region comprises a plurality of reference weight values. If the target region comprises a plurality of reference weight values, the plurality of reference weight values of the target region monotonically increase or monotonically decrease.
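A reference weight list with this flat–ramp–flat shape can be derived from the weight configuration parameters by clamping a linear ramp. The sketch below is an assumption about how the weight transformation rate and start position combine, with an illustrative 0..8 weight range; the function name and defaults are not from the patent.

```python
def make_reference_weights(length: int, start: int, rate: int,
                           w_min: int = 0, w_max: int = 8) -> list:
    """Build a monotonic reference-weight list: flat at w_min before the
    weight transformation start position, then ramping by `rate` per
    position (the weight transformation rate), then flat at w_max."""
    return [max(w_min, min(w_max, (i - start) * rate)) for i in range(length)]

# Start position 2, rate 2: a flat region, a ramp, then a flat region.
print(make_reference_weights(12, 2, 2))
# -> [0, 0, 0, 2, 4, 6, 8, 8, 8, 8, 8, 8]
```

The first flat run, the ramp, and the second flat run correspond to the "first adjacent region / target region / second adjacent region" structure described above.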
In a possible implementation, the peripheral positions outside the current block comprise: peripheral positions in the row above the current block; or, peripheral positions in the column to the left of the current block; or, peripheral positions in the row below the current block; or, peripheral positions in the column to the right of the current block.
The peripheral positions outside the current block comprise integer pixel positions; or, the peripheral positions outside the current block comprise sub-pixel positions; or, the peripheral positions outside the current block comprise both integer pixel positions and sub-pixel positions.
The first determining module 912 is further configured to obtain a weighted prediction position of the current block, and determine the reference weight value associated with the peripheral matching position according to the coordinate value of the peripheral matching position and the coordinate value of the weighted prediction position of the current block.
The first determining module 912 is specifically configured to, when determining the target weight value of the pixel position according to the reference weight value associated with the peripheral matching position: if the peripheral matching position is an integer pixel position and the integer pixel position is provided with a reference weight value, determining a target weight value of the pixel position according to the reference weight value of the integer pixel position; or,
If the peripheral matching position is an integer pixel position and the integer pixel position is not provided with a reference weight value, determining a target weight value of the pixel position according to the reference weight value of the adjacent position of the integer pixel position; or,
if the peripheral matching position is a sub-pixel position and the sub-pixel position is provided with a reference weight value, determining a target weight value of the pixel position according to the reference weight value of the sub-pixel position; or,
and if the peripheral matching position is a sub-pixel position and the sub-pixel position is not provided with a reference weight value, determining a target weight value of the pixel position according to the reference weight values of the adjacent positions of the sub-pixel position.
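The four cases above reduce to "read the stored weight if the matching position has one, otherwise derive it from neighbouring positions". One simple fallback, sketched below under the assumption that the derivation averages the two neighbouring integer positions (the patent leaves the exact rule open), is:

```python
def weight_at(pos: float, ref_weights: list) -> int:
    """Return the reference weight for a peripheral matching position.

    An integer position with a stored weight is read directly; a
    sub-pixel position without one falls back to the rounded average of
    the two neighbouring integer positions (an illustrative choice)."""
    if pos == int(pos):
        return ref_weights[int(pos)]
    lo, hi = int(pos), int(pos) + 1
    return (ref_weights[lo] + ref_weights[hi] + 1) // 2

ref = [0, 0, 2, 4, 6, 8]
print(weight_at(3, ref))    # -> 4 (integer position, stored weight)
print(weight_at(3.5, ref))  # -> 5 (half-pel, rounded average of 4 and 6)
```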
In a possible embodiment, the first prediction mode is any one of an intra block copy prediction mode, an intra prediction mode, an inter prediction mode, and a palette mode; the second prediction mode is any one of an intra block copy prediction mode, an intra prediction mode, an inter prediction mode and a palette mode.
If the first prediction mode is an inter prediction mode and the second prediction mode is an inter prediction mode, the second determining module 913 is specifically configured to: construct a motion information candidate list, wherein the motion information candidate list comprises at least two pieces of candidate motion information; select one piece of candidate motion information from the motion information candidate list as first target motion information of the current block, and select another piece of candidate motion information from the motion information candidate list as second target motion information of the current block; determine a first predicted value of the pixel position according to the first target motion information; and determine a second predicted value of the pixel position according to the second target motion information. The candidate motion information in the motion information candidate list is unidirectional motion information.
If the first prediction mode is an inter prediction mode and the second prediction mode is an inter prediction mode, the second determining module 913 is specifically configured to: construct a first motion information candidate list and a second motion information candidate list, wherein the first motion information candidate list comprises at least one piece of candidate motion information, and the second motion information candidate list comprises at least one piece of candidate motion information; select one piece of candidate motion information from the first motion information candidate list as first target motion information of the current block, and select one piece of candidate motion information from the second motion information candidate list as second target motion information of the current block; determine a first predicted value of the pixel position according to the first target motion information; and determine a second predicted value of the pixel position according to the second target motion information. The candidate motion information in the first motion information candidate list is unidirectional motion information, and the candidate motion information in the second motion information candidate list is unidirectional motion information. The reference frame of candidate motion information in the first motion information candidate list is from one reference frame list of the current block; the reference frame of candidate motion information in the second motion information candidate list is from the other reference frame list of the current block.
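The two-list variant can be sketched by partitioning candidates by reference frame list: each list keeps only unidirectional candidates, and a bidirectional candidate contributes its L0 part to the first list and its L1 part to the second. The dictionary layout (`"l0"`/`"l1"` keys) is an illustrative stand-in for real motion information structures, not the patent's data model.

```python
def split_candidates(candidates):
    """Partition candidate motion information into two unidirectional
    lists, one per reference frame list (L0 / L1). A bidirectional
    candidate contributes one unidirectional entry to each list."""
    list0, list1 = [], []
    for cand in candidates:
        if cand.get("l0") is not None:
            list0.append({"mv": cand["l0"], "ref_list": 0})
        if cand.get("l1") is not None:
            list1.append({"mv": cand["l1"], "ref_list": 1})
    return list0, list1

# One bidirectional candidate and one L0-only candidate.
cands = [{"l0": (1, 2), "l1": (3, 4)}, {"l0": (5, 6), "l1": None}]
l0, l1 = split_candidates(cands)
print(len(l0), len(l1))  # -> 2 1
```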
If the first prediction mode is an inter prediction mode and the second prediction mode is an intra prediction mode, the second determining module 913 is specifically configured to: constructing a motion information candidate list, wherein the motion information candidate list comprises at least one candidate motion information; selecting one candidate motion information from the motion information candidate list as the target motion information of the current block; determining a target intra-mode for the current block; determining a first predicted value of the pixel position according to the target motion information; determining a second predicted value of the pixel position according to the target intra mode. Illustratively, the candidate motion information in the motion information candidate list includes single hypothesis motion information, and/or multi-hypothesis motion information.
If the first prediction mode is an intra block copy prediction mode and the second prediction mode is an intra block copy prediction mode, the second determining module 913 is specifically configured to: constructing a block vector candidate list, wherein the block vector candidate list comprises at least two candidate block vectors; selecting one candidate block vector from the block vector candidate list as a first target block vector of the current block, and selecting another candidate block vector from the block vector candidate list as a second target block vector of the current block; determining a first predictor of the pixel position from the first target block vector; determining a second predictor of the pixel position from the second target block vector.
If the first prediction mode is an intra block copy prediction mode and the second prediction mode is an intra block copy prediction mode, the second determining module 913 is specifically configured to: constructing a first block vector candidate list and a second block vector candidate list, the first block vector candidate list comprising at least one candidate block vector, the second block vector candidate list comprising at least one candidate block vector; selecting a candidate block vector from the first block vector candidate list as a first target block vector for the current block and selecting a candidate block vector from the second block vector candidate list as a second target block vector for the current block; determining a first predictor of the pixel position from the first target block vector; determining a second predictor of the pixel position from the second target block vector.
The encoding and decoding apparatus may further include: a storage module, configured to, if the current block includes a plurality of prediction information storage units, select a target prediction information storage unit from the plurality of prediction information storage units; determine a storage manner of the target prediction information storage unit according to the weighted prediction angle and the weighted prediction position; determine the storage manner of the target prediction information storage unit as the storage manner of the current block; and store the prediction information of the first prediction mode as the target prediction information of the current block, or store the prediction information of the second prediction mode as the target prediction information of the current block, based on the storage manner of the current block.
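A per-unit version of this storage decision can be sketched as follows. As an assumption (the patent derives the storage manner from the weighted prediction angle and position without fixing a formula), the sketch classifies each 4x4 unit by the target weight at its centre: store the first mode's information where the first prediction dominates, otherwise the second's. All names are illustrative.

```python
def unit_storage_modes(width, height, weight_at_center, unit=4):
    """For each prediction-information storage unit (e.g. 4x4) of a
    block, decide whether to store the first (1) or second (2)
    prediction mode's information, using the target weight at the unit
    centre as a stand-in for the angle/position-derived rule."""
    modes = {}
    for uy in range(0, height, unit):
        for ux in range(0, width, unit):
            cx, cy = ux + unit // 2, uy + unit // 2
            w = weight_at_center(cx, cy)       # target weight, 0..8
            modes[(ux, uy)] = 1 if 2 * w >= 8 else 2
    return modes

# An 8x8 block split by a vertical boundary: left units keep mode 1,
# right units keep mode 2.
modes = unit_storage_modes(8, 8, lambda x, y: 8 if x < 4 else 0)
print(modes[(0, 0)], modes[(4, 0)])  # -> 1 2
```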
Illustratively, the target prediction information storage unit is located at a lower right corner of the current block.
The storage module is further configured to: for each prediction information storage unit of the current block, determine a storage manner of the prediction information storage unit according to the weighted prediction angle and the weighted prediction position; and store the prediction information of the first prediction mode as the target prediction information of the prediction information storage unit, or store the prediction information of the second prediction mode as the target prediction information of the prediction information storage unit, based on the storage manner of the prediction information storage unit.
In the above embodiment, the weighted prediction angle is a horizontal angle; or, the weighted prediction angle is a vertical angle; or, the absolute value of the slope of the weighted prediction angle is 2 to the power of n, where n is an integer.
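Restricting the slope magnitude to a power of two lets the projection from a pixel position to its peripheral matching position be computed with a bit shift instead of a division. A sketch of that mapping, under the assumption that a horizontal angle projects onto the row above the block and a vertical angle onto the left column (the exact offsets and sign conventions in the patent may differ), is:

```python
def peripheral_match(x: int, y: int, n: int, horizontal: bool = True) -> int:
    """Map pixel (x, y) to the peripheral position it points at, for a
    weighted-prediction angle whose slope magnitude is 2**n.

    With a power-of-two slope the projection is a shift (y >> n or
    x >> n) rather than a division, which is the point of restricting
    the angle set this way."""
    return x + (y >> n) if horizontal else y + (x >> n)

print(peripheral_match(3, 8, 1))  # -> 7  (slope 2: offset y >> 1 = 4)
print(peripheral_match(3, 8, 0))  # -> 11 (slope 1: 45-degree projection)
```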
Based on the same application concept as the method described above, at the hardware level, the hardware architecture diagram of the decoding-side device provided in the embodiment of the present application may be as shown in fig. 9B. The device comprises: a processor 921 and a machine-readable storage medium 922, wherein: the machine-readable storage medium 922 stores machine-executable instructions that are executable by the processor 921; the processor 921 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application.
For example, the processor 921 is configured to execute machine-executable instructions to perform the following steps:
when it is determined to start weighted prediction on a current block, acquiring a weighted prediction angle of the current block;
for each pixel position of the current block, determining a peripheral matching position to which the pixel position points from peripheral positions outside the current block according to the weighted prediction angle, determining a target weight value of the pixel position according to a reference weight value associated with the peripheral matching position, and determining an associated weight value of the pixel position according to the target weight value of the pixel position;
determining a first predicted value of the pixel position according to a first prediction mode of the current block, determining a second predicted value of the pixel position according to a second prediction mode of the current block, and determining a weighted predicted value of the pixel position according to the first predicted value, the target weight value, the second predicted value and the associated weight value;
and determining the weighted prediction value of the current block according to the weighted prediction value of each pixel position of the current block.
Based on the same application concept as the method described above, at the hardware level, the hardware architecture diagram of the encoding-side device provided in the embodiment of the present application may be as shown in fig. 9C. The device comprises: a processor 931 and a machine-readable storage medium 932, wherein: the machine-readable storage medium 932 stores machine-executable instructions executable by the processor 931; the processor 931 is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application.
For example, the processor 931 is configured to execute machine executable instructions to implement the following steps:
when it is determined to start weighted prediction on a current block, acquiring a weighted prediction angle of the current block;
for each pixel position of the current block, determining a peripheral matching position to which the pixel position points from peripheral positions outside the current block according to the weighted prediction angle, determining a target weight value of the pixel position according to a reference weight value associated with the peripheral matching position, and determining an associated weight value of the pixel position according to the target weight value of the pixel position;
determining a first predicted value of the pixel position according to a first prediction mode of the current block, determining a second predicted value of the pixel position according to a second prediction mode of the current block, and determining a weighted predicted value of the pixel position according to the first predicted value, the target weight value, the second predicted value and the associated weight value;
and determining the weighted prediction value of the current block according to the weighted prediction value of each pixel position of the current block.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where several computer instructions are stored; when the computer instructions are executed by a processor, the methods disclosed in the above examples of the present application can be implemented. The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid-state drive, any type of storage disc (e.g., an optical disc or DVD), or a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices. For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. The present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (10)
1. A method of decoding, the method comprising:
when it is determined to start weighted prediction on a current block, acquiring a weighted prediction angle of the current block;
for each pixel position of the current block, determining a peripheral matching position to which the pixel position points from peripheral positions outside the current block according to the weighted prediction angle, determining a target weight value of the pixel position according to a reference weight value associated with the peripheral matching position, and determining an associated weight value of the pixel position according to the target weight value of the pixel position;
determining a first predicted value of the pixel position according to a first prediction mode of the current block, determining a second predicted value of the pixel position according to a second prediction mode of the current block, and determining a weighted predicted value of the pixel position according to the first predicted value, the target weight value, the second predicted value and the associated weight value;
determining the weighted prediction value of the current block according to the weighted prediction value of each pixel position of the current block;
wherein, if the first prediction mode is an inter prediction mode and the second prediction mode is an inter prediction mode, determining the first predicted value of the pixel position according to the first prediction mode of the current block and determining the second predicted value of the pixel position according to the second prediction mode of the current block comprises: constructing a motion information candidate list, wherein the motion information candidate list comprises at least two pieces of candidate motion information; selecting one piece of candidate motion information from the motion information candidate list as first target motion information of the current block, and selecting another piece of candidate motion information from the motion information candidate list as second target motion information of the current block; determining the first predicted value of the pixel position according to the first target motion information; and determining the second predicted value of the pixel position according to the second target motion information.
2. The method of claim 1,
the determining a target weight value of the pixel position according to the reference weight value associated with the peripheral matching position includes:
if the peripheral matching position is an integer pixel position and the integer pixel position is provided with a reference weight value, determining a target weight value of the pixel position according to the reference weight value of the integer pixel position; or,
and if the peripheral matching position is a sub-pixel position and the sub-pixel position is provided with a reference weight value, determining a target weight value of the pixel position according to the reference weight value of the sub-pixel position.
3. The method of claim 1,
the candidate motion information in the motion information candidate list is unidirectional motion information.
4. The method of claim 1, wherein constructing the motion information candidate list comprises:
when adding bidirectional motion information into the motion information candidate list, first clipping the bidirectional motion information into one piece of unidirectional motion information, and adding the one piece of unidirectional motion information into the motion information candidate list.
5. The method of claim 1, wherein selecting one candidate motion information from the motion information candidate list as the first target motion information of the current block and another candidate motion information from the motion information candidate list as the second target motion information of the current block comprises:
parsing first indication information and second indication information from an encoded bitstream, the first indication information indicating a first index value of the first target motion information of the current block, the first index value being used for indexing the first target motion information in the motion information candidate list, and the second indication information indicating a second index value of the second target motion information of the current block, the second index value being used for indexing the second target motion information in the motion information candidate list;
selecting candidate motion information corresponding to the first index value from the motion information candidate list as first target motion information of the current block based on the first indication information; selecting candidate motion information corresponding to the second index value from the motion information candidate list as second target motion information of the current block based on the second indication information;
wherein the first target motion information is different from the second target motion information.
6. The method of claim 1, wherein after determining the weighted predictor of the current block according to the weighted predictor of each pixel position of the current block, the method further comprises:
for each prediction information storage unit of the current block, determining a storage manner of the prediction information storage unit according to the weighted prediction angle and a weighted prediction position; and storing the first target motion information as the target motion information of the prediction information storage unit, or storing the second target motion information as the target motion information of the prediction information storage unit, based on the storage manner of the prediction information storage unit.
7. The method of claim 6,
the prediction information storage unit is a 4 × 4 prediction information storage unit.
8. A decoding apparatus, applied to a decoding end, wherein
the decoding apparatus comprises means for implementing the method of any one of claims 1-7.
9. A decoding device, characterized by comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; wherein the processor is configured to execute the machine executable instructions to implement the method of any of claims 1-7.
10. A machine-readable storage medium, wherein
the machine-readable storage medium stores machine-executable instructions executable by a processor; the processor is configured to execute the machine-executable instructions to implement the method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111155058.XA CN113709501B (en) | 2019-12-23 | 2019-12-23 | Encoding and decoding method, device and equipment |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911343050.9A CN113099240B (en) | 2019-12-23 | 2019-12-23 | Encoding and decoding method, device and equipment |
CN202111155058.XA CN113709501B (en) | 2019-12-23 | 2019-12-23 | Encoding and decoding method, device and equipment |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911343050.9A Division CN113099240B (en) | 2019-12-23 | 2019-12-23 | Encoding and decoding method, device and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113709501A true CN113709501A (en) | 2021-11-26 |
CN113709501B CN113709501B (en) | 2022-12-23 |
Family
ID=76663256
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111152763.4A Active CN113709500B (en) | 2019-12-23 | 2019-12-23 | Encoding and decoding method, device and equipment |
CN201911343050.9A Active CN113099240B (en) | 2019-12-23 | 2019-12-23 | Encoding and decoding method, device and equipment |
CN202111155058.XA Active CN113709501B (en) | 2019-12-23 | 2019-12-23 | Encoding and decoding method, device and equipment |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111152763.4A Active CN113709500B (en) | 2019-12-23 | 2019-12-23 | Encoding and decoding method, device and equipment |
CN201911343050.9A Active CN113099240B (en) | 2019-12-23 | 2019-12-23 | Encoding and decoding method, device and equipment |
Country Status (1)
Country | Link |
---|---|
CN (3) | CN113709500B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114938449A (en) * | 2022-07-20 | 2022-08-23 | 浙江大华技术股份有限公司 | Intra-frame prediction method, image encoding method, image decoding method and device |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113810686B (en) | 2020-06-01 | 2023-02-24 | 杭州海康威视数字技术股份有限公司 | Encoding and decoding method, device and equipment |
WO2023123478A1 (en) * | 2021-12-31 | 2023-07-06 | Oppo广东移动通信有限公司 | Prediction methods and apparatuses, devices, system, and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105491390A (en) * | 2015-11-30 | 2016-04-13 | 哈尔滨工业大学 | Intra-frame prediction method in hybrid video coding standard |
CN107995489A (en) * | 2017-12-20 | 2018-05-04 | 北京大学深圳研究生院 | A kind of combination forecasting method between being used for the intra frame of P frames or B frames |
CN108702515A (en) * | 2016-02-25 | 2018-10-23 | 联发科技股份有限公司 | The method and apparatus of coding and decoding video |
CN110312132A (en) * | 2019-03-11 | 2019-10-08 | 杭州海康威视数字技术股份有限公司 | A kind of decoding method, device and its equipment |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130107949A1 (en) * | 2011-10-26 | 2013-05-02 | Intellectual Discovery Co., Ltd. | Scalable video coding method and apparatus using intra prediction mode |
WO2016072775A1 (en) * | 2014-11-06 | 2016-05-12 | 삼성전자 주식회사 | Video encoding method and apparatus, and video decoding method and apparatus |
CN115134596A (en) * | 2015-06-05 | 2022-09-30 | 杜比实验室特许公司 | Image encoding and decoding method for performing inter prediction, bit stream storage method |
CN116886930A (en) * | 2016-11-28 | 2023-10-13 | 韩国电子通信研究院 | Method and apparatus for encoding/decoding image and recording medium storing bit stream |
CN109996080B (en) * | 2017-12-31 | 2023-01-06 | 华为技术有限公司 | Image prediction method and device and coder-decoder |
CN110225346A (en) * | 2018-12-28 | 2019-09-10 | 杭州海康威视数字技术股份有限公司 | A kind of decoding method and its equipment |
CN110072112B (en) * | 2019-03-12 | 2023-05-12 | 浙江大华技术股份有限公司 | Intra-frame prediction method, encoder and storage device |
Legal events:
- 2019-12-23: CN202111152763.4A granted as CN113709500B (Active)
- 2019-12-23: CN201911343050.9A granted as CN113099240B (Active)
- 2019-12-23: CN202111155058.XA granted as CN113709501B (Active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105491390A (en) * | 2015-11-30 | 2016-04-13 | 哈尔滨工业大学 | Intra-frame prediction method in hybrid video coding standard |
CN108702515A (en) * | 2016-02-25 | 2018-10-23 | 联发科技股份有限公司 | The method and apparatus of coding and decoding video |
CN107995489A (en) * | 2017-12-20 | 2018-05-04 | Peking University Shenzhen Graduate School | Combined intra-frame and inter-frame prediction method for P frames or B frames |
CN110312132A (en) * | 2019-03-11 | 2019-10-08 | Hangzhou Hikvision Digital Technology Co., Ltd. | Encoding and decoding method, apparatus and device |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114938449A (en) * | 2022-07-20 | 2022-08-23 | Zhejiang Dahua Technology Co., Ltd. | Intra-frame prediction method, image encoding method, image decoding method and device |
CN114938449B (en) * | 2022-07-20 | 2023-10-27 | Zhejiang Dahua Technology Co., Ltd. | Intra-frame prediction method, image encoding method, image decoding method and device |
Also Published As
Publication number | Publication date |
---|---|
CN113709501B (en) | 2022-12-23 |
CN113099240B (en) | 2022-05-31 |
CN113709500A (en) | 2021-11-26 |
CN113099240A (en) | 2021-07-09 |
CN113709500B (en) | 2022-12-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11882281B2 (en) | Method and apparatus for encoding/decoding image | |
CN113099240B (en) | Encoding and decoding method, device and equipment | |
CN113709460B (en) | Encoding and decoding method, device and equipment | |
CN112369021A (en) | Image encoding/decoding method and apparatus for throughput enhancement and recording medium storing bitstream | |
JP5485851B2 (en) | Video encoding method, video decoding method, video encoding device, video decoding device, and programs thereof | |
CN112543323B (en) | Encoding and decoding method, device and equipment | |
CN112584142B (en) | Encoding and decoding method, device and equipment | |
JP7541599B2 (en) | Decoding method, decoding device, decoding side device, electronic device, and non-volatile storage medium | |
CN113709462B (en) | Encoding and decoding method, device and equipment | |
CN113810686B (en) | Encoding and decoding method, device and equipment | |
CN113709488B (en) | Encoding and decoding method, device and equipment | |
CN113709499B (en) | Encoding and decoding method, device and equipment | |
CN114598889B (en) | Encoding and decoding method, device and equipment | |
CN114650423B (en) | Encoding and decoding method, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
REG | Reference to a national code |
Ref country code: HK; Ref legal event code: DE; Ref document number: 40064016; Country of ref document: HK |
GR01 | Patent grant | ||