WO2020069655A1 - Training method and apparatus for an interpolation filter, and video image encoding/decoding method and codec - Google Patents
- Publication number
- WO2020069655A1 (PCT/CN2019/108311)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- filter
- interpolation filter
- image block
- target
- current
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/109—Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/184—Adaptive coding characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
- H04N19/42—Implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/503—Predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/587—Predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
- H04N19/59—Predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82—Details of filtering operations involving filtering within a prediction loop
Definitions
- The present application relates to the technical field of video coding and decoding, and in particular to a training method and device for interpolation filters, a video image coding and decoding method, and a codec.
- Digital video capabilities can be incorporated into a variety of devices, including digital TVs, digital live broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, video game devices, video game consoles, cellular or satellite radio telephones (so-called "smart phones"), video teleconferencing devices, video streaming devices, and the like.
- Digital video devices implement video compression techniques such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10 Advanced Video Coding (AVC), and H.265/High Efficiency Video Coding (HEVC), and in extensions of such standards.
- Video devices can transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing such video compression techniques.
- Video compression techniques perform spatial (intra-image) prediction and/or temporal (inter-image) prediction to reduce or remove redundancy inherent in the video sequence.
- A video slice (i.e., a video frame or a portion of a video frame) may be partitioned into image blocks.
- Image blocks in an intra-coded (I) slice of an image are encoded using spatial prediction with respect to reference samples in neighbouring blocks in the same image.
- Image blocks in an inter-coded (P or B) slice of an image may use spatial prediction with respect to reference samples in neighbouring blocks in the same image, or temporal prediction with respect to reference samples in other reference images.
- An image may be referred to as a frame, and a reference image may be referred to as a reference frame.
- Various video coding standards, including the high-efficiency video coding (HEVC) standard, propose predictive coding modes for image blocks, in which a block to be coded is predicted based on already coded blocks of video data.
- In intra prediction mode, the current block is predicted based on one or more previously decoded neighbouring blocks in the same image as the current block; in inter prediction mode, the current block is predicted based on already decoded blocks in different images.
- When the motion information points to a fractional pixel position, the optimal matching reference block needs to be obtained by sub-pixel interpolation.
- In existing schemes, a fixed-coefficient interpolation filter is usually used to perform sub-pixel interpolation.
- Because fixed coefficients cannot adapt to the image content, the prediction accuracy is poor, resulting in poor video image encoding and decoding performance.
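As a concrete illustration of the fixed-coefficient approach described above, the following sketch applies an 8-tap half-pel filter in one dimension. The tap values are the well-known HEVC half-sample luma coefficients; the function name, the 1-D simplification, and the truncating shift (real codecs add a rounding offset) are this sketch's own assumptions, not taken from the application.

```python
import numpy as np

# HEVC half-sample luma filter: 8 taps, coefficients sum to 64.
HEVC_HALF_PEL = np.array([-1, 4, -11, 40, 40, -11, 4, -1], dtype=np.int64)

def half_pel_interpolate_1d(row, taps=HEVC_HALF_PEL):
    """Interpolate half-pixel positions along one row of integer samples.

    `row` holds integer-position samples; the result holds one value
    between each pair of neighbouring samples. Borders are handled by
    edge padding, a simplification of what codecs do at picture edges.
    """
    pad = len(taps) // 2                    # 4 samples of padding each side
    padded = np.pad(row, (pad, pad), mode="edge")
    out = np.empty(len(row) - 1, dtype=np.int64)
    for i in range(len(out)):
        # Window of 8 integer samples centred between row[i] and row[i+1].
        window = padded[i + 1 : i + 1 + len(taps)]
        out[i] = (window * taps).sum() >> 6  # divide by 64 (truncating)
    return out

row = np.array([10, 10, 10, 10], dtype=np.int64)
print(half_pel_interpolate_1d(row))  # a flat signal stays flat at half-pel
```

Because the taps sum to 64, a constant signal is reproduced exactly; for textured content the fixed taps are the same regardless of image statistics, which is exactly the limitation the application addresses.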
- Embodiments of the present application provide an interpolation filter training method and device, a video image coding and decoding method, and a codec, which can improve the prediction accuracy of the motion information of image blocks and thereby improve coding and decoding performance.
- In a first aspect, an embodiment of the present application provides an interpolation filter training method, including: a computing device interpolates the pixels of a sample image at integer pixel positions through a first interpolation filter to obtain a first sub-pixel image of the sample image at a first fractional pixel position; and inputs the sample image into a second interpolation filter to obtain a second sub-pixel image;
- the filter parameters of the second interpolation filter are determined by minimizing a first function representing the difference between the first sub-pixel image and the second sub-pixel image.
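A minimal sketch of this training procedure in a toy 1-D setting: the "first" filter is bilinear, and the "second" filter is a learnable 2-tap linear filter standing in for the neural network of the application. All names, sizes, and the mean-squared form of the first function are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# First (traditional) interpolation filter: bilinear half-pel. It is used
# only to produce the label data (the "first sub-pixel image").
def first_filter(x):
    return 0.5 * (x[..., :-1] + x[..., 1:])

x = rng.normal(size=(256, 9))    # sample "images" (1-D rows for simplicity)
labels = first_filter(x)         # first sub-pixel image = label data

# Second interpolation filter: a learnable 2-tap stand-in for the
# neural-network filter of the application.
w = rng.normal(size=2)
lr = 0.05
for _ in range(500):
    pred = w[0] * x[:, :-1] + w[1] * x[:, 1:]   # second sub-pixel image
    err = pred - labels
    # Gradient of the first function (mean squared difference) w.r.t. w.
    w -= lr * np.array([2 * (err * x[:, :-1]).mean(),
                        2 * (err * x[:, 1:]).mean()])

print(np.round(w, 3))  # the learned taps approach the bilinear taps [0.5, 0.5]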
- In other words, the first sub-pixel image obtained by interpolation with the traditional interpolation filter is used as label data to train the second interpolation filter, so that the trained second interpolation filter can be used directly to interpolate the pixel values at the first fractional pixel position; the label data is accurate, which improves the encoding and decoding performance of the video image.
- Moreover, because the second interpolation filter is a non-linear neural-network filter, whereas fixed linear filters predict complex video signals poorly, it can further improve the encoding and decoding performance of the video image.
- In a second aspect, an embodiment of the present application further provides an interpolation filter training method, including: a computing device interpolates the pixels of a sample image at integer pixel positions through a first interpolation filter to obtain a first sub-pixel image of the sample image at a first fractional pixel position; inputs the sample image into a second interpolation filter to obtain a second sub-pixel image; applies a flip operation to the second sub-pixel image and inputs the result into a third interpolation filter to obtain a first image, then applies the inverse of the flip operation to the first image to obtain a second image, where the second interpolation filter and the third interpolation filter share filter parameters; and determines the filter parameters according to a first function representing the difference between the first sub-pixel image and the second sub-pixel image and a second function representing the difference between the sample image and the second image.
- In this way, the embodiment performs sub-pixel interpolation on the sample image through a traditional interpolation filter to obtain the first sub-pixel image and uses it as label data; exploiting the invertibility of the sub-pixel shift, the filter parameters are determined by minimizing both the first function, representing the difference between the first sub-pixel image and the second sub-pixel image, and the second function, representing the difference between the sample image and the second image. Supervising with the sample image constrains the second interpolation filter and improves the accuracy of its sub-pixel interpolation, thereby improving the encoding and decoding performance of the video image.
- The computing device determining the filter parameters based on the first function, representing the difference between the first sub-pixel image and the second sub-pixel image, and the second function, representing the difference between the sample image and the second image, may include, but is not limited to, the following two implementation manners:
- First implementation manner: the computing device determines the filter parameters by minimizing a third function, where the third function is a weighted sum of the first function and the second function.
- Second implementation manner: the computing device determines the filter parameters by alternately minimizing the first function and the second function.
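The first implementation manner (the weighted sum) can be sketched as follows. The mean-squared form of the two difference functions and the weight `lam` are assumptions of this sketch; the application does not fix them here.

```python
import numpy as np

def first_function(first_subpel, second_subpel):
    # Difference between the label (first) and predicted (second) sub-pixel images.
    return np.mean((first_subpel - second_subpel) ** 2)

def second_function(sample, second_image):
    # Difference between the sample image and the second image recovered
    # through the flipped third filter (the invertibility constraint).
    return np.mean((sample - second_image) ** 2)

def third_function(first_subpel, second_subpel, sample, second_image, lam=0.5):
    # Weighted sum of the two functions; minimizing this jointly trains
    # the shared filter parameters.
    return (first_function(first_subpel, second_subpel)
            + lam * second_function(sample, second_image))

a, b = np.zeros((4, 4)), np.ones((4, 4))
print(third_function(a, b, a, b, lam=0.5))  # 1.0 + 0.5 * 1.0 = 1.5
```

The second implementation manner would instead take gradient steps on `first_function` and `second_function` in alternation rather than on their sum.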
- The computing devices described in the first and second aspects may be encoding devices or compression devices, and may be devices with data processing functions such as computers, servers, or terminals (e.g., mobile phones, tablet computers, etc.).
- In a third aspect, an embodiment of the present application further provides a video image encoding method, including:
- the encoder performs inter prediction on the currently coded image block to obtain motion information of the currently coded image block, where the motion information of the currently coded image block points to a fractional pixel position, and the inter prediction process includes: determining a target interpolation filter for the currently coded image block from a set of candidate interpolation filters;
- the coding information includes indication information of the target interpolation filter; the indication information of the target interpolation filter is used to indicate that the reference block at the fractional pixel position corresponding to the currently coded image block is obtained by performing sub-pixel interpolation through the target interpolation filter.
- In a fourth aspect, an embodiment of the present application further provides a video image encoding method, including:
- the encoder performs inter prediction on the currently coded image block to obtain motion information of the currently coded image block, where the motion information of the currently coded image block points to a fractional pixel position, and the inter prediction process includes: determining a target interpolation filter for the currently coded image block from a set of candidate interpolation filters;
- In this way, the encoder can select an interpolation filter according to the content of the currently coded image block to perform the interpolation operation, so that the obtained prediction block has higher prediction accuracy, which reduces the code stream and increases the compression rate of video images.
- The encoder described in the third aspect or the fourth aspect may also be an encoding device including the encoder, and the encoding device may be a device with data processing functions such as a computer, server, or terminal (e.g., mobile phone, tablet computer, etc.).
- One implementation manner in which the encoder determines a target interpolation filter for the currently coded image block from the set of candidate interpolation filters is that the encoder determines the target interpolation filter for the currently coded image block from the set of candidate interpolation filters according to a rate-distortion cost criterion.
- In this way, the encoder can, based on the content of the currently coded image block, select the interpolation filter with the lowest rate-distortion cost to perform the interpolation operation, thereby improving prediction accuracy, reducing the code stream, and increasing the compression rate of the video image.
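The rate-distortion criterion can be sketched as minimizing a cost J = D + λ·R over the candidate set. The candidate names, distortion/rate numbers, and λ values below are purely illustrative, not from the application.

```python
# Hypothetical per-filter results for one coded block: distortion (e.g. SSE
# against the original block) and rate (bits, including the bits needed to
# signal the filter choice).
candidates = {
    "fixed_8tap":  {"distortion": 1200.0, "rate": 96},
    "fixed_4tap":  {"distortion": 1350.0, "rate": 90},
    "learned_cnn": {"distortion": 1000.0, "rate": 104},
}

def select_target_filter(candidates, lam=4.0):
    # Pick the filter minimizing the rate-distortion cost J = D + lam * R.
    return min(candidates,
               key=lambda n: candidates[n]["distortion"] + lam * candidates[n]["rate"])

print(select_target_filter(candidates))           # low lambda favours low distortion
print(select_target_filter(candidates, lam=100))  # high lambda favours low rate
```

Note that λ trades distortion against rate: a learned filter may win despite a higher signalling cost if its prediction residual is small enough.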
- One implementation manner in which the encoder performs inter prediction on the currently coded image block to obtain the motion information of the currently coded image block may be:
- the encoder determines an integer-pixel reference image block that optimally matches the currently coded image block, and
- determines the motion information based on the prediction block, where the interpolation filter used in the interpolation that produced the prediction block is the target interpolation filter.
- In this way, the encoder can select the interpolation filter corresponding to the reference block with the least distortion to perform interpolation, reducing the code stream and improving the compression rate of the video image.
- Optionally, the candidate interpolation filter set includes a second interpolation filter obtained by any interpolation filter training method described in the first aspect or the second aspect.
- If the target interpolation filter is a second interpolation filter obtained by any interpolation filter training method described in the first aspect or the second aspect, then: the filter parameters of the target interpolation filter are preset filter parameters; or, the filter parameters of the target interpolation filter are filter parameters obtained according to the interpolation filter training method of the first aspect or the second aspect.
- Optionally, the coding information further includes the filter parameters of the target interpolation filter obtained by training; or, the coding information further includes a filter parameter difference value, which is the difference between the filter parameters of the target interpolation filter trained for the current image unit and the filter parameters of the target interpolation filter trained for the previously coded image unit.
- In this way, the encoder can train the second interpolation filter in the candidate interpolation filter set online, so that the interpolation filter can be adjusted in real time according to the content of the currently coded image unit, thereby improving prediction accuracy.
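The filter-parameter difference signalling described above amounts to delta coding of the taps between successive image units; the tap values and helper names in this sketch are illustrative.

```python
import numpy as np

def encode_filter_update(prev_params, new_params):
    # Encoder side: write only the difference from the previous unit's taps.
    return new_params - prev_params

def decode_filter_update(prev_params, delta):
    # Decoder side: rebuild the current unit's taps from the parsed difference.
    return prev_params + delta

prev = np.array([-1, 4, -11, 40, 40, -11, 4, -1])
new  = np.array([-1, 5, -12, 40, 40, -12, 5, -1])
delta = encode_filter_update(prev, new)
print(delta)  # small differences typically cost fewer bits than full parameters
assert np.array_equal(decode_filter_update(prev, delta), new)
```

Because online training only nudges the parameters between units, the differences stay small and are cheaper to entropy-code than retransmitting the whole parameter set.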
- the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
- In a fifth aspect, an embodiment of the present application further provides a video image decoding method, including:
- the decoder parses the indication information of the target interpolation filter from the code stream;
- performing a prediction process on the currently decoded image block based on the motion information of the currently decoded image block includes: performing sub-pixel interpolation through the target interpolation filter indicated by the indication information to obtain the prediction block of the currently decoded image block;
- based on the prediction block, the reconstructed block of the currently decoded image block is reconstructed.
- In a sixth aspect, an embodiment of the present application further provides a video image decoding method, including:
- the decoder parses, from the code stream, information of the currently decoded image block indicating the inter prediction mode of the currently decoded image block;
- if the inter prediction mode of the current image block is a non-target inter prediction mode, sub-pixel interpolation is performed through the target interpolation filter indicated by the indication information parsed from the code stream;
- if the inter prediction mode of the current image block is a target inter prediction mode,
- a prediction process is performed on the currently decoded image block based on the motion information of the currently decoded image block, where the prediction process includes: determining a target interpolation filter for the currently decoded image block, and performing sub-pixel interpolation through the target interpolation filter to obtain the prediction block of the currently decoded image block.
- Determining the target interpolation filter for the currently decoded image block specifically includes: determining that the interpolation filter used in the decoding process of a previously decoded image block is the target interpolation filter for the currently decoded image block; or, determining that the target interpolation filter for the currently decoded image block is the target interpolation filter indicated by the indication information of the target interpolation filter parsed from the code stream.
- In this way, the decoder selects the indicated interpolation filter during the inter prediction process to perform sub-pixel interpolation and obtain the prediction block of the currently decoded image block; selecting the interpolation filter according to the content of the image block gives the obtained prediction block higher prediction accuracy, reduces the code stream, and improves the compression rate of the video image.
- The decoder obtaining the motion information of the currently decoded image block may include, but is not limited to, the following three implementation manners:
- First implementation manner: the decoder parses the index of the motion information of the currently decoded image block from the code stream, and determines the motion information of the currently decoded image block based on the index and the candidate motion information list of the currently decoded image block.
- Second implementation manner: the decoder parses the motion information index and the motion vector difference of the currently decoded image block from the code stream; determines the motion vector prediction value of the currently decoded image block based on the index and the candidate motion information list of the currently decoded image block; and obtains the motion vector of the currently decoded image block based on the motion vector prediction value and the motion vector difference.
- Third implementation manner: in a target inter prediction mode (such as merge mode), if the inter prediction mode of the currently decoded image block is merge mode, the decoder uses the motion information of a previously decoded image block as the motion information of the currently decoded image block.
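The first two implementation manners can be sketched as follows; candidate values and function names are this sketch's own, and bitstream parsing and candidate-list construction are omitted.

```python
def motion_from_index(candidate_list, idx):
    # First manner: the parsed index selects the motion information directly
    # from the candidate motion information list.
    return candidate_list[idx]

def motion_from_mvp_plus_mvd(candidate_list, idx, mvd):
    # Second manner: the candidate at the parsed index is only a motion
    # vector predictor; the parsed motion vector difference is added to it.
    px, py = candidate_list[idx]
    dx, dy = mvd
    return (px + dx, py + dy)

cand_list = [(4, -2), (0, 0), (8, 3)]   # hypothetical candidate motion vectors
print(motion_from_index(cand_list, 2))                 # (8, 3)
print(motion_from_mvp_plus_mvd(cand_list, 0, (1, 1)))  # (5, -1)
```

The third manner (merge mode) is the degenerate case of the first: the inherited candidate's motion information is used as-is, with no index-specific refinement.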
- If the target interpolation filter is a second interpolation filter obtained by the interpolation filter training method described in the first aspect or the second aspect, then:
- the filter parameters of the target interpolation filter are preset filter parameters; or, the filter parameters of the target interpolation filter are filter parameters obtained according to the interpolation filter training method described in the first aspect or the second aspect.
- Optionally, the method may further include: configuring the target interpolation filter by using the filter parameters of the target interpolation filter of the currently decoded image unit.
- Optionally, the method may further include:
- parsing a filter parameter difference value from the code stream, where the filter parameter difference value is the difference between the filter parameters of the target interpolation filter for the currently decoded image unit and the filter parameters of the target interpolation filter for the previously decoded image unit;
- obtaining the filter parameters of the target interpolation filter for the currently decoded image unit from the difference value; and
- configuring the target interpolation filter with the filter parameters of the target interpolation filter of the currently decoded image unit.
- the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
- an embodiment of the present application further provides an interpolation filter training device, including several functional units for implementing any method of the first aspect.
- the training device of the interpolation filter may include:
- a label data acquisition module, configured to interpolate the pixels of the sample image at integer pixel positions through the first interpolation filter to obtain the first sub-pixel image of the sample image at the first fractional pixel position;
- an interpolation module, configured to input the sample image into a second interpolation filter to obtain a second sub-pixel image; and
- a parameter determination module, configured to determine the filter parameters of the second interpolation filter by minimizing a first function representing the difference between the first sub-pixel image and the second sub-pixel image.
- an embodiment of the present application further provides an interpolation filter training device, including several functional units for implementing any method of the second aspect.
- the training device of the interpolation filter may include:
- a label data acquisition module, configured to interpolate the pixels of the sample image at integer pixel positions through the first interpolation filter to obtain the first sub-pixel image of the sample image at the first fractional pixel position;
- an interpolation module, configured to input the sample image into a second interpolation filter to obtain a second sub-pixel image;
- an inverse interpolation module, configured to apply a flip operation to the second sub-pixel image, input the result into a third interpolation filter to obtain a first image, and obtain a second image through the inverse of the flip operation, where the second interpolation filter and the third interpolation filter share filter parameters; and
- a parameter determination module, configured to determine the filter parameters according to a first function representing the difference between the first sub-pixel image and the second sub-pixel image and a second function representing the difference between the sample image and the second image.
- an embodiment of the present application further provides an encoder, including several functional units for implementing any method of the third aspect.
- the encoder may include:
- an inter prediction unit, configured to perform inter prediction on the currently coded image block to obtain motion information of the currently coded image block, where the motion information of the currently coded image block points to a fractional pixel position, and the inter prediction unit includes a filter selection unit for determining a target interpolation filter for the currently coded image block from the set of candidate interpolation filters; and
- an entropy coding unit, configured to encode the currently coded image block based on the inter prediction mode of the currently coded image block and the motion information of the currently coded image block, obtain coding information, and encode the coding information into the code stream,
- where the coding information includes indication information of the target interpolation filter; the indication information of the target interpolation filter is used to indicate that the reference block at the fractional pixel position corresponding to the currently coded image block is obtained by sub-pixel interpolation through the target interpolation filter.
- an embodiment of the present application further provides an encoder, including several functional units for implementing any method of the fourth aspect.
- the encoder may include:
- an inter prediction unit, configured to perform inter prediction on the currently coded image block to obtain motion information of the currently coded image block, where the motion information of the currently coded image block points to a fractional pixel position, and the inter prediction unit includes a filter selection unit configured to determine a target interpolation filter for the currently coded image block from a set of candidate interpolation filters; and
- an entropy coding unit, configured to encode the currently coded image block based on the inter prediction mode of the currently coded image block and the motion information of the currently coded image block, obtain coding information, and encode the coding information into a code stream, where, if the inter prediction mode of the currently coded image block is a target inter prediction mode, the coding information does not include indication information of the target interpolation filter; if the inter prediction mode of the currently coded image block is a non-target inter prediction mode, the coding information includes indication information of the target interpolation filter, and the indication information of the target interpolation filter is used to indicate that the currently coded image block uses the target interpolation filter to perform sub-pixel interpolation.
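The conditional signalling performed by this entropy coding unit can be sketched as follows; the mode names and the dictionary layout are illustrative, not the application's actual syntax.

```python
def write_coding_info(inter_mode, filter_index, target_modes=("merge",)):
    # In a target inter prediction mode the decoder infers the interpolation
    # filter, so no filter index is written; other modes signal it explicitly.
    info = {"inter_mode": inter_mode}
    if inter_mode not in target_modes:
        info["filter_index"] = filter_index
    return info

print(write_coding_info("merge", 2))  # no explicit filter index in the stream
print(write_coding_info("amvp", 2))   # carries the filter index
```

Skipping the index in the target mode saves bits exactly where the filter choice can be inherited, mirroring how merge mode already inherits motion information.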
- an embodiment of the present application further provides a decoder including several functional units for implementing any method of the fifth aspect.
- the decoder may include:
- the entropy decoding unit is used to parse out the indication information of the target interpolation filter from the code stream; and obtain the motion information of the currently decoded image block, where the motion information points to the fractional pixel position;
- An inter prediction unit configured to perform a prediction process on the current decoded image block based on the motion information of the current decoded image block, where the prediction process includes: performing a division according to a target interpolation filter indicated by the indication information Pixel interpolation to obtain the prediction block of the current decoded image block.
- the reconstruction unit is configured to reconstruct the reconstruction block of the current decoded image block based on the prediction block of the current decoded image block.
- an embodiment of the present application further provides a decoder including several functional units for implementing any method of the sixth aspect.
- the decoder may include:
- An entropy decoding unit used to parse out the information of the current decoded image block indicating the inter prediction mode of the current decoded image block from the code stream;
- the inter prediction unit is used to obtain the motion information of the current decoded image block, where the motion information points to a fractional pixel position; and, if the inter prediction mode of the current image block is a non-target inter prediction mode, to perform a prediction process on the current decoded image block based on the motion information of the current decoded image block, where the prediction process includes: performing sub-pixel interpolation through the target interpolation filter indicated by the indication information of the target interpolation filter parsed from the code stream, to obtain the prediction block of the current decoded image block;
- the reconstruction unit is configured to reconstruct the current decoded image block based on the prediction block of the current decoded image block.
- an embodiment of the present application further provides an interpolation filter training device, including a memory and a processor; the memory is used to store program code, and the processor is used to call the program code, and execute Part or all of the steps of any one of the interpolation filter training methods described in the first aspect or the second aspect.
- the filter parameter of the second interpolation filter is determined by minimizing a first function representing the difference between the first sub-pixel image and the second sub-pixel image.
- the processor determining the filter parameters according to a first function representing the difference between the first sub-pixel image and the second sub-pixel image and a second function representing the difference between the sample image and the second image may include, but is not limited to, the following two implementation manners:
- First implementation manner: the filter parameters are determined by minimizing a third function, where the third function is a weighted sum of the first function, which represents the difference between the first sub-pixel image and the second sub-pixel image, and the second function, which represents the difference between the sample image and the second image.
- Second implementation manner: the filter parameters are determined by alternately minimizing the first function, which represents the difference between the first sub-pixel image and the second sub-pixel image, and the second function, which represents the difference between the sample image and the second image.
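The two manners can be illustrated with a minimal numerical sketch; mean squared error stands in for the difference functions purely for illustration, and all function and variable names are assumptions.

```python
import numpy as np

# Illustrative sketch of the two training manners above, using mean
# squared error as the difference measure; all names are assumptions.

def l2(a, b):
    """Mean squared difference between two images."""
    return float(np.mean((a - b) ** 2))

def joint_loss(sub1, sub2, sample, recon, lam=0.5):
    """First manner: minimize a weighted sum of the two terms."""
    return l2(sub1, sub2) + lam * l2(sample, recon)

def alternating_loss(step, sub1, sub2, sample, recon):
    """Second manner: minimize the two terms in alternation,
    e.g. the first term on even steps, the second on odd steps."""
    return l2(sub1, sub2) if step % 2 == 0 else l2(sample, recon)
```

In a real training loop, either loss would be fed to a gradient-based optimizer that updates the second interpolation filter's parameters.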
- the training device for the interpolation filter in the first aspect and the second aspect may be an encoding device or a compression device, and may be a device with data processing functions such as a computer, a server, or a terminal (e.g., a mobile phone or a tablet computer).
- an embodiment of the present application further provides an encoding device, including a memory and a processor; the memory is used to store program code; and the processor is used to call the program code to perform the third aspect or Part or all of the steps of any video image coding method described in the fourth aspect.
- performing: inter prediction on the current coded image block to obtain motion information of the current coded image block, where the motion information of the current coded image block points to a fractional pixel position, and the inter prediction process includes: determining a target interpolation filter for the current encoded image block from the set of candidate interpolation filters;
- the coding information includes indication information of a target interpolation filter; the indication information of the target interpolation filter is used to indicate a reference block that obtains a fractional pixel position corresponding to the current encoded image block by performing sub-pixel interpolation through the target interpolation filter.
- the method includes: determining a target interpolation filter for the current encoded image block from the set of candidate interpolation filters;
- the encoder described in the fourteenth aspect may also be an encoding device including the encoder, and the encoding device may be a device with a data processing function such as a computer, a server, or a terminal (eg, mobile phone, tablet computer, etc.).
- an implementation manner in which the processor determines the target interpolation filter for the current encoded image block from the candidate interpolation filter set may be: the processor determines a target interpolation filter for the current encoded image block from the set of candidate interpolation filters according to a rate-distortion cost criterion.
- an implementation manner in which the processor performs inter prediction on the current coded image block to obtain the motion information of the current coded image block may be: obtaining a prediction block through interpolation, and determining the motion information based on the prediction block, where the interpolation filter used to obtain the prediction block by interpolation is the target interpolation filter.
- the candidate interpolation filter set includes the second interpolation filter obtained by any of the interpolation filter training methods described in the first aspect or the second aspect.
- if the target interpolation filter is a second interpolation filter obtained by any of the interpolation filter training methods described in the first aspect or the second aspect, then: the filter parameter of the target interpolation filter is a preset filter parameter; or, the filter parameter of the target interpolation filter is a filter parameter obtained by training according to the interpolation filter training method described in the first aspect or the second aspect.
- the coding information further includes the filter parameters of the target interpolation filter obtained by training; or, the coding information further includes a filter parameter difference value, where the filter parameter difference value is the difference between the filter parameters of the target interpolation filter trained for the current image unit and the filter parameters of the target interpolation filter trained for the previously encoded image unit.
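The differential signaling of filter parameters can be sketched minimally; the function names and example tap values are illustrative assumptions, not the patent's syntax.

```python
import numpy as np

# Minimal sketch (names and values are illustrative assumptions): the
# encoder may signal either the trained filter parameters themselves,
# or only their difference from the previously encoded image unit's
# parameters, which is usually cheaper to code.

def encode_param_diff(current, previous):
    """Difference value written to the code stream."""
    return current - previous

def decode_params(previous, diff):
    """Decoder-side reconstruction of the current filter parameters."""
    return previous + diff

prev_params = np.array([-1.0, 4.0, -11.0, 40.0])
curr_params = np.array([-1.0, 5.0, -10.0, 39.0])
diff = encode_param_diff(curr_params, prev_params)
assert np.array_equal(decode_params(prev_params, diff), curr_params)
```

Because successive image units tend to have similar content, the difference values are small and compress better than the raw parameters.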
- the processor can perform online training on the second interpolation filter in the candidate interpolation filter set, so that the interpolation filter can be adjusted in real time according to the content of the currently encoded image unit, and the prediction accuracy is improved.
- the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
- an embodiment of the present application further provides a decoding device, including a memory and a processor; the memory is used to store program code; and the processor is used to call the program code to perform the fifth aspect or Part or all of the steps of any video image decoding method described in the sixth aspect.
- the indication information of the target interpolation filter is parsed from the code stream.
- a prediction process is performed on the current decoded image block based on the motion information of the current decoded image block, where the prediction process includes: performing sub-pixel interpolation according to the target interpolation filter indicated by the indication information to obtain the prediction block of the current decoded image block.
- the reconstructed block of the current decoded image block is reconstructed.
- the inter prediction mode of the current image block is a non-target inter prediction mode
- the inter prediction mode of the current image block is a target inter prediction mode
- a prediction process is performed on the current decoded image block based on the motion information of the current decoded image block, where the prediction process includes : Determining a target interpolation filter for the current decoded image block; performing sub-pixel interpolation through the target interpolation filter to obtain the prediction block of the current decoded image block.
- the processor determining the target interpolation filter for the current decoded image block specifically includes: determining that the interpolation filter used in the decoding process of a previously decoded image block is the target interpolation filter for the current decoded image block; or, determining the target interpolation filter for the current decoded image block according to indication information parsed from the code stream.
- the processor acquiring the motion information of the currently decoded image block may include, but is not limited to, the following three implementation manners:
- First implementation manner: the processor parses out the index of the motion information of the current decoded image block from the code stream, and determines the motion information of the current decoded image block based on the index of the motion information and the candidate motion information list of the current decoded image block.
- Second implementation manner: the processor parses out the motion information index and the motion vector difference of the current decoded image block from the code stream; determines the motion vector prediction value of the current decoded image block based on the index of the motion information of the current decoded image block and the candidate motion information list of the current decoded image block; and further obtains the motion vector of the current decoded image block based on the motion vector prediction value and the motion vector difference value.
- Third implementation manner: in merge mode, the processor obtains the motion information of a previously decoded image block as the motion information of the current decoded image block.
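The first two manners above can be sketched concisely; the data structures (tuples for motion vectors, a plain list for the candidate list) are illustrative assumptions.

```python
# Sketch of the manners of obtaining motion information described
# above; the data structures here are illustrative assumptions.

def mv_from_index(candidate_list, index):
    """Merge-like manner: the parsed index selects a candidate MV
    from the candidate motion information list directly."""
    return candidate_list[index]

def mv_from_mvp_and_mvd(candidate_list, index, mvd):
    """AMVP-like manner: MV = predictor selected from the candidate
    list + the motion vector difference parsed from the code stream."""
    mvp = candidate_list[index]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

candidates = [(4, -2), (0, 8)]
assert mv_from_index(candidates, 1) == (0, 8)
assert mv_from_mvp_and_mvd(candidates, 0, (1, 1)) == (5, -1)
```

The third manner (merge mode) behaves like `mv_from_index` with the inherited motion information reused wholesale, no difference value being parsed.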
- if the target interpolation filter is a second interpolation filter obtained by the interpolation filter training method described in the first aspect or the second aspect, then:
- the filter parameter of the target interpolation filter is a preset filter parameter; or, the filter parameter of the target interpolation filter is a filter parameter obtained according to the interpolation filter training method described in the first aspect or the second aspect.
- the processor may further execute:
- the method may further include: configuring the target interpolation filter by using the filter parameters of the target interpolation filter of the currently decoded image unit.
- the processor may further execute:
- the filter parameter difference value is parsed from the code stream, where the filter parameter difference value is the difference between the filter parameters of the target interpolation filter for the currently decoded image unit and the filter parameters of the target interpolation filter for the previously decoded image unit; the filter parameters of the target interpolation filter for the currently decoded image unit are obtained accordingly, and the target interpolation filter is configured with the filter parameters of the target interpolation filter of the currently decoded image unit.
- the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
- an embodiment of the present application further provides a computer-readable storage medium, including program code, which, when run on a computer, causes the computer to execute some or all of the steps of any interpolation filter training method described in the first aspect or the second aspect.
- an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to execute some or all of the steps of any interpolation filter training method described in the first aspect or the second aspect.
- an embodiment of the present application further provides a computer-readable storage medium, including program code, which, when run on a computer, causes the computer to execute some or all of the steps of any video image encoding method described in the third aspect or the fourth aspect.
- an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to execute some or all of the steps of any video image encoding method described in the third aspect or the fourth aspect.
- an embodiment of the present application further provides a computer-readable storage medium, including program code, which, when run on a computer, causes the computer to execute some or all of the steps of any video image decoding method described in the fifth aspect or the sixth aspect.
- an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to execute some or all of the steps of any video image decoding method described in the fifth aspect or the sixth aspect.
- FIG. 1 is a schematic block diagram of a video encoding and decoding system according to an embodiment of this application;
- FIG. 2 is a schematic block diagram of an encoder in an embodiment of this application.
- FIG. 3 is a schematic block diagram of a decoder in an embodiment of this application.
- FIG. 5 is a schematic explanatory diagram of the principle of the reversibility of sub-pixel interpolation in an embodiment of the present application.
- FIG. 6A is a schematic flowchart of an interpolation filter training method in an embodiment of the present application.
- FIG. 6B is a schematic explanatory diagram of a training process of an interpolation filter in an embodiment of the present application.
- FIG. 6C is a schematic flowchart of another interpolation filter training method in an embodiment of the present application.
- FIG. 6D is a schematic explanatory diagram of another training process of interpolation filters in an embodiment of the present application.
- FIG. 7 is a schematic flowchart of a video image encoding method in an embodiment of this application.
- FIG. 8 is a schematic flowchart of another video image encoding method in an embodiment of the present application.
- FIG. 9 is a schematic flowchart of a video image decoding method in an embodiment of this application.
- FIG. 10 is a schematic flowchart of another video image decoding method in an embodiment of the present application.
- FIG. 11 is a schematic flowchart of yet another video image decoding method in an embodiment of the present application.
- FIG. 12 is a schematic block diagram of an interpolation filter training device provided by an embodiment of the present invention.
- FIG. 13 is a schematic block diagram of an interpolation filter training device provided by an embodiment of the present invention.
- FIG. 14 is a schematic block diagram of another interpolation filter training device provided by an embodiment of the present invention.
- FIG. 15 is a schematic block diagram of another encoder in an embodiment of this application.
- FIG. 16 is a schematic block diagram of another decoder in an embodiment of the present application.
- FIG. 17 is a schematic block diagram of an encoding device or a decoding device in an embodiment of the present application.
- the disclosure made in conjunction with a described method may be equally applicable to the corresponding device or system for performing the method, and vice versa.
- the corresponding device may include one or more units, such as functional units, to perform the one or more described method steps (e.g., one unit performs one or more steps, or multiple units each perform one or more of the multiple steps), even if such one or more units are not explicitly described or illustrated in the drawings.
- the corresponding method may include a step to perform the functionality of one or more units (e.g., one step performs the functionality of one or more units, or multiple steps each perform the functionality of one or more of the multiple units), even if such one or more steps are not explicitly described or illustrated in the drawings.
- the features of the exemplary embodiments and / or aspects described herein may be combined with each other.
- Video coding usually refers to processing a sequence of pictures that form a video or video sequence.
- In the field of video coding, the terms "picture", "frame" or "image" may be used as synonyms.
- the video encoding used in this application means video encoding or video decoding.
- Video encoding is performed on the source side, and usually involves processing (eg, by compressing) the original video picture to reduce the amount of data required to represent the video picture (thereby storing and / or transmitting more efficiently).
- Video decoding is performed on the destination side and usually involves inverse processing relative to the encoder to reconstruct the video picture.
- the “encoding” of video pictures should be understood as referring to “encoding” or “decoding” of video sequences.
- the combination of the encoding part and the decoding part is also called codec (encoding and decoding).
- the original video picture can be reconstructed, that is, the reconstructed video picture has the same quality as the original video picture (assuming no transmission loss or other data loss during storage or transmission).
- further compression is performed by, for example, quantization to reduce the amount of data required to represent the video picture, but the decoder side cannot fully reconstruct the video picture; that is, the quality of the reconstructed video picture is lower or worse than the quality of the original video picture.
- Several video coding standards since H.261 belong to the class of "lossy hybrid video codecs" (i.e., they combine spatial and temporal prediction in the sample domain with 2D transform coding for applying quantization in the transform domain).
- Each picture of a video sequence is usually divided into non-overlapping block sets, which are usually encoded at the block level.
- the encoder side usually processes the encoded video at the block (video block) level.
- the prediction block is generated by spatial (intra-picture) prediction and temporal (inter-picture) prediction.
- the encoder duplicates the decoder processing loop so that the encoder and the decoder generate the same prediction (e.g., intra-frame prediction and inter-frame prediction) and / or reconstruction for processing, i.e., encoding, subsequent blocks.
- the terms "block”, “image block” and “picture block” may be used as synonyms and may be part of a picture or frame.
- High Efficiency Video Coding (HEVC) was developed by the Joint Collaboration Team on Video Coding (JCT-VC) of the ITU-T Video Coding Experts Group (VCEG) and the ISO / IEC Motion Picture Experts Group (MPEG); its successor, Versatile Video Coding (VVC, ITU-T H.266), is the next-generation video coding standard.
- the CTU is split into multiple CUs by using a quadtree structure represented as a coding tree.
- a decision is made at the CU level whether to use inter-picture (temporal) or intra-picture (spatial) prediction to encode picture regions.
- Each CU can be further split into one, two, or four PUs according to the PU split type. The same prediction process is applied within a PU, and related information is transmitted to the decoder on the basis of the PU.
- the CU may be divided into transform units (TU) according to other quadtree structures similar to the coding tree used for the CU.
- A quad-tree and binary-tree (quad-tree and binary tree, QTBT) partition structure may also be used.
- the CU may have a square or rectangular shape.
- the coding tree unit (CTU) is first divided by a quadtree structure.
- the quadtree leaf nodes are further divided by a binary tree structure.
- the binary-tree leaf node is called a coding unit (CU), and that segmentation is used for prediction and transformation processing without any further segmentation.
- This means that CU, PU and TU have the same block size in the QTBT coding block structure.
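The quadtree-then-binary-tree recursion described above can be sketched as follows. This is an illustrative recursion only, with the split decisions stubbed out; it is not the normative QTBT signaling, and all names and thresholds are assumptions.

```python
# Illustrative QTBT recursion (decision functions are stubs; a real
# encoder would decide splits by rate-distortion cost).

def qtbt_partition(x, y, w, h, qt_allowed=True, min_size=8):
    if qt_allowed and w > min_size and w == h and should_split_qt(x, y, w, h):
        half = w // 2
        leaves = []
        for dy in (0, half):
            for dx in (0, half):
                leaves += qtbt_partition(x + dx, y + dy, half, half,
                                         True, min_size)
        return leaves
    # Once binary splitting starts, quadtree splitting is no longer allowed.
    direction = should_split_bt(x, y, w, h)
    if direction == "hor" and h > min_size:
        return (qtbt_partition(x, y, w, h // 2, False, min_size) +
                qtbt_partition(x, y + h // 2, w, h // 2, False, min_size))
    if direction == "ver" and w > min_size:
        return (qtbt_partition(x, y, w // 2, h, False, min_size) +
                qtbt_partition(x + w // 2, y, w // 2, h, False, min_size))
    return [(x, y, w, h)]  # leaf: one CU (= PU = TU in the QTBT structure)

# Stub decisions: split 64x64 blocks by quadtree once, no binary splits.
def should_split_qt(x, y, w, h):
    return w > 32

def should_split_bt(x, y, w, h):
    return None
```

With these stub decisions, a 64x64 CTU yields four 32x32 leaf CUs; in practice the decisions vary per block content.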
- An "encoded image block" is an image block applied at the encoding end;
- a "decoded image block" is an image block applied at the decoding end.
- A "current coded image block" may also be represented as a "currently to-be-coded image block" or "currently encoded block", etc.;
- a "currently decoded image block" may also be represented as a "currently to-be-decoded image block" or "currently decoded block", and so on.
- Reference block can also be expressed as “reference image block”;
- prediction block can be expressed as “prediction image block”, and in some scenes can also be expressed as “optimal matching block” or “matching block” and so on.
- the "first interpolation filter” is an interpolation filter provided in the prior art, and may be a fixed coefficient interpolation filter, for example, a bilinear interpolation filter, a bicubic interpolation filter, etc .; or It is a content adaptive interpolation filter or other kinds of interpolation filters.
- In H.264/AVC, a 6-tap finite impulse response filter is used to generate half-pixel samples, and simple bilinear interpolation is used to generate quarter-pixel samples.
- the interpolation filter in HEVC has made many improvements compared to H.264 / AVC.
- An 8-tap filter is used to generate half-pixel samples, while quarter-pixel samples are generated using a 7-tap interpolation filter.
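The 8-tap half-pel coefficients below are the well-known HEVC luma DCT-IF taps; the surrounding code is an illustrative 1-D sketch (float arithmetic, no border padding), not a normative implementation.

```python
import numpy as np

# HEVC luma half-pel taps; the code around them is a non-normative
# 1-D illustration (float math, no edge padding).
HEVC_HALF_PEL = np.array([-1, 4, -11, 40, 40, -11, 4, -1]) / 64.0

def half_pel_interp_1d(samples):
    """Interpolate half-pixel positions between integer-pel samples."""
    return np.convolve(samples, HEVC_HALF_PEL, mode="valid")

# The taps sum to 1, so a flat signal is reproduced at half-pel positions.
row = np.full(16, 10.0)
print(half_pel_interp_1d(row))
```

A real codec applies such filters separably in the horizontal and vertical directions with integer arithmetic, rounding, and clipping.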
- a typical adaptive interpolation filter estimates the filter coefficients at the encoding end according to the error of motion compensation prediction, and then encodes the filter coefficients into the code stream.
- a separable adaptive interpolation filter is proposed, which can achieve a significant reduction in complexity while basically maintaining the coding performance.
- second interpolation filter and “third interpolation filter” are interpolation filters obtained based on the interpolation filter training method provided in the embodiments of the present application.
- the second interpolation filter and / or the third interpolation filter may take the form of a support vector machine (SVM), a neural network (NN), a convolutional neural network (CNN), or other forms, which is not limited in this embodiment of the present application.
- the "target interpolation filter” is the selected interpolation filter in the set of candidate filters.
- the “candidate interpolation filter set” may include one or more interpolation filters, the types of the plurality of interpolation filters are different, and may include but not limited to the second interpolation filter herein.
- the plurality of interpolation filters included in the candidate filter set may not include the second interpolation filter.
- the motion information may include a motion vector, which is an important parameter in the inter prediction process and represents the spatial displacement of a previously coded image block relative to the current coded image block.
- Motion estimation methods such as motion search can be used to obtain motion vectors.
- Historically, the bits representing the motion vector were included in the encoded bit stream to allow the decoder to reproduce the prediction block, thereby obtaining the reconstructed block.
- It was later proposed to use a reference motion vector to differentially encode the motion vector; that is, instead of encoding the entire motion vector, only the difference between the motion vector and the reference motion vector is encoded.
- the reference motion vector may be selected from previously used motion vectors in the video stream; selecting a previously used motion vector to encode the current motion vector can further reduce the number of bits included in the encoded video bitstream.
- Embodiments of the encoder 100, decoder 200, and encoding system 300 are described below based on FIGS. 1 to 3.
- FIG. 1 is a conceptual or schematic block diagram illustrating an exemplary encoding system 300, for example, a video encoding system 300 that can utilize the technology of the present application (this disclosure).
- the encoder 100 eg, video encoder 100
- the decoder 200 eg, video decoder 200
- the encoding system 300 includes a source device 310 for providing encoded data 330, for example, an encoded picture 330, to a destination device 320 that decodes the encoded data 330, for example.
- the source device 310 includes the encoder 100, and optionally, may also include a picture source 312, a preprocessing unit 314 (for example, a picture preprocessing unit 314), and a communication interface or communication unit 318.
- the image source 312 may include or may be any type of image capture device, for example, for capturing real-world images; and / or any type of device for generating images or comments (for screen content encoding, some text on the screen is also considered to be an image, or a part of an image, to be encoded), for example, a computer graphics processor for generating computer animation pictures; or any type of device for acquiring and / or providing real-world pictures or computer animation pictures (for example, screen content or virtual reality (VR) pictures), and / or any combination thereof (for example, augmented reality (AR) pictures).
- the (digital) picture is or can be regarded as a two-dimensional array or matrix of sampling points with luminance values.
- the sampling points in the array may also be called pixels (short for picture element) or pels.
- the number of sampling points in the horizontal or vertical direction (or axis) of the array or picture defines the size and / or resolution of the picture.
- three color components are usually used, that is, a picture can be represented or contain three sampling arrays.
- the picture includes corresponding red, green and blue sampling arrays.
- each pixel is usually expressed in a luma / chroma format or color space, for example, YCbCr, including the luminance component indicated by Y (sometimes also indicated by L) and the two chrominance components indicated by Cb and Cr.
- the luminance (abbreviated as luma) component Y represents brightness or gray-scale intensity (for example, the two are the same in gray-scale pictures), while the two chrominance (abbreviated as chroma) components Cb and Cr represent the chroma or color information components.
- a YCbCr format picture includes a luminance sampling array of luminance sampling values (Y), and two chrominance sampling arrays of chrominance values (Cb and Cr). RGB format pictures can be converted or transformed into YCbCr format and vice versa; this process is also called color transformation or conversion. If a picture is black and white, the picture may include only the luminance sampling array.
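One common instance of the color transform mentioned above is the full-range BT.601 RGB-to-YCbCr conversion used in JFIF; other standards use different coefficients and ranges, so this is an illustrative choice rather than the only one.

```python
import numpy as np

# Full-range BT.601 (JFIF) RGB -> YCbCr; coefficients differ in other
# standards (e.g. BT.709, limited-range video).

def rgb_to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

# A neutral gray pixel carries no color information: Cb = Cr = 128.
print(rgb_to_ycbcr(np.array([[128.0, 128.0, 128.0]])))
```

Separating luma from chroma in this way is what allows codecs to subsample the chroma arrays with little visible loss.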
- the picture source 312 may be, for example, a camera for capturing pictures, a memory such as a picture memory including or storing previously captured or generated pictures, and / or any type of (internal or external) interface for acquiring pictures.
- the camera may be, for example, an integrated camera local or integrated in the source device, and the memory may be an integrated memory local or for example integrated in the source device.
- the interface may be, for example, an external interface that receives pictures from an external video source.
- the external video source is, for example, an external picture capture device, such as a camera, external memory, or external picture generation device.
- the external picture generation device is, for example, an external computer graphics processor, computer, or server.
- the interface may be any type of interface according to any proprietary or standardized interface protocol, such as a wired or wireless interface, an optical interface.
- the interface for acquiring the picture data 313 may be the same interface as the communication interface 318 or a part of the communication interface 318.
- the picture or picture data 313 (for example, the video data 312) may also be referred to as the original picture or the original picture data 313.
- the pre-processing unit 314 is used to receive (original) picture data 313 and perform pre-processing on the picture data 313 to obtain the pre-processed picture 315 or the pre-processed picture data 315.
- the preprocessing performed by the preprocessing unit 314 may include trimming, color format conversion (for example, conversion from RGB to YCbCr), color adjustment, or denoising. It can be understood that the pre-processing unit 314 may be an optional component.
- An encoder 100 (eg, video encoder 100) is used to receive pre-processed picture data 315 and provide encoded picture data 171 (details will be described further below, for example, based on FIG. 2, FIG. 7, or FIG. 8).
- the encoder 100 may be used to perform a video image encoding method, and in another embodiment, the encoder 100 may also be used for training of interpolation filters.
- the communication interface 318 of the source device 310 may be used to receive the encoded picture data 171 and transmit it to other devices, for example, the destination device 320 or any other device, for storage or direct reconstruction; or to process the encoded picture data 171 and / or the encoded data 330 before transmitting the encoded data 330 to other devices, such as the destination device 320 or any other device, for decoding or storage.
- the destination device 320 includes a decoder 200 (for example, a video decoder 200), and optionally, may also include a communication interface or communication unit 322, a post-processing unit 326, and a display device 328.
- the communication interface 322 of the destination device 320 is used, for example, to receive the encoded picture data 171 or the encoded data 330 directly from the source device 310 or from any other source, such as a storage device, for example, an encoded picture data storage device.
- the communication interface 318 and the communication interface 322 may be used to transmit or receive the encoded picture data 171 or the encoded data 330 through a direct communication link between the source device 310 and the destination device 320, or through any type of network. The link is, for example, a direct wired or wireless connection, and the network is, for example, a wired or wireless network or any combination thereof, or any kind of private and public network, or any combination thereof.
- the communication interface 318 may be used, for example, to encapsulate the encoded picture data 171 into a suitable format, such as a packet, for transmission on a communication link or communication network.
- the communication interface 322 forming a corresponding part of the communication interface 318 may be used, for example, to depacketize the encoded data 330 to obtain the encoded picture data 171.
- Both the communication interface 318 and the communication interface 322 may be configured as one-way communication interfaces, as indicated by the arrow for the encoded picture data 330 from the source device 310 to the destination device 320 in FIG. 1, or as two-way communication interfaces, and may be used, for example, to send and receive messages to establish a connection, and to confirm and exchange any other information related to the communication link and/or data transmission, such as the transmission of encoded picture data.
- the decoder 200 is used to receive the encoded picture data 171 and provide the decoded picture data 231 or the decoded picture 231 (details will be described further below, for example, based on FIG. 3, FIG. 9, FIG. 10, or FIG. 11). In one example, the decoder 200 may be used to perform a video image decoding method described below.
- the post-processor 326 of the destination device 320 is used to post-process the decoded picture data 231 (also referred to as reconstructed picture data), for example, the decoded picture 231, to obtain post-processed picture data 327, for example, a post-processed picture 327.
- the post-processing performed by the post-processing unit 326 may include, for example, color format conversion (e.g., conversion from YCbCr to RGB), color adjustment, trimming, or resampling, or any other processing for, for example, preparing the decoded picture data 231 for display by the display device 328.
- the display device 328 of the destination device 320 is used to receive post-processed picture data 327 to display pictures to, for example, a user or a viewer.
- the display device 328 may be or may include any type of display for presenting reconstructed pictures, for example, an integrated or external display or monitor.
- the display may include a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a plasma display, a projector, a micro LED display, liquid crystal on silicon (LCoS), a digital light processor (DLP), or any other type of display.
- Although FIG. 1 illustrates the source device 310 and the destination device 320 as separate devices, device embodiments may also include the functionality of both, i.e., the source device 310 or the corresponding functionality and the destination device 320 or the corresponding functionality. In such embodiments, the same hardware and/or software, separate hardware and/or software, or any combination thereof may be used to implement the source device 310 or the corresponding functionality and the destination device 320 or the corresponding functionality.
- Both the encoder 100 and the decoder 200 may be implemented as any of various suitable circuits, for example, one or more microprocessors, digital signal processors (digital signal processor, DSP), application-specific integrated circuit (ASIC), field-programmable gate array (FPGA), discrete logic, hardware, or any combination thereof.
- the device may store the instructions of the software in a suitable non-transitory computer-readable storage medium, and may use one or more processors to execute the instructions in hardware to perform the techniques of the present disclosure. Any one of the foregoing (including hardware, software, a combination of hardware and software, etc.) can be regarded as one or more processors.
- Each of the video encoder 100 and the video decoder 200 may be included in one or more encoders or decoders, any of which may be integrated as part of a combined encoder/decoder (codec) in the corresponding device.
- the source device 310 may be referred to as a video encoding device or a video encoding apparatus.
- the destination device 320 may be referred to as a video decoding device or a video decoding apparatus.
- the source device 310 and the destination device 320 may be examples of video coding devices or video coding apparatuses.
- Source device 310 and destination device 320 may include any of a variety of devices, including any type of handheld or stationary device, for example, a notebook or laptop computer, mobile phone, smartphone, tablet or tablet computer, camera, desktop computer, set-top box, television, display device, digital media player, video game console, video streaming device (such as a content service server or content distribution server), broadcast receiver device, broadcast transmitter device, etc., and may use no operating system or any type of operating system.
- source device 310 and destination device 320 may be equipped for wireless communication. Therefore, the source device 310 and the destination device 320 may be wireless communication devices.
- the video encoding system 300 shown in FIG. 1 is only an example, and the technology of the present application may be applied to video coding settings (e.g., video encoding or video decoding) that do not necessarily include any data communication between the encoding and decoding devices.
- data can be retrieved from local storage, streamed over a network, and so on.
- the video encoding device may encode the data and store the data to the memory, and / or the video decoding device may retrieve the data from the memory and decode the data.
- encoding and decoding may be performed by devices that do not communicate with each other but simply encode data to memory and/or retrieve data from memory and decode the data.
- video decoder 200 may be used to perform the reverse process.
- the video decoder 200 may be used to receive and parse such syntax elements, and decode relevant video data accordingly.
- the video encoder 100 may entropy encode one or more syntax elements, such as indication information defining the target filter and parameter information of the interpolation filter, into an encoded video bitstream (also referred to as a codestream).
- video decoder 200 may parse such syntax elements and decode relevant video data accordingly.
- the video encoder 100 includes a residual calculation unit 104, a transform processing unit 106, a quantization unit 108, an inverse quantization unit 110, an inverse transform processing unit 112, a reconstruction unit 114, a buffer 116, a loop filter unit 120, a decoded picture buffer (DPB) 130, a prediction processing unit 160, and an entropy encoding unit 170.
- the prediction processing unit 160 may include an inter prediction unit 144, an intra prediction unit 154, and a mode selection unit 162.
- the inter prediction unit 144 may include a motion estimation unit and a motion compensation unit (not shown).
- the video encoder 100 shown in FIG. 2 may also be referred to as a hybrid video encoder or a video encoder based on a hybrid video codec.
- the residual calculation unit 104, the transform processing unit 106, the quantization unit 108, the prediction processing unit 160, and the entropy encoding unit 170 form the forward signal path of the encoder 100, while, for example, the inverse quantization unit 110, the inverse transform processing unit 112, the reconstruction unit 114, the buffer 116, the loop filter 120, the decoded picture buffer (DPB) 130, and the prediction processing unit 160 form the backward signal path of the encoder, where the backward signal path of the encoder corresponds to the signal path of the decoder (see decoder 200 in FIG. 3).
- the encoder 100 receives, for example through an input 102, a picture 101 or a block 103 of the picture 101, the picture 101 being, for example, a picture in a sequence of pictures forming a video or a video sequence.
- the picture block 103 may also be called a current picture block or a picture block to be coded, and the picture 101 may be called a current picture or a picture to be coded (especially when the current picture is distinguished from other pictures in video coding, such as previously encoded and/or decoded pictures of the same video sequence, i.e., the video sequence that also includes the current picture).
- An embodiment of the encoder 100 may include a division unit (not shown in FIG. 2) for dividing the picture 101 into a plurality of blocks such as the block 103, usually into a plurality of non-overlapping blocks.
- the division unit can be used to use the same block size and the corresponding grid defining the block size for all pictures of the video sequence, or to change the block size between pictures, subsets, or groups of pictures, and to divide each picture into corresponding blocks.
- the prediction processing unit 160 of the video encoder 100 may be used to perform any combination of the above-mentioned segmentation techniques.
- the block 103 is also or can be regarded as a two-dimensional array or matrix of sampling points with luminance values (sample values), although its size is smaller than that of the picture 101.
- the block 103 may include, for example, one sampling array (for example, a luma array in the case of a black-and-white picture 101), or three sampling arrays (for example, one luma array and two chroma arrays in the case of a color picture), or any other number and/or kind of arrays depending on the applied color format.
- the number of sampling points in the horizontal and vertical direction (or axis) of the block 103 defines the size of the block 103.
- the encoder 100 shown in FIG. 2 is used to encode the picture 101 block by block, for example, to perform encoding and prediction on each block 103.
- the residual calculation unit 104 is used to calculate the residual block 105 based on the picture block 103 and the prediction block 165 (further details of the prediction block 165 are provided below), for example, by subtracting, sample by sample (pixel by pixel), the sample values of the prediction block 165 from the sample values of the picture block 103, to obtain the residual block 105 in the sample domain.
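The pixel-wise subtraction described above can be sketched as follows. This is an illustrative sketch, not the patent's exact implementation; the sample values are hypothetical.

```python
# Illustrative sketch of the sample-by-sample subtraction performed by a
# residual calculation unit: residual = current block - prediction block.

def residual_block(current_block, prediction_block):
    """Subtract prediction samples from current samples, pixel by pixel."""
    return [
        [cur - pred for cur, pred in zip(cur_row, pred_row)]
        for cur_row, pred_row in zip(current_block, prediction_block)
    ]

current = [[120, 122], [119, 121]]     # hypothetical 2x2 luma samples
prediction = [[118, 123], [119, 120]]  # hypothetical prediction block
residual = residual_block(current, prediction)
# residual == [[2, -1], [0, 1]]
```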
- the transform processing unit 106 is used to apply a transform such as discrete cosine transform (DCT) or discrete sine transform (DST) on the sample values of the residual block 105 to obtain transform coefficients 107 in the transform domain .
- the transform coefficient 107 may also be called a transform residual coefficient, and represents a residual block 105 in the transform domain.
- the transform processing unit 106 may be used to apply integer approximations of DCT / DST, such as the transform specified by HEVC / H.265. Compared with the orthogonal DCT transform, this integer approximation is usually scaled by a factor. In order to maintain the norm of the residual block processed by the forward and inverse transform, an additional scaling factor is applied as part of the transform process.
- the scaling factor is usually selected based on certain constraints, for example, the scaling factor is a power of two used for the shift operation, the bit depth of the transform coefficient, the accuracy, and the trade-off between implementation cost and so on.
- a specific scaling factor is specified for the inverse transform by the inverse transform processing unit 212 on the decoder 200 side (and for the corresponding inverse transform by the inverse transform processing unit 112 on the encoder 100 side), and accordingly, the corresponding scaling factor is specified for the forward transform by the transform processing unit 106 on the encoder 100 side.
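The norm-preservation property the scaling factors are meant to maintain can be illustrated with a reference (floating-point) orthonormal DCT-II, which preserves the norm of the residual exactly; the codec's integer approximation deviates from this, which is why compensating scale factors are applied in the forward and inverse transforms. The 4-point residual row below is hypothetical.

```python
import math

# Sketch (not the codec's integer transform): an orthonormal DCT-II
# preserves the norm of the input vector, the property that the extra
# scaling factors in the integer approximation are meant to restore.

def dct_matrix(n):
    """Rows are the orthonormal DCT-II basis vectors."""
    mat = []
    for k in range(n):
        s = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        mat.append([s * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                    for i in range(n)])
    return mat

def apply(mat, vec):
    return [sum(m * v for m, v in zip(row, vec)) for row in mat]

x = [4.0, -2.0, 3.0, 1.0]             # hypothetical residual row
coeffs = apply(dct_matrix(4), x)      # forward transform
norm_in = math.sqrt(sum(v * v for v in x))
norm_out = math.sqrt(sum(v * v for v in coeffs))
# norm_in == norm_out (up to floating-point rounding)
```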
- the quantization unit 108 is used to quantize the transform coefficient 107, for example, by applying scalar quantization or vector quantization, to obtain the quantized transform coefficient 109.
- the quantized transform coefficient 109 may also be referred to as a quantized residual coefficient 109.
- the quantization process can reduce the bit depth associated with some or all of the transform coefficients 107. For example, n-bit transform coefficients can be rounded down to m-bit transform coefficients during quantization, where n is greater than m.
- the degree of quantization can be modified by adjusting the quantization parameter (QP). For example, for scalar quantization, different scales can be applied to achieve thinner or coarser quantization.
- For example, a smaller quantization step size corresponds to finer quantization, and a larger quantization step size corresponds to coarser quantization. A suitable quantization step size can be indicated by a quantization parameter (QP). For example, the quantization parameter may be an index into a predefined set of suitable quantization step sizes: smaller quantization parameters may correspond to fine quantization (smaller quantization step sizes), and larger quantization parameters may correspond to coarse quantization (larger quantization step sizes).
- the quantization may include dividing by the quantization step size, while the corresponding dequantization or inverse quantization, for example performed by the inverse quantization unit 110, may include multiplying by the quantization step size.
- Embodiments according to some standards such as HEVC may use quantization parameters to determine the quantization step size.
- the quantization step size can be calculated based on the quantization parameter using fixed-point approximation including equations for division. Additional scaling factors can be introduced for quantization and inverse quantization to restore the norm of the residual block that may be modified due to the scale used in the fixed-point approximation of the equations for quantization step size and quantization parameter.
- the scale of inverse transform and inverse quantization may be combined.
- a custom quantization table can be used and signaled from the encoder to the decoder in the bitstream, for example.
- Quantization is a lossy operation, where the larger the quantization step, the greater the loss.
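The QP-to-step-size relation and the lossy round trip described above can be sketched as follows. This is a hedged sketch of HEVC-style scalar quantization; the exact integer arithmetic in the standard differs, but the step size roughly doubles every 6 QP values (Qstep = 2^((QP-4)/6)).

```python
# Quantization divides by the step and rounds; inverse quantization
# multiplies back, so the round trip is lossy, and the loss grows with
# the step size (i.e., with QP).

def q_step(qp):
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeff, qp):
    return round(coeff / q_step(qp))

def dequantize(level, qp):
    return level * q_step(qp)

coeff = 100.0
small_qp_err = abs(coeff - dequantize(quantize(coeff, 10), 10))
large_qp_err = abs(coeff - dequantize(quantize(coeff, 40), 40))
# small_qp_err <= large_qp_err: a larger step means coarser quantization.
```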
- the inverse quantization unit 110 is used to apply the inverse quantization of the quantization unit 108 to the quantized coefficients to obtain the dequantized coefficients 111, for example, applying, based on or using the same quantization step size as the quantization unit 108, the inverse of the quantization scheme applied by the quantization unit 108.
- the dequantized coefficients 111 may also be referred to as the dequantized residual coefficients 111, which correspond to the transform coefficients 107, although the loss due to quantization means they are usually not identical to the transform coefficients.
- the inverse transform processing unit 112 is used to apply the inverse transform of the transform applied by the transform processing unit 106, for example, an inverse discrete cosine transform (DCT) or an inverse discrete sine transform (DST), to obtain the inverse transform block 113 in the sample domain.
- the inverse transform block 113 may also be referred to as an inverse transform dequantized block 113 or an inverse transform residual block 113.
- the reconstruction unit 114 (e.g., the summer 114) is used to add the inverse transform block 113 (i.e., the reconstructed residual block 113) to the prediction block 165 to obtain the reconstructed block 115 in the sample domain, for example, by adding the sample values of the reconstructed residual block 113 and the sample values of the prediction block 165.
- a buffer unit 116 such as a line buffer 116 is used to buffer or store the reconstructed block 115 and corresponding sample values for, for example, intra prediction.
- the encoder may be used to use the unfiltered reconstructed blocks and / or corresponding sample values stored in the buffer unit 116 for any type of estimation and / or prediction, such as intra prediction.
- an embodiment of the encoder 100 may be configured such that the buffer unit 116 is used not only for storing the reconstructed block 115 for intra prediction 154, but also for the loop filter unit 120 (not shown in FIG. 2), and/or such that, for example, the buffer unit 116 and the decoded picture buffer unit 130 form one buffer.
- Other embodiments may use the filtered block 121 and/or blocks or samples from the decoded picture buffer 130 (neither shown in FIG. 2) as an input or basis for intra prediction 154.
- the loop filter unit 120 (or simply "loop filter" 120) is used to filter the reconstructed block 115 to obtain the filtered block 121, so as to smooth pixel transitions or improve video quality.
- the loop filter unit 120 is intended to represent one or more loop filters, such as deblocking filters, sample-adaptive offset (SAO) filters, or other filters, such as bilateral filters, Adaptive loop filter (adaptive loop filter, ALF), or sharpening or smoothing filter, or collaborative filter.
- although the loop filter unit 120 is shown as an in-loop filter in FIG. 2, in other configurations, the loop filter unit 120 may be implemented as a post-loop filter.
- the filtered block 121 may also be referred to as the filtered reconstructed block 121.
- the decoded picture buffer 130 may store the reconstructed coding block after the loop filter unit 120 performs a filtering operation on the reconstructed coding block.
- Embodiments of the encoder 100 may be used to output loop filter parameters (e.g., sample adaptive offset information), for example, output directly or output after entropy encoding by the entropy encoding unit 170 or any other entropy encoding unit, so that, for example, the decoder 200 can receive and apply the same loop filter parameters for decoding.
- the decoded picture buffer (DPB) 130 may be a reference picture memory for storing reference picture data for the video encoder 100 to encode video data.
- DPB 130 can be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM) (including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), and resistive RAM (RRAM)) or other types of memory devices.
- the DPB 130 and the buffer 116 may be provided by the same memory device or separate memory devices.
- a decoded picture buffer (DPB) 130 is used to store the filtered block 121.
- the decoded picture buffer 130 may be further used to store other previously filtered blocks of the same current picture or of different pictures such as previously reconstructed pictures, for example, the previously reconstructed and filtered block 121, and may provide complete previously reconstructed, i.e., decoded pictures (and corresponding reference blocks and samples) and/or a partially reconstructed current picture (and corresponding reference blocks and samples), for example for inter prediction.
- a decoded picture buffer (DPB) 130 is used to store the reconstructed block 115 if the reconstructed block 115 is reconstructed without in-loop filtering.
- the prediction processing unit 160, also known as the block prediction processing unit 160, is used to receive or acquire the block 103 (the current picture block 103 of the current picture 101) and reconstructed picture data, such as reference samples of the same (current) picture from the buffer 116 and/or reference picture data 231 of one or more previously decoded pictures from the decoded picture buffer 130, and to process such data for prediction, that is, to provide a prediction block 165, which may be an inter prediction block 145 or an intra prediction block 155.
- the inter prediction unit 145 may include a candidate interpolation filter set 151 and a filter selection unit 152, and the candidate interpolation filter set 151 may include multiple kinds of interpolation filters, for example, a discrete cosine transform-based interpolation filter (DCT-based interpolation filter, DCTIF) and an invertibility-based interpolation filter (also referred to herein as InvIF).
- InvIF refers to the interpolation filter obtained through the interpolation filter training method described in FIG. 6A or FIG. 6C of the present application.
- the filter selection unit 152 is used to realize, alone or in combination with other units (such as the transform processing unit 106, the quantization unit 108, the inverse transform processing unit 112, the reconstruction unit 114, the loop filter unit 120, etc.), the selection of an interpolation filter (for example, DCTIF or InvIF) from the candidate interpolation filter set 151, and/or to enable the entropy coding unit to entropy encode the indication information of the selected interpolation filter (also referred to herein as the target interpolation filter).
- the candidate interpolation filter set 151 may include multiple types of interpolation filters, which may all be interpolation filters provided in the prior art, or may include an interpolation filter obtained by the interpolation filter training method shown in FIG. 6A or FIG. 6C provided by the embodiments of the present application.
- the candidate interpolation filter set 151 may be used in the process of motion estimation. In another embodiment of the present application, the candidate interpolation filter set 151 may also be used in other scenarios that require interpolation operations.
- the encoder 100 may further include a training unit (not shown in FIG. 2) for training the interpolation filter.
- the training unit may be provided inside or outside the inter prediction module 145. It can be understood that the training unit may be provided in the inter prediction unit 145, or may be provided at another position in the encoder 100 and coupled with one or more interpolation filters in the inter prediction unit to implement the training of the interpolation filter, the updating of the filter parameters of the interpolation filter, and so on. It can also be understood that the training unit may be located outside the encoder or in another device (a device that does not include the encoder 100), in which case the encoder may configure the interpolation filter by receiving the filter parameters.
- the mode selection unit 162 may be used to select a prediction mode (eg, intra or inter prediction mode) and / or the corresponding prediction block 145 or 155 used as the prediction block 165 to calculate the residual block 105 and reconstruct the reconstructed block 115.
- An embodiment of the mode selection unit 162 may be used to select a prediction mode (for example, from those prediction modes supported by the prediction processing unit 160) that provides the best match or the minimum residual (minimum residual means better compression in transmission or storage), or that provides minimum signaling overhead (minimum signaling overhead means better compression in transmission or storage), or that considers or balances both at the same time.
- the mode selection unit 162 may be used to determine the prediction mode based on rate distortion optimization (RDO), that is, to select the prediction mode that provides the minimum rate distortion cost, or to select a prediction mode whose related rate distortion at least meets the prediction mode selection criteria.
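Rate-distortion optimized selection can be sketched as minimizing the Lagrangian cost J = D + λ·R, where D is a distortion measure, R the bit cost, and λ the Lagrange multiplier. The candidate names and numbers below are hypothetical and purely illustrative.

```python
# Minimal sketch of rate-distortion optimized mode selection.

def rd_cost(distortion, rate_bits, lam):
    return distortion + lam * rate_bits

def select_mode(candidates, lam):
    """candidates: list of (mode_name, distortion, rate_bits) tuples."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))[0]

candidates = [
    ("intra_DC", 400.0, 20),   # few bits, higher distortion: J = 500
    ("inter_MV", 120.0, 55),   # more bits, better prediction: J = 395
    ("skip",     900.0,  2),   # almost free, poor match:      J = 910
]
best = select_mode(candidates, lam=5.0)   # -> "inter_MV"
```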
- In the following, the prediction process (for example, performed by the prediction processing unit 160) and the mode selection (for example, performed by the mode selection unit 162) of an example of the encoder 100 will be explained in more detail. As described above, the encoder 100 is used to determine or select the best or optimal prediction mode from a (predetermined) set of prediction modes.
- the prediction mode set may include, for example, intra prediction modes and / or inter prediction modes.
- the set of intra prediction modes may include 35 different intra prediction modes, for example, non-directional modes such as the DC (or mean) mode and the planar mode, or directional modes as defined in H.265, or may include 67 different intra prediction modes, for example, non-directional modes such as the DC (or mean) mode and the planar mode, or directional modes as defined in the developing H.266.
- the set of inter prediction modes depends on the available reference pictures (i.e., for example, the aforementioned at least partially decoded pictures stored in the DPB 130) and other inter prediction parameters, for example, on whether the entire reference picture or only a part of the reference picture, such as a search window area around the area of the current block, is used to search for the best matching reference block, and/or, for example, on whether sub-pixel interpolation such as half-pel and/or quarter-pel interpolation is applied.
- skip mode and / or direct mode can also be applied.
- the prediction processing unit 160 may be further used to split the block 103 into smaller block partitions or sub-blocks, for example, by iteratively using quad-tree (QT) partitioning, binary-tree (BT) partitioning, or triple-tree (TT) partitioning, or any combination thereof, and to perform prediction for each of the block partitions or sub-blocks, where the mode selection includes, for example, selecting the tree structure of the partitioned block 103 and selecting the prediction mode applied to each of the block partitions or sub-blocks.
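The iterative quad-tree partitioning mentioned above can be sketched as a simple recursion: a block is either kept whole or split into four equal sub-blocks, down to a minimum size. The split decision is supplied as a callback here; a real encoder would base it on rate-distortion cost, and this sketch ignores BT/TT splits.

```python
# Hedged sketch of recursive quad-tree (QT) partitioning.

def qt_partition(x, y, size, min_size, should_split):
    if size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]          # leaf block
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += qt_partition(x + dx, y + dy, half, min_size, should_split)
    return leaves

# Split every block larger than 8: a 16x16 block yields four 8x8 leaves.
leaves = qt_partition(0, 0, 16, 8, lambda x, y, s: s > 8)
```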
- the inter prediction unit 144 may include a motion estimation (ME) unit (not shown in FIG. 2) and a motion compensation (MC) unit (not shown in FIG. 2).
- the motion estimation unit is used to receive or acquire the picture block 103 (the current picture block 103 of the current picture 101) and the decoded picture 231, or at least one or more previously reconstructed blocks, for example, reconstructed blocks of one or more other/different previously decoded pictures 231, to perform motion estimation.
- the video sequence may include the current picture and the previously decoded picture 231, or in other words, the current picture and the previously decoded picture 231 may be part of the picture sequence forming the video sequence, or form the picture sequence.
- the encoder 100 may be used to select a reference block from multiple reference blocks of the same or different pictures among multiple other pictures, and to provide the reference picture (or reference picture index) and/or the offset (spatial offset) between the position (X, Y coordinates) of the reference block and the position of the current block to the motion estimation unit (not shown in FIG. 2) as inter prediction parameters. This offset is also known as a motion vector (MV).
- the motion estimation unit may include a candidate interpolation filter set, and the motion estimation unit is further used to select a target interpolation filter for the current encoded image block from the candidate interpolation filter set according to a rate-distortion cost criterion.
- the motion estimation unit is further used to: perform sub-pixel interpolation, through each interpolation filter in the candidate interpolation filter set, on the whole-pixel reference image block that best matches the current encoded image block, to obtain N sub-pixel reference image blocks; and then determine, from among the whole-pixel reference image block and the N sub-pixel reference image blocks, the prediction block that best matches the current encoded image block, where the interpolation filter in the candidate interpolation filter set used to obtain the selected prediction block is the target interpolation filter.
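The selection step above can be sketched as follows: each candidate filter produces a sub-pel reference block, the block (whole-pel or sub-pel) with the lowest matching cost wins, and the filter that produced it becomes the target interpolation filter. SAD stands in for the rate-distortion cost here, and the filter names and sample values are hypothetical.

```python
# Illustrative sketch of target interpolation filter selection.

def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def pick_target_filter(current, whole_pel, subpel_blocks):
    """subpel_blocks: dict mapping filter name -> interpolated block."""
    best_name, best_cost = "whole_pel", sad(current, whole_pel)
    for name, block in subpel_blocks.items():
        cost = sad(current, block)
        if cost < best_cost:
            best_name, best_cost = name, cost
    return best_name

current = [10, 12, 14, 16]
whole_pel = [9, 13, 13, 17]            # SAD = 4
subpel = {
    "DCTIF": [10, 12, 15, 16],         # SAD = 1
    "InvIF": [10, 12, 14, 16],         # SAD = 0 -> selected
}
target = pick_target_filter(current, whole_pel, subpel)
```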
- the motion compensation unit is used to acquire, for example, receive inter prediction parameters, and perform inter prediction based on or using inter prediction parameters to obtain inter prediction blocks 145.
- the motion compensation performed by the motion compensation unit may include extracting or generating a prediction block based on a motion / block vector determined by motion estimation (possibly performing interpolation of sub-pixel accuracy). Interpolation filtering can generate additional pixel samples from known pixel samples, potentially increasing the number of candidate prediction blocks that can be used to encode picture blocks.
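The generation of additional (sub-pixel) samples from known integer samples can be made concrete with the 8-tap half-pel luma filter of HEVC's DCT-based interpolation filter, whose coefficients are {-1, 4, -11, 40, 40, -11, 4, -1} with a normalization of 64. Border handling (sample replication) is simplified here for illustration.

```python
# Sketch of sub-pixel sample generation by FIR interpolation filtering.

HALF_PEL_TAPS = [-1, 4, -11, 40, 40, -11, 4, -1]

def half_pel_row(samples):
    """Interpolate one half-pel sample between each pair of integer samples."""
    out = []
    n = len(samples)
    for i in range(n - 1):
        acc = 0
        for t, tap in enumerate(HALF_PEL_TAPS):
            idx = min(max(i + t - 3, 0), n - 1)   # replicate-pad the borders
            acc += tap * samples[idx]
        out.append((acc + 32) >> 6)               # round and divide by 64
    return out

row = [100] * 8
half = half_pel_row(row)   # flat input stays flat: seven half-pel samples
```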
- the motion compensation unit 146 may locate the prediction block pointed to by the motion vector in a reference picture list. Motion compensation unit 146 may also generate syntax elements associated with blocks and video slices for use by video decoder 200 when decoding picture blocks of video slices.
- the intra prediction unit 154 is used to obtain, for example, a picture block 103 (current picture block) of the same picture and one or more previously reconstructed blocks, such as reconstructed neighboring blocks, for intra estimation.
- the encoder 100 may be used to select an intra prediction mode from multiple (predetermined) intra prediction modes.
- Embodiments of the encoder 100 may be used to select an intra prediction mode based on optimization criteria, for example, based on a minimum residual (eg, an intra prediction mode that provides the prediction block 155 most similar to the current picture block 103) or minimum rate distortion.
- the intra prediction unit 154 is further used to determine the intra prediction block 155 based on the intra prediction parameters of the intra prediction mode as selected. In any case, after selecting the intra-prediction mode for the block, the intra-prediction unit 154 is also used to provide the intra-prediction parameters to the entropy encoding unit 170, that is, to provide an indication of the selected intra-prediction mode for the block Information. In one example, the intra prediction unit 154 may be used to perform any combination of intra prediction techniques described below.
- the entropy encoding unit 170 is used to apply an entropy encoding algorithm or scheme (for example, a variable length coding (VLC) scheme, a context adaptive VLC (CAVLC) scheme, an arithmetic coding scheme, context adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or another entropy encoding method or technique) to one or all of the quantized residual coefficients 109, inter prediction parameters, intra prediction parameters, and/or loop filter parameters (or to apply none of them), to obtain encoded picture data 171 that can be output through the output 172, for example in the form of an encoded bitstream 171.
- the encoded bitstream may be transmitted to the video decoder 200 or archived for later transmission or retrieval by the video decoder 200.
- the entropy encoding unit 170 may also be used to entropy encode other syntax elements of the current video slice being encoded.
- the entropy encoding unit 170 is further used to entropy encode the indication information of the target interpolation filter and / or the filter parameters of the interpolation filter.
- the training unit is configured to train, based on sample images, the machine-learning-based interpolation filter included in the inter prediction unit 144, so as to determine or optimize the filter parameters of the interpolation filter.
- video encoder 100 may be used to encode video streams.
- for certain blocks or frames, the non-transform based encoder 100 can quantize the residual signal directly, without the transform processing unit 106.
- the encoder 100 may have a quantization unit 108 and an inverse quantization unit 110 combined into a single unit.
- FIG. 3 shows an exemplary video decoder 200 for implementing the technology of the present application, that is, a video image decoding method.
- the video decoder 200 is used to receive encoded picture data (eg, encoded bitstream) 171, for example, encoded by the encoder 100, to obtain the decoded picture 131.
- video decoder 200 receives video data from video encoder 100, such as an encoded video bitstream (also referred to as a codestream) and associated syntax elements that represent picture blocks of the encoded video slice.
- the decoder 200 includes an entropy decoding unit 204, an inverse quantization unit 210, an inverse transform processing unit 212, a reconstruction unit 214 (such as a summer 214), a buffer 216, a loop filter 220, a decoded picture buffer 230, and a prediction processing unit 260.
- the prediction processing unit 260 may include an inter prediction unit 244, an intra prediction unit 254, and a mode selection unit 262.
- video decoder 200 may perform a decoding pass that is generally reciprocal to the encoding pass described with reference to video encoder 100 of FIG. 2.
- the entropy decoding unit 204 is used to perform entropy decoding on the encoded picture data (for example, the code stream or the current decoded image block) 171 to obtain, for example, quantized coefficients 209 and/or decoded encoding parameters (also called encoding information, not shown in FIG. 3), for example, any one or all of inter prediction parameters, intra prediction parameters, loop filter parameters, indication information of the target filter, filter parameters, and/or information indicating the inter prediction mode (decoded).
- the entropy decoding unit 204 is further used to forward syntax elements such as inter prediction parameters, intra prediction parameters, indication information of the target filter, filter parameters, and / or information indicating inter prediction modes to the prediction processing unit 260.
- the video decoder 200 may receive syntax elements at the video slice level and / or the video block level.
- the inverse quantization unit 210 may be functionally the same as the inverse quantization unit 110, the inverse transform processing unit 212 may be functionally the same as the inverse transform processing unit 112, the reconstruction unit 214 may be functionally the same as the reconstruction unit 114, the buffer 216 may be functionally the same as the buffer 116, the loop filter 220 may be functionally the same as the loop filter 120, and the decoded picture buffer 230 may be functionally the same as the decoded picture buffer 130.
- the prediction processing unit 260 may include an inter prediction unit 244 and an intra prediction unit 254, where the inter prediction unit 244 may be functionally similar to the inter prediction unit 144, and the intra prediction unit 254 may be functionally similar to the intra prediction unit 154.
- the prediction processing unit 260 is generally used to perform block prediction and/or obtain the prediction block 265 from the encoded data 171, and to receive or obtain (explicitly or implicitly) prediction-related parameters and/or information about the selected prediction mode, for example from the entropy decoding unit 204.
- the intra prediction unit 254 of the prediction processing unit 260 is used to generate the prediction block 265 for the picture block of the current video slice based on the signaled intra prediction mode and data from previously decoded blocks of the current frame or picture.
- the inter prediction unit 244 (e.g., motion compensation unit) is used to generate the prediction block 265 for the video block of the current video slice based on the motion vectors and other syntax elements received from the entropy decoding unit 204.
- a prediction block may be generated from a reference picture in a reference picture list.
- the video decoder 200 may construct the reference frame lists: list 0 and list 1 based on the reference pictures stored in the DPB 230 using default construction techniques.
- the prediction processing unit 260 is used to determine, by parsing, the motion vector and syntax elements such as the indication information of the target interpolation filter used to obtain the prediction block, the filter parameters, or the information indicating the inter prediction mode, and to perform sub-pixel interpolation accordingly.
- the prediction processing unit 260 uses some of the received syntax elements to determine the prediction mode (e.g., intra or inter prediction) used to encode the video blocks of the video slice, the inter prediction slice type (e.g., B slice, P slice, or GPB slice), construction information for one or more of the reference picture lists of the slice, the motion vector of each inter-coded video block of the slice, the inter prediction status of each inter-coded video block of the slice, the indication information of the target filter used to perform sub-pixel interpolation to obtain the prediction block, and other information, in order to decode the video blocks of the current video slice.
- the prediction processing unit 260 may include a candidate interpolation filter set 251 and a filter selection unit 252.
- the candidate interpolation filter set 251 includes one or more interpolation filters, for example, DCTIF and InvIF.
- the filter selection unit 252 is used to determine, if the motion information points to a fractional pixel position, the target interpolation filter indicated by the parsed indication information of the target interpolation filter from the candidate interpolation filter set 251, and to perform sub-pixel interpolation with the indicated target interpolation filter to obtain the prediction block.
- the inverse quantization unit 210 may be used to inverse quantize (i.e., dequantize) the quantized transform coefficients provided in the bitstream and decoded by the entropy decoding unit 204.
- the inverse quantization process may include using the quantization parameters calculated by the video encoder 100 for each video block in the video slice to determine the degree of quantization that should be applied and also determine the degree of inverse quantization that should be applied.
- the inverse transform processing unit 212 is used to apply an inverse transform (for example, inverse DCT, inverse integer transform, or conceptually similar inverse transform process) to the transform coefficients so as to generate a residual block in the pixel domain.
- the reconstruction unit 214 (e.g., summer 214) is used to add the inverse transform block 213 (i.e., the reconstructed residual block 213) to the prediction block 265 to obtain the reconstructed block 215 in the sample domain, for example by adding the sample values of the reconstructed residual block 213 and the sample values of the prediction block 265.
- the loop filter unit 220 (during the encoding loop or after the encoding loop) is used to filter the reconstructed block 215 to obtain the filtered block 221, so as to smooth pixel transitions or otherwise improve video quality.
- the loop filter unit 220 may be used to perform any combination of filtering techniques described below.
- the loop filter unit 220 is intended to represent one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or other filters, such as a bilateral filter, an adaptive loop filter (ALF), a sharpening or smoothing filter, or a collaborative filter.
- although the loop filter unit 220 is shown in FIG. 3 as an in-loop filter, in other configurations the loop filter unit 220 may be implemented as a post-loop filter.
- the decoded video block 221 in a given frame or picture is then stored in a decoded picture buffer 230 that stores reference pictures for subsequent motion compensation.
- the decoder 200 is used, for example, to output the decoded picture 231 through the output 232 for presentation to the user or for the user to view.
- video decoder 200 may be used to decode the compressed bitstream.
- the decoder 200 may generate an output video stream without the loop filter unit 220.
- the non-transform based decoder 200 may directly inversely quantize the residual signal without the inverse transform processing unit 212 for certain blocks or frames.
- the video decoder 200 may have an inverse quantization unit 210 and an inverse transform processing unit 212 combined into a single unit.
- although FIGS. 2 and 3 show specific encoders 100 and decoders 200, the encoders 100 and decoders 200 may also include various other functional units, modules, or components that are not depicted. Furthermore, the present application is not limited to the specific components and/or the manner in which the various components shown in FIGS. 2 and 3 are arranged.
- the various units of the system described herein may be implemented in software, firmware, and / or hardware, and / or any combination thereof.
- each digital image can be regarded as a two-dimensional array of m rows and n columns, containing m * n samples (the position of each sample is called a sample position, and the value of each sample is called a sample value).
- m * n is called the resolution of the image, that is, the number of samples contained in the image.
- For example, a 2K image has a resolution of 1920 * 1080, and a 4K video image has a resolution of 3840 * 2160.
- a sample is also called a pixel, and the sample value is also called a pixel value; each pixel therefore likewise carries two pieces of information, the pixel position and the pixel value.
- Digital video coding aims to remove redundant information in digital video, making digital video more conducive to storage and transmission in the network.
- the redundancy of digital video includes spatial redundancy, temporal redundancy, statistical redundancy, and visual redundancy.
- the current block-based hybrid coding framework introduces inter-frame prediction technology: the current frame to be encoded is predicted from already-encoded frames, thereby greatly saving the encoding bit rate.
- the current picture to be encoded is first divided into several non-overlapping coding units (CU), and each CU has its own coding mode. Each CU can be further divided into several prediction units (PU), and each PU has its own prediction mode, such as a prediction direction and a motion vector (MV). At the encoding end, each PU searches for a matching block in the reference frame, and the location of the matching block is identified using the MV. Because the sample values at some positions in the image (the fractional pixel positions) are not captured during digital sampling, the current block may not find a perfectly matching block in the reference frame; in this case, sub-pixel interpolation techniques are used to interpolate the pixel values at the fractional pixel positions.
- Figure 4 shows a schematic diagram of the position of whole pixels and sub-pixels.
- positions indicated by capital letters represent integer pixel positions, and the remaining lowercase letters represent different fractional pixel positions.
- the pixel values of different fractional pixel positions are generated by interpolation using different pixels through different interpolation filters, and then used as a reference block.
- assume that the motion vector accuracy is one-quarter precision.
- the whole-pixel image block that best matches the current image block to be encoded is searched first; then a 1/2-precision interpolation filter is used to generate four 1/2-pixel sub-pixel image blocks.
- the current image block to be encoded is matched against the four 1/2-pixel sub-pixel image blocks and the whole-pixel image block to obtain the best-matching 1/2-precision motion vector.
- the pixel block pointed to by the above best-matching 1/2-precision motion vector is then interpolated using a 1/4-precision interpolation filter to obtain eight 1/4-precision sub-pixel blocks.
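As an illustration, the hierarchical search described above (whole-pixel search first, then half-pel refinement around the winner) can be sketched in one dimension. The bilinear half-pel filter and the SAD matching cost below are simplifying assumptions for illustration only, not the codec's actual filters or cost.

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences, a common block-matching cost.
    return float(np.abs(a - b).sum())

def half_pel(ref):
    # Toy 2-tap (bilinear) half-pel filter; half_pel(ref)[p] sits at p + 0.5.
    return 0.5 * (ref[:-1] + ref[1:])

def subpel_search_1d(cur, ref):
    """Return the best motion offset in pel units (may end in .5)."""
    n = len(cur)
    # Step 1: best whole-pixel position.
    best = min(range(len(ref) - n + 1), key=lambda p: sad(cur, ref[p:p + n]))
    best_mv, best_cost = float(best), sad(cur, ref[best:best + n])
    # Step 2: try the two half-pel neighbours of the whole-pixel winner.
    half = half_pel(ref)
    for p in (best - 1, best):
        if 0 <= p and p + n <= len(half):
            c = sad(cur, half[p:p + n])
            if c < best_cost:
                best_mv, best_cost = p + 0.5, c
    return best_mv
```

A quarter-pel stage would repeat step 2 around the half-pel winner using a 1/4-precision filter.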
- the reason for the existence of whole pixels and sub-pixels is due to the discrete nature of digital sampling.
- the dots represent the sampled samples
- the dotted line indicates that the original analog signal is s (t)
- the solid line indicates the signal recovered by interpolation (referred to as the interpolated signal).
- Interpolation is the inverse process of digital sampling. The purpose of interpolation is to use discrete sample values to recover the original continuous signal as completely as possible, and to obtain sample values at specific target positions α (fractional pixel positions).
- the interpolation process is described as follows:
- s i represents the sample value at the integer pixel position i
- i is the index of the integer pixel position
- i is an integer
- f_α is the interpolation filter corresponding to the fractional pixel position α
- M and N are positive integers
- −M ≤ i ≤ N
- u_0 = f_α(s′_(−α−M), s′_(−α−M+1), …, s′_(−α), s′_(−α+1), …, s′_(−α+N)) (2)
- u_0 = f_α(s_(α+M), s_(α+M−1), …, s_(α), s_(α−1), …, s_(α−N)) (3)
- the interpolation filter in part (b) of Figure 5 can also recover integer pixel positions from fractional pixel positions, namely as in equation (3) above.
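As a concrete instance of a fixed-coefficient interpolation filter, the sketch below applies HEVC's 8-tap half-pel luma filter as a 1-D convolution. Because this filter is symmetric (it equals its own flip), applying it a second time to the half-pel samples recovers integer-position samples; the recovery is exact here only because the test signal is linear, so this is an illustrative sketch rather than a general invertibility proof.

```python
import numpy as np

# HEVC 8-tap half-pel luma interpolation filter, normalised by 64.
HALF_PEL = np.array([-1, 4, -11, 40, 40, -11, 4, -1]) / 64.0

def interp_half(s):
    """Half-pel samples: output j lies midway between s[j+3] and s[j+4]."""
    return np.convolve(s, HALF_PEL, mode="valid")

s = np.arange(20.0)   # integer-position samples s_i = i (a linear test signal)
h = interp_half(s)    # h[j] = j + 3.5: exact, since the filter preserves linear signals
r = interp_half(h)    # second pass with the (self-flipped) filter: r[j] = j + 7
```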
- the embodiments of the present invention provide two training methods for interpolation filters.
- the interpolation filter training method can be run in an encoder or a computing device.
- the computing device may include, but is not limited to, a computer, a cloud computing device, a server, a terminal device, and so on.
- interpolation filters with fixed coefficients are generally used, for example, bilinear interpolation filters, bicubic interpolation filters, and the like.
- interpolation filters with fixed coefficients are commonly used in video coding standards.
- In H.264/AVC, a 6-tap finite impulse response filter is used to generate half-pixel samples, and simple bilinear interpolation is used to generate quarter-pixel samples.
- the interpolation filter in HEVC has made many improvements compared to H.264 / AVC.
- An 8-tap filter is used to generate half-pixel samples, while quarter-pixel samples use a 7-tap interpolation filter.
- the fixed coefficient interpolation filter is easy to implement and has low complexity, so it is widely used. However, due to the diversity and non-stationarity of the video signal, the performance of the fixed coefficient filter is very limited.
- a typical adaptive interpolation filter estimates the filter coefficients at the encoding end according to the error of motion compensation prediction, and then encodes the filter coefficients into the code stream.
- a separable adaptive interpolation filter is proposed, which can achieve a significant reduction in complexity while basically maintaining the coding performance.
- some adaptive interpolation filters are designed under the assumption that the image is isotropic.
- although the adaptive interpolation filter is content-adaptive, it is still based on linear interpolation filtering.
- encoding the filter coefficients still requires some bits.
- the embodiment of the present application is based on the above technical problem, and proposes a training method of the interpolation filter. Please refer to the schematic flowchart of an interpolation filter training method provided in the embodiment of the present application shown in FIG. 6A, and the schematic explanatory diagram of the training process shown in FIG. 6B.
- the method includes but is not limited to some or all of the following steps:
- S612 Interpolate the pixels of the sample image at integer pixel positions through the first interpolation filter to obtain the first sub-pixel image of the sample image at the first sub-pixel position.
- S614 Input the sample image into the second interpolation filter to obtain a second sub-pixel image.
- S616 Determine the filter parameters of the second interpolation filter by minimizing a first function representing the difference between the first sub-pixel image and the second sub-pixel image.
- steps S614 and S616 are an iterative process in the training process.
- the sample image may be the original image X or the image X ′ after the original image X is encoded and compressed by the encoder.
- the sample image input to the first interpolation filter is the original image
- the image input to the second interpolation filter may be an image after the sample image is encoded and compressed by the encoder.
- the first interpolation filter is any interpolation filter in the prior art that can interpolate and generate pixel values at the first fractional pixel position.
- the first interpolation filter may be an interpolation filter with fixed coefficients, an adaptive interpolation filter, or another type of interpolation filter, which is not limited in the embodiments of the present application.
- the first fractional pixel position may be any fractional pixel position. It can be seen that in the embodiment of the present application, the first interpolation pixel image is used as the label data to train the second interpolation filter, so that the obtained second interpolation filter can be directly used for interpolation to obtain the pixel value of the first fractional pixel position.
- the second interpolation filter may be a support vector machine (SVM), a neural network (NN), a convolutional neural network (CNN), or take other forms, which is not limited in the embodiments of the present application.
- the first function may be a function for representing the difference between the first fractional pixel image and the second fractional pixel image.
- the first function may be a loss function, an objective function, a cost function, etc., which is not limited in the embodiments of the present application.
- the first function is a regularization loss function, and the first function can be expressed as:
- L_reg^α = ‖ TIF_α(X) − X_f,α ‖, with X_f,α = F(X′)
- where α is the index of the fractional pixel position; L_reg^α is the first function corresponding to the fractional pixel position α; X is the sample image; X′ is the image after the sample image is compressed by the encoder; TIF_α is the first interpolation filter corresponding to the fractional pixel position α; F is the second interpolation filter; TIF_α(X) is the first sub-pixel image corresponding to the fractional pixel position α; X_f,α = F(X′) is the second sub-pixel image; and ‖·‖ is the norm symbol, computed over the pixels of its argument x, where i is the index of a pixel in x.
- the first function may also be specifically expressed in other ways, for example, a log loss function, a square loss function, an exponential loss function, or another form of loss function, which is not limited in this embodiment of the present application.
- the sample image may be one image or multiple images.
- a sample image may be a frame image, a coding unit (CU), or a prediction unit (PU), which is not limited in the present invention.
- the filter parameters of the second interpolation filter can be obtained by minimizing the loss function, and the training process can be expressed as:
- θ* = argmin_θ Σ_{k=1}^{n} L_reg^(k)(θ)
- where n is the total number of sample images and is a positive integer; k is the index of a sample image, k is a positive integer, k ≤ n; θ* is the optimal filter parameter; θ is the filter parameter; and L_reg^(k) is the first function corresponding to sample image k.
- n may be equal to 1 or other positive integer.
- the filter parameters of the second interpolation filter may be solved by the least squares method, linear regression, gradient descent, or other methods.
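As a minimal sketch of this training step using the least squares method, a learnable 4-tap linear filter (standing in for the second interpolation filter F) is fitted so that its output on a degraded signal X′ matches the label data TIF(X). The 1-D signals, the bilinear stand-in for the first interpolation filter, the noise model, and the tap count are all illustrative assumptions, not the patent's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal(512)                    # sample "image" (1-D toy signal)
X_prime = X + 0.05 * rng.standard_normal(512)   # stand-in for the encoded/compressed image

def tif(s):
    # First interpolation filter TIF: bilinear half-pel, used only to make labels.
    return 0.5 * (s[:-1] + s[1:])

labels = tif(X)                                 # first sub-pixel image = label data

TAPS = 4
rows = len(X_prime) - TAPS + 1
A = np.stack([X_prime[i:i + TAPS] for i in range(rows)])  # sliding windows of X'
y = labels[1:1 + rows]   # half-pel target centred inside each 4-tap window

# Minimise ||TIF(X) - F(X')||^2 over the filter taps theta (the first function).
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
loss = float(np.mean((A @ theta - y) ** 2))
```

For a CNN-based second interpolation filter, the same first-function objective would be minimized by gradient descent rather than by a closed-form solve.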
- for an interpolation filter obtained by machine learning, no natural label data exists, because the true sample values at fractional pixel positions are never captured by sampling.
- In the prior art, label data is obtained by a "blur-sampling" method: the sample image is blurred with a low-pass filter so that the correlation between adjacent pixels increases, and the blurred image is then sampled into several sub-images according to different phases, with phase 0 regarded as the whole pixels and the other phases as sub-pixels at different positions. However, the label data obtained by this method is designed manually, so it is not optimal.
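The prior-art blur-sampling label construction described above can be sketched as follows; the 3-tap low-pass kernel and the 2x2 phase split are illustrative choices (real designs may use other kernels and phase grids).

```python
import numpy as np

def blur_sample_labels(img):
    """Blur with a separable low-pass filter, then split the result into
    2x2 phase sub-images: phase (0, 0) plays the integer pixels, the
    other phases the sub-pixel labels."""
    k = (0.25, 0.5, 0.25)   # simple 3-tap low-pass kernel (illustrative)
    b = sum(w * np.roll(img, s, axis=0) for w, s in zip(k, (-1, 0, 1)))
    b = sum(w * np.roll(b, s, axis=1) for w, s in zip(k, (-1, 0, 1)))
    return {(dy, dx): b[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}

phases = blur_sample_labels(np.arange(64.0).reshape(8, 8))
```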
- the interpolation filter training method performs sub-pixel interpolation on the sample image through a traditional interpolation filter to obtain a first sub-pixel image, uses it as label data, and trains the interpolation filter (the second interpolation filter) under its supervision; the resulting interpolation filter can improve the performance of encoding and decoding.
- interpolation filters corresponding to multiple fractional pixel positions can be jointly trained.
- the specific implementation method includes but is not limited to the following steps:
- S1 Interpolate the pixels of the sample image at integer pixel positions through the first interpolation filter corresponding to the fractional pixel position α to obtain the first sub-pixel image of the sample image at the fractional pixel position α, where the fractional pixel position α is any one of Q fractional pixel positions, and Q is a positive integer.
- Q may be the total number of fractional pixel positions, or other numerical values.
- Please refer to the schematic flowchart of another interpolation filter training method provided in the embodiment of the present application shown in FIG. 6C, and the schematic explanatory diagram of the training process shown in FIG. 6D.
- the method includes but is not limited to some or all of the following steps:
- S622 Perform sub-pixel interpolation on the sample image through the first interpolation filter to obtain the first sub-pixel image of the sample image at the first sub-pixel position.
- S624 Input the sample image into the second interpolation filter to obtain a second sub-pixel image.
- S626 The second sub-pixel image is subjected to the flip operation and then input into the third interpolation filter to obtain the first image, and the first image is subjected to the inverse of the flip operation to obtain the second image.
- the second interpolation filter and the third interpolation filter share filter parameters.
- S628 Determine the filter parameter according to a first function representing the difference between the first sub-pixel image and the second sub-pixel image and a second function representing the difference between the sample image and the second image.
- steps S626 and S628 are an iterative process in the training process.
- the sub-pixel image X_f generated by the sub-pixel interpolation undergoes a flip operation T and is then sub-pixel interpolated by the third interpolation filter to obtain the first image; the first image then undergoes the inverse operation T⁻¹ of the flip operation T to obtain the reconstructed image of the sample image, that is, the second image.
- the first image and the second image are both whole-pixel images.
- the flip operation includes horizontal flip, vertical flip and diagonal flip.
- the selection of the type of flip operation can be determined by the following formula:
- y f and x f respectively represent the sub-pixel displacement of the flipped image relative to the second sub-pixel image in the vertical and horizontal directions.
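A sketch of the flip machinery: each flip type is its own inverse (so T T⁻¹ = E), which is what allows the third interpolation filter's output to be mapped back to the original orientation. Since the exact selection formula is not reproduced in this text, the rule below (flip every axis whose fractional displacement is non-zero) is an assumed, plausible reading rather than the patent's formula.

```python
import numpy as np

FLIPS = {
    "none":       lambda a: a,
    "horizontal": lambda a: a[:, ::-1],
    "vertical":   lambda a: a[::-1, :],
    "diagonal":   lambda a: a[::-1, ::-1],   # flip both axes
}

def choose_flip(x_f, y_f):
    """Assumed rule: flip along every axis whose fractional displacement
    (x_f horizontal, y_f vertical) is non-zero."""
    if x_f and y_f:
        return "diagonal"
    if x_f:
        return "horizontal"
    if y_f:
        return "vertical"
    return "none"

def apply_T(img, kind):
    return FLIPS[kind](img)
```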
- the second function may be a function for representing the difference between the sample image and the second image.
- the second function may be a loss function, an objective function, a cost function, etc., which is not limited in the embodiments of the present application.
- the second function can be expressed as:
- L_rec^α = ‖ X − T⁻¹(F(T(X_f,α))) ‖
- where L_rec^α is the second function; X is the sample image; α indicates the first fractional pixel position; TIF_α is the first interpolation filter; F is the second interpolation filter (whose parameters are shared by the third interpolation filter); TIF_α(X) is the first sub-pixel image; X_f,α is the second sub-pixel image; and T is the flip operation, T⁻¹ its inverse, with T T⁻¹ = E, where E is the identity matrix.
- the second function may also be specifically expressed in other ways, for example, as a log loss function, a square loss function, an exponential loss function, or another form of loss function, which is not limited in this embodiment of the present application.
- step S628 may be:
- the filter parameters are determined by minimizing a third function, where the third function is a weighted sum of the first function, used to represent the difference between the first sub-pixel image and the second sub-pixel image, and the second function, used to represent the difference between the sample image and the second image.
- the joint loss function (also called the third function) is defined as follows:
- L^α = L_reg^α + λ·L_rec^α, where λ is the weighting coefficient of the second function
- the filter parameters of the second interpolation filter can be obtained by minimizing the joint loss function, and the training process can be expressed as:
- θ* = argmin_θ Σ_{k=1}^{n} ( L_reg^(k)(θ) + λ·L_rec^(k)(θ) )
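The weighted sum can be made concrete as a small sketch; the L1 norm and the weight lam = 0.5 are illustrative assumptions, and the four image arguments correspond to the images named in the text.

```python
import numpy as np

def l1(a, b):
    # L1 distance between two images, used here as the per-term norm.
    return float(np.abs(np.asarray(a) - np.asarray(b)).sum())

def third_function(first_sub, second_sub, sample_img, second_img, lam=0.5):
    """Joint loss: the first function (label match) plus a weighted
    second function (reconstruction match). lam is an assumed
    hyper-parameter; its value is not fixed by the text."""
    l_reg = l1(first_sub, second_sub)    # first function
    l_rec = l1(sample_img, second_img)   # second function
    return l_reg + lam * l_rec
```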
- Optionally, step S628 may be: the filter parameters are determined by alternately minimizing the first function representing the difference between the first sub-pixel image and the second sub-pixel image and the second function representing the difference between the sample image and the second image.
- the sample image may be one image or multiple images.
- a sample image may be a frame image, a coding unit (CU), or a prediction unit (PU), which is not limited in the present invention.
- the filter parameters of the second interpolation filter can be obtained by minimizing the loss function.
- the training process can be expressed as:
- θ* = argmin_θ Σ_{k=1}^{n} ( L_reg^(k)(θ) + λ·L_rec^(k)(θ) )
- where n is the total number of sample images and is a positive integer; k is the index of a sample image, k is a positive integer, k ≤ n; θ* is the optimal filter parameter; θ is the filter parameter; L_reg^(k) is the first function corresponding to sample image k; and L_rec^(k) is the second function corresponding to sample image k.
- n may be equal to 1 or other positive integer.
- the interpolation filter training method performs sub-pixel interpolation on the sample image through a traditional interpolation filter to obtain a first sub-pixel image, which is used as label data.
- in addition, the second interpolation filter is constrained by supervision with the sample image itself, which improves the accuracy of the sub-pixel interpolation performed by the second interpolation filter.
- This method can be performed by the video encoder 100.
- the method is described as a series of steps or operations. It should be understood that the steps may be performed in various orders and / or simultaneously, and are not limited to the order of execution shown in FIG. 6.
- for a video data stream with multiple video frames, a video encoder performs a process including the following steps: predicting the motion information of the currently encoded image block of the current video frame, and encoding the currently encoded image block based on the inter prediction mode of the currently encoded image block and the motion information of the currently encoded image block.
- S72 Perform inter-frame prediction on the current coded image block to obtain motion information of the current coded image block, where the motion information of the current coded image block points to a fractional pixel position.
- the inter-frame prediction process includes: determining, from the set of candidate interpolation filters, the target interpolation filter for the currently encoded image block.
- S74 Encode the currently encoded image block based on the inter prediction mode of the currently encoded image block and the motion information of the currently encoded image block to obtain the encoded information, and encode the encoded information into the code stream, where the encoded information includes the indication information of the target interpolation filter; the indication information of the target interpolation filter is used to indicate that the reference block at the fractional pixel position corresponding to the currently encoded image block is obtained through sub-pixel interpolation by the target interpolation filter.
- In this solution, the video encoder needs to encode the indication information of the target filter into the code stream, so that the decoding end knows the type of target filter used to obtain the prediction block.
- S82 Perform inter prediction on the current encoded image block to obtain the motion information of the current encoded image block, where the motion information of the current encoded image block points to the position of the fractional pixel.
- the inter prediction process includes: determining, from the set of candidate interpolation filters, the target interpolation filter for the currently encoded image block.
- S84 Encode the currently encoded image block based on the inter prediction mode of the currently encoded image block and the motion information of the currently encoded image block to obtain the encoded information, and encode the encoded information into the code stream, where, if the inter prediction mode of the currently encoded image block is the target inter prediction mode, the coding information does not include the indication information of the target interpolation filter; if the inter prediction mode of the currently encoded image block is a non-target inter prediction mode, the coding information includes the indication information of the target interpolation filter, and the indication information of the target interpolation filter is used to indicate that the currently encoded image block uses the target interpolation filter to perform sub-pixel interpolation.
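The mode-dependent signalling of S84 can be sketched as follows; the mode names and the raw-bit append are hypothetical simplifications, since a real encoder entropy-codes the indication rather than appending plain bits.

```python
def write_coding_info(bitstream, inter_mode, target_mode, filter_indication):
    """Append the filter indication only for non-target inter prediction
    modes; under the target mode the decoder derives the filter itself,
    so no indication is written (mode names here are hypothetical)."""
    if inter_mode != target_mode:
        bitstream.append(filter_indication)
    return bitstream
```

Under the (hypothetical) target mode, no indication bit is spent, which is the bit saving this variant aims at.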
- the video encoder includes a set of candidate interpolation filters.
- the set of candidate interpolation filters may include multiple types of interpolation filters, and each type of interpolation filter may include one or more interpolation filters.
- when the video encoder performs inter prediction on the current encoded image block, it may select one of the interpolation filters to perform sub-pixel interpolation to obtain the prediction block of the current encoded image block.
- the target interpolation filter is the interpolation filter that performs sub-pixel interpolation to obtain the prediction block, or the type of interpolation filter that obtains the prediction block. That is to say, the indication information of the target interpolation filter may indicate the specific interpolation filter that obtains the prediction block, or may indicate the type of interpolation filter that obtains the prediction block.
- the candidate interpolation filter set includes two types of interpolation filters, for example, a first type interpolation filter and a second type interpolation filter.
- the indication information may be "0 "
- the target interpolation filter is the second type of interpolation filter
- the indication information may be" 1 ".
- the first-type interpolation filter or the second-type interpolation filter may include second interpolation filters respectively corresponding to one or more fractional pixel positions, obtained by the interpolation filter training method.
- the determination of the target interpolation filter may include but is not limited to the following two implementation manners.
- the target interpolation filter for the current encoded image block is determined from the set of candidate interpolation filters according to the rate-distortion cost criterion.
- the specific implementation is as follows: calculate the rate-distortion cost of the sub-pixel image block generated by each type of interpolation filter, and determine the interpolation filter with the lowest rate-distortion cost as the target interpolation filter finally used for sub-pixel interpolation to obtain the prediction block corresponding to the current encoded image block.
- the video encoder can search for an integer pixel reference block that optimally matches the current encoded image block; then, through the first type of interpolation filter (or any other type of interpolation filter in the candidate interpolation filter set), sub-pixel interpolation is performed on the integer pixel reference image block to obtain P sub-pixel reference image blocks, the prediction block is determined, the motion information of the prediction block is obtained and the residual is calculated, and encoding information such as the residual and the motion information is encoded into the code stream, and image block reconstruction is performed according to the code stream.
- the mean square error between the reconstructed image block and the original current image block is used as the distortion, and the size of the resulting code stream is used as the code rate; the distortion and the code rate are combined to obtain the rate-distortion cost of the first-type interpolation filter.
- the calculation of the rate-distortion cost is known in the prior art and will not be repeated here. It should be understood that in the present application, although the current encoded image block is completely encoded and reconstructed during the inter prediction process, this process is a trial process, and the encoding information obtained by this process is not necessarily written into the code stream. Optionally, only the encoding information obtained by the encoding process involving the type of interpolation filter with the lowest rate-distortion cost is written into the code stream.
- P is a positive integer, which is determined by the accuracy of the sub-pixel interpolation performed by the first-type interpolation filter.
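The rate-distortion selection described above can be sketched as follows. This is a toy illustration, not the codec's implementation: the Lagrange multiplier `lam`, the `trials` dictionary, and the function names are assumptions introduced here.

```python
import numpy as np

def rd_cost(original, reconstructed, num_bits, lam):
    # distortion: mean square error between the reconstructed block and
    # the original current image block; rate: size of the code stream
    distortion = np.mean((original.astype(np.float64)
                          - reconstructed.astype(np.float64)) ** 2)
    return distortion + lam * num_bits

def select_target_filter(original, trials, lam):
    # trials maps each candidate filter (or filter type) to the
    # (reconstructed block, coded bits) produced by its trial encoding
    return min(trials, key=lambda name: rd_cost(original, *trials[name], lam))
```

Only the encoding information produced by the trial with the lowest cost would then be written to the code stream, as the text notes.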
- the video encoder can search for an integer pixel reference block that optimally matches the current encoded image block; each interpolation filter in the candidate interpolation filter set performs sub-pixel interpolation on the integer pixel reference image block to obtain N sub-pixel reference image blocks, where N is a positive integer; the prediction block that best matches the current encoded image block is determined among the integer pixel reference image block and the N sub-pixel reference image blocks; and the motion information is determined based on the prediction block.
- the target filter is an interpolation filter that interpolates the prediction block or a type of interpolation filter that obtains the prediction block.
- the motion information points to integer pixels
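The best-match search among the integer-pel reference block and the N sub-pixel reference blocks can be sketched as below. The SAD matching criterion and the function names are assumptions for illustration; the text does not fix the matching cost.

```python
import numpy as np

def sad(a, b):
    # sum of absolute differences, a common block-matching cost
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def best_prediction(current, int_pel_block, sub_pel_blocks):
    # index -1 selects the integer-pel reference block; otherwise the
    # index of the best-matching sub-pixel reference block is returned
    best_idx, best_cost = -1, sad(current, int_pel_block)
    for i, blk in enumerate(sub_pel_blocks):
        cost = sad(current, blk)
        if cost < best_cost:
            best_idx, best_cost = i, cost
    return best_idx
```

When the returned index is -1, the prediction block is the integer-pel block, the motion information points to an integer pixel position, and no filter indication is needed.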
- the candidate interpolation filter set may include a second interpolation filter obtained by any one of the above interpolation filter training methods.
- the filter parameter of the target interpolation filter is a preset filter parameter; or, the filter parameters of the target interpolation filter are filter parameters obtained by online training according to any of the above-mentioned interpolation filter training methods.
- the coding information further includes the filter parameters of the trained target interpolation filter; or, the coding information further includes a filter parameter difference, where the filter parameter difference is the difference between the filter parameters of the target interpolation filter trained for the current image unit and the filter parameters of the target interpolation filter trained for the previously encoded image unit.
- the image unit includes image frames, slices, video sequence subgroups, coding tree units (CTU), coding units (CU), prediction units (PU), and so on. That is to say, the video encoder may train once for each image frame, slice, video sequence subgroup, coding tree unit (CTU), coding unit (CU), or prediction unit (PU) it encodes.
- the video encoder may take the current encoded image block as a sample image each time, and train the second interpolation filter in the candidate interpolation filter set.
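The per-image-unit signalling of trained filter parameters — either the full parameters or only the difference against the previous unit — can be sketched as follows. The message dictionaries are a hypothetical stand-in for the actual bitstream syntax.

```python
import numpy as np

def encode_filter_params(trained, previous=None):
    # first unit: signal the full trained parameters; later units:
    # signal only the difference against the previously signalled ones
    if previous is None:
        return {'kind': 'full', 'payload': trained}
    return {'kind': 'diff', 'payload': trained - previous}

def decode_filter_params(message, previous=None):
    # the decoder mirrors the encoder: add the parsed difference to the
    # filter parameters kept from the previously decoded image unit
    if message['kind'] == 'full':
        return message['payload']
    return previous + message['payload']
```

Signalling the difference rather than the full parameter set is the usual motivation for the filter parameter difference mentioned in the text: successive image units tend to train to similar parameters.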
- the following describes two specific implementation processes of the video image decoding method provided by the embodiments of the present invention based on FIGS. 9 and 10.
- the method may be performed by the video decoder 200.
- the method is described as a series of steps or operations. It should be understood that the steps may be performed in various orders and/or simultaneously, and are not limited to the execution order shown in FIG. 9 or FIG. 10.
- the implementation process may include some or all of the following steps:
- S96 Perform a prediction process on the current decoded image block based on the motion information of the current decoded image block, where the prediction process includes: performing sub-pixel interpolation according to the target interpolation filter indicated by the indication information to obtain the prediction block of the currently decoded image block.
- S98 Reconstruct the reconstructed block of the currently decoded image block based on the predicted block of the currently decoded image block.
- step S92 may be executed before or after step S94, or may be executed simultaneously with step S94, which is not limited in this embodiment of the present invention.
- the encoded information parsed from the code stream includes the indication information of the target interpolation filter.
- whether the parsed encoding information contains motion information is related to the inter prediction mode. When the inter prediction mode is the merge mode, the video decoder can inherit the motion information of the previously decoded image block merged in the merge mode; when the inter prediction mode is the non-merge mode, the video decoder can parse the index of the motion information of the decoded image block from the code stream, or parse the index of the motion information of the decoded image block and the motion vector difference from the code stream, to obtain the motion information.
- an implementation of step S94 may include: the video decoder may parse the index of the motion information of the decoded image block from the code stream; further, the motion information of the currently decoded image block is determined based on the index of the motion information and the candidate motion information list of the currently decoded image block.
- step S94 may include: the video decoder may parse the index of the motion information of the decoded image block and the motion vector difference from the code stream; then, the motion vector prediction value of the currently decoded image block is determined based on the index of the motion information and the candidate motion information list of the currently decoded image block; based on the motion vector prediction value and the motion vector difference, the motion vector of the currently decoded image block is obtained.
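The motion vector reconstruction in this step can be sketched as below; the candidate-list contents and integer motion vector units are illustrative assumptions.

```python
def reconstruct_motion_vector(mv_index, candidate_list, mvd):
    # look up the motion vector predictor in the candidate motion
    # information list by the parsed index, then add the parsed motion
    # vector difference component-wise: MV = MVP + MVD
    mvp = candidate_list[mv_index]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```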
- step S94 may include: the video decoder may inherit the motion information of the previously decoded image block merged in the merge mode. It can be understood that the motion information of the previously decoded image block is consistent with the motion information of the currently decoded image block.
- the video decoder may first perform S94, and when the acquired motion information of the currently decoded image block points to a fractional pixel position, then perform step S92 to parse the indication information of the target interpolation filter from the code stream. It can be understood that when the motion information points to an integer pixel position, the coding information corresponding to the current image block parsed from the code stream does not include the indication information of the target interpolation filter; when the obtained motion information of the currently decoded image block points to an integer pixel position, the video decoder does not need to perform S92 and can at this point make the prediction based on the acquired motion information.
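The conditional parsing described above — the filter indication is present only when the motion vector points to a fractional pixel position — can be sketched as follows, assuming quarter-pel motion vectors stored as integers (an assumption for this sketch; the text does not fix the motion vector precision).

```python
MV_FRACTIONAL_BITS = 2  # quarter-pel: the low two bits hold the fractional part

def points_to_fractional_position(mv):
    # true when either motion vector component has a non-zero fractional part
    mask = (1 << MV_FRACTIONAL_BITS) - 1
    return (mv[0] & mask) != 0 or (mv[1] & mask) != 0

def parse_filter_indication(read_flag, mv):
    # read_flag stands in for parsing one syntax element from the stream;
    # at integer-pel positions nothing is parsed and no filter is needed
    return read_flag() if points_to_fractional_position(mv) else None
```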
- the implementation process may include some or all of the following steps:
- S102 Parse, from the code stream, the information indicating the inter prediction mode of the currently decoded image block;
- if the inter prediction mode of the current image block is a non-target inter prediction mode, perform an inter prediction process on the current decoded image block based on the motion information of the current decoded image block, where the prediction process includes: performing sub-pixel interpolation according to the target interpolation filter indicated by the indication information of the target interpolation filter parsed from the code stream to obtain the prediction block of the currently decoded image block;
- S108 Reconstruct the current decoded image block based on the prediction block of the current decoded image block.
- in this implementation, different inter prediction modes are distinguished.
- the indication information of the target interpolation filter needs to be encoded into the code stream only when the inter prediction mode is a non-target prediction mode (such as a non-merge mode) and the motion information points to a fractional pixel position; when the inter prediction mode is the target inter prediction mode (such as the merge mode), there is no need to encode the motion information, the index of the motion information, the motion vector difference, or the indication information of the target interpolation filter into the code stream.
- correspondingly, the indication information of the target interpolation filter needs to be parsed; however, when the inter prediction mode is the target prediction mode (e.g., merge mode) and the motion information of the currently decoded image block points to a fractional pixel position, the motion information of the previously decoded image block merged in the merge mode and the indication information of the target interpolation filter can be inherited.
- step S104 may include: the video decoder may parse the index of the motion information of the currently decoded image block from the code stream; further, the motion information of the currently decoded image block is determined based on the index of the motion information of the currently decoded image block and the candidate motion information list of the currently decoded image block.
- step S104 may include: the video decoder may parse the index of the motion information of the decoded image block and the motion vector difference from the code stream; then, the motion vector prediction value of the currently decoded image block is determined based on the index of the motion information and the candidate motion information list of the currently decoded image block; based on the motion vector prediction value and the motion vector difference, the motion vector of the currently decoded image block is obtained.
- if the motion information of the currently decoded image block points to a fractional pixel position, the video decoder needs to parse the indication information of the target interpolation filter from the code stream; during inter prediction, the video decoder performs sub-pixel interpolation according to the target interpolation filter indicated by the indication information to obtain the prediction block of the currently decoded image block. If the motion information of the currently decoded image block points to an integer pixel position, the video decoder directly obtains the prediction block pointed to by the motion information according to the motion information.
- step S104 may include: the video decoder may inherit the motion information of the previously decoded image block merged in the merge mode.
- the video decoder also needs to inherit the indication information of the interpolation filter used in the decoding process of the previously decoded image block merged in the merge mode; the target interpolation filter specified by this indication information is then used during the inter prediction process of the video decoder, and sub-pixel interpolation is performed according to the target interpolation filter indicated by the indication information to obtain the prediction block of the currently decoded image block. If the motion information of the currently decoded image block points to an integer pixel position, the video decoder directly obtains the prediction block pointed to by the motion information according to the motion information.
- the indication information of the target interpolation filter may also be used in the encoding process.
- the decoding end can determine, from the indication information of the target interpolation filter parsed from the code stream, the target filter indicated by the indication information, and interpolate through the target filter to obtain the prediction block of the currently decoded image block during inter prediction.
- the indication information may indicate the target interpolation filter that obtains the prediction block of the currently decoded image block through sub-pixel interpolation, or may indicate the type of the target interpolation filter. If the indication information indicates the type of the target interpolation filter, one implementation in which the video decoder performs sub-pixel interpolation according to the target interpolation filter indicated by the indication information is: the video decoder determines, within the type of interpolation filter specified by the indication information and according to the motion information, the target interpolation filter used to obtain, through sub-pixel interpolation, the prediction block indicated by the motion information.
- the filter parameter of the target interpolation filter at the video decoding end may be a preset filter parameter that is consistent with the filter parameter of the target interpolation filter at the video encoding end; or, the filter parameter of the target interpolation filter may also be a filter parameter obtained according to the training method of the interpolation filter described above.
- the encoding information further includes filter parameters of the target interpolation filter for the currently encoded image unit.
- the video decoder at the decoding end can also parse the filter parameters from the code stream, where the filter parameters may be the filter parameters of the target interpolation filter for the currently decoded image unit obtained at the video encoding end by the above filter training method. Before performing sub-pixel interpolation according to the target interpolation filter indicated by the indication information to obtain the prediction block of the currently decoded image block, the target interpolation filter can also be configured with the filter parameters of the target interpolation filter for the currently decoded image unit.
- the encoding information further includes the filter parameter difference.
- the video decoder at the decoding end can also parse the filter parameter difference from the code stream, where the filter parameter difference is the difference between the filter parameters of the target interpolation filter trained for the currently decoded image unit and the filter parameters of the target interpolation filter trained for the previously decoded image unit.
- before the video decoder performs sub-pixel interpolation according to the target interpolation filter indicated by the indication information to obtain the prediction block of the currently decoded image block, the filter parameters of the target interpolation filter for the currently decoded image unit can also be obtained from the filter parameters of the target interpolation filter for the previously decoded image unit and the filter parameter difference; further, the target interpolation filter is configured with the filter parameters of the target interpolation filter for the currently decoded image unit.
- the filter parameters of the target interpolation filter at the encoding end and the decoding end may be fixed as the preset filter parameters.
- the coding information may not include the filter parameter or the filter parameter difference of the target interpolation filter, and the decoding end does not need to parse the filter parameter or the filter parameter difference of the target interpolation filter.
- the image unit is an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), a prediction unit (PU), or the like. That is, the video decoder updates the filter parameters once per decoded image unit.
- the following provides a schematic flowchart of yet another video image decoding method provided by the present application, as shown in FIG. 11.
- the method may include, but is not limited to, some or all of the following steps:
- S1101 Parse, from the code stream, the information indicating the inter prediction mode of the currently decoded image block.
- S1102 Determine whether the inter prediction mode specified by the information indicating the inter prediction mode of the currently decoded image block is a merge mode.
- if so, step S1103 is executed; otherwise, that is, when the inter prediction mode specified by the information indicating the inter prediction mode of the currently decoded image block is the non-merge mode, step S1105 is executed.
- S1103 Acquire the motion information of the previously decoded image block merged in the merge mode and the indication information of the target interpolation filter.
- the motion information of the previously decoded image block merged in the merge mode is the motion information of the currently decoded image block.
- S1104 Determine whether the motion information of the currently decoded image block points to an integer pixel position.
- after S1103, step S1104 may be executed. If the judgment result of S1104 is yes, the image block corresponding to the integer pixel position pointed to by the motion information is the prediction block of the currently decoded image block, and the video decoder may execute step S1109; otherwise, step S1108 is executed.
- S1105 Parse the motion information of the currently decoded image block from the code stream.
- S1106 Determine whether the motion information of the currently decoded image block points to an integer pixel position.
- after S1105, step S1106 may be executed. If the judgment result of S1106 is yes, the image block corresponding to the integer pixel position pointed to by the motion information is the prediction block of the currently decoded image block, and the video decoder can execute step S1109; otherwise, the prediction block of the currently decoded image block is a sub-pixel image block, and the video decoder executes step S1107.
- S1107 Parse the indication information of the target interpolation filter used for the currently decoded image block.
- S1108 Perform sub-pixel interpolation according to the target interpolation filter indicated by the indication information of the target interpolation filter to obtain the prediction block of the currently decoded image block.
- S1109 Reconstruct the reconstructed block of the currently decoded image block based on the predicted block of the currently decoded image block.
- the video decoder judges whether the decoded image block in the above process is the last image block, and if so, the decoding process ends, otherwise, the above decoding process can be performed on the next image block to be decoded.
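The FIG. 11 flow (S1101-S1109) can be sketched end-to-end as below. Here `stream` and `merged_neighbor` are hypothetical dictionaries standing in for the parsed bitstream and the previously decoded merged block, and the returned tuple stands in for the actual prediction and reconstruction steps.

```python
def decode_block(stream, merged_neighbor):
    if stream['mode'] == 'merge':                      # S1102: merge mode?
        mv = merged_neighbor['mv']                     # S1103: inherit motion info
        filter_flag = merged_neighbor['filter_flag']   # ... and filter indication
    else:
        mv = stream['mv']                              # S1105: parse motion info
        filter_flag = None
    fractional = mv[0] % 1 != 0 or mv[1] % 1 != 0      # S1104 / S1106
    if not fractional:
        return ('integer_pel', mv, None)               # S1109 on the integer block
    if filter_flag is None:
        filter_flag = stream['filter_flag']            # S1107: parse indication
    return ('sub_pel', mv, filter_flag)                # S1108 then S1109
```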
- FIG. 12 is a schematic block diagram of an interpolation filter training device provided by an embodiment of the present invention.
- the interpolation filter training device 1200 may be connected to the inter prediction unit in the encoder 100, or set in other units in the encoder 100.
- the training of the interpolation filter may also be implemented by a computing device, which may be a computer, a server, or another device with data processing capabilities.
- the interpolation filter training device 1200 may include, but is not limited to, a tag data acquisition module 1201, an interpolation module 1202, and a parameter determination module 1203, where:
- the tag data acquisition module 1201 is configured to interpolate the pixels of the sample image at integer pixel positions through the first interpolation filter to obtain the first sub-pixel image of the sample image at the first sub-pixel position;
- the interpolation module 1202 is configured to input the sample image into a second interpolation filter to obtain a second sub-pixel image
- the parameter determining module 1203 is configured to determine the filter parameter of the second interpolation filter by minimizing a first function representing the difference between the first sub-pixel image and the second sub-pixel image.
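The cooperation of the three modules — label generation by the first filter, prediction by the second filter, and parameter determination by minimizing the first function — can be sketched with a toy 1-D FIR second filter trained by gradient descent on the mean squared error. The tap count, step size, and 1-D setting are illustrative assumptions; the patent does not restrict the second filter to this form.

```python
import numpy as np

def train_second_filter(samples, labels, taps=2, steps=400, lr=0.1):
    # samples: 1-D integer-pel rows; labels: the first filter's sub-pixel
    # rows. The first function is the MSE between the second filter's
    # output (here a learned FIR convolution) and the label image.
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=taps)
    for _ in range(steps):
        preds = np.array([np.convolve(s, w, mode='valid') for s in samples])
        err = preds - labels
        grad = np.zeros_like(w)
        for s, e in zip(samples, err):
            for k in range(taps):
                # d(pred[i]) / d(w[k]) = s[i + taps - 1 - k]
                grad[k] += 2 * np.mean(e * s[taps - 1 - k: len(s) - k])
        w -= lr * grad / len(samples)
    return w
```

For example, when the labels are produced by a known half-pel averaging "first filter" with taps (0.5, 0.5), the trained taps recover it.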
- FIG. 13 is a schematic block diagram of another interpolation filter training apparatus provided by an embodiment of the present invention.
- the interpolation filter training device 1300 may include, but is not limited to, a tag data acquisition module 1301, an interpolation module 1302, an inverse interpolation module 1303, and a parameter determination module 1304.
- the tag data acquisition module 1301 is configured to interpolate the pixels of the sample image at integer pixel positions through the first interpolation filter to obtain the first sub-pixel image of the sample image at the first sub-pixel position;
- the interpolation module 1302 is configured to input the sample image into a second interpolation filter to obtain a second sub-pixel image
- the inverse interpolation module 1303 is configured to input the second sub-pixel image, after a flip operation, into a third interpolation filter to obtain a first image, and to obtain a second image by applying the inverse of the flip operation to the first image, where the second interpolation filter and the third interpolation filter share filter parameters;
- the parameter determining module 1304 is configured to determine the filter parameter according to a first function representing the difference between the first sub-pixel image and the second sub-pixel image and a second function representing the difference between the sample image and the second image.
- the parameter determination module 1304 is specifically configured to determine the filter parameter by minimizing a third function, where the third function is a weighted sum of the first function representing the difference between the first sub-pixel image and the second sub-pixel image and the second function representing the difference between the sample image and the second image.
- the parameter determination module 1304 is specifically configured to determine the filter parameter by alternately minimizing the first function representing the difference between the first sub-pixel image and the second sub-pixel image and the second function representing the difference between the sample image and the second image.
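The third function above — a weighted sum of the first and second functions — can be sketched as below; the MSE difference measure and the weights `alpha` and `beta` are hypothetical, as the text does not fix them.

```python
import numpy as np

def third_function(first_subpel, second_subpel, sample, second_image,
                   alpha=1.0, beta=0.5):
    # first function: difference between the label sub-pixel image and
    # the second filter's sub-pixel image
    first = np.mean((first_subpel - second_subpel) ** 2)
    # second function: difference between the sample image and the image
    # reconstructed through the inverse-interpolation branch
    second = np.mean((sample - second_image) ** 2)
    return alpha * first + beta * second
```

Minimizing this weighted sum (or alternating between the two terms, as in the variant above) couples the forward interpolation quality with the invertibility constraint imposed by the shared-parameter third filter.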
- FIG. 14 is a schematic block diagram of yet another interpolation filter training apparatus provided by an embodiment of the present application.
- the apparatus 1400 may include a processor 1410 and a memory 1420.
- the memory 1420 is connected to the processor 1410 through a bus 1430.
- the memory 1420 is used to store program codes for implementing any of the above interpolation filter training methods.
- the processor 1410 is used to call the program codes stored in the memory to execute the interpolation filter training methods described in this application; for the related description, refer to the embodiments of the interpolation filter training methods described in FIGS. 6A-6D, which will not be repeated in this embodiment of the present application.
- the apparatus 1400 may take the form of a computing system containing multiple computing devices, or take the form of a single computing device such as a mobile phone, tablet computer, laptop computer, notebook computer, desktop computer, and so on.
- the processor 1410 in the apparatus 1400 may be a central processor.
- the processor 1410 may be any other type of device or multiple devices that can manipulate or process information currently or will be developed in the future.
- although a single processor such as the processor 1410 in FIG. 14 may be used to practice the disclosed embodiments, using more than one processor may achieve advantages in speed and efficiency.
- the memory 1420 in the apparatus 1400 may be a read-only memory (Read Only Memory, ROM) device or a random access memory (random access memory, RAM) device. Any other suitable type of storage device may be used as the memory 1420.
- the memory 1420 may include code and data 1401 (eg, sample images) accessed by the processor 1410 using the bus 1430.
- the memory 1420 may further include an operating system 1402 and an application program 1403, and the application program 1403 includes at least one program that permits the processor 1410 to perform the method described herein.
- the application 1403 may include applications 1 to N, and applications 1 to N further include video coding applications that perform the methods described herein.
- Apparatus 1400 may also include additional memory in the form of a secondary memory, which may be, for example, a memory card used with a mobile computing device. Because a video communication session may contain a large amount of information, the information may be stored in whole or in part in the secondary memory and loaded into the memory 1420 for processing as needed.
- the device 1400 may also include, but is not limited to, a communication interface or module, an input / output device, etc.
- the communication interface or module is used to implement data exchange between the device 1400 and other devices (for example, encoding devices or decoding devices).
- the input device is used to realize the input of information (text, images, sound, etc.) or commands, and may include but not limited to a touch screen, a keyboard, a camera, a recorder, and the like.
- the output device is used to realize the output of information (text, images, sound, etc.) or commands, and may include but not limited to a display, a microphone, etc., and this application is not limited.
- FIG. 15 is a schematic block diagram of an encoder for implementing the video image encoding method described in FIG. 7 or FIG. 8 according to an embodiment of the present application.
- each unit in the encoder 1500 is as follows:
- the inter prediction unit 1501 is configured to perform inter prediction on the current coded image block to obtain motion information of the current coded image block, where the motion information of the current coded image block points to a fractional pixel position; the inter prediction unit includes a filter selection unit 1502, and the filter selection unit 1502 is specifically configured to: determine a target interpolation filter for the current encoded image block from the candidate interpolation filter set;
- the entropy encoding unit 1503 is configured to encode the current coded image block based on the inter prediction mode of the current coded image block and the motion information of the current coded image block, obtain the coding information, and encode the coding information into the code stream, where the coding information includes the indication information of the target interpolation filter; the indication information of the target interpolation filter is used to indicate that the reference block at the fractional pixel position corresponding to the current encoded image block is obtained through sub-pixel interpolation by the target interpolation filter.
- each unit in the encoder 1500 is as follows:
- the inter prediction unit 1501 is configured to perform inter prediction on the current coded image block to obtain motion information of the current coded image block, where the motion information of the current coded image block points to a fractional pixel position; the inter prediction unit includes a filter selection unit 1502, and the filter selection unit 1502 is configured to: determine a target interpolation filter for the current encoded image block from a set of candidate interpolation filters;
- the entropy coding unit 1503 is configured to encode the current coded image block based on the inter prediction mode of the current coded image block and the motion information of the current coded image block, obtain the coding information, and encode the coding information into the code stream, where, if the inter prediction mode of the currently coded image block is the target inter prediction mode, the coding information does not include the indication information of the target interpolation filter; if the inter prediction mode of the currently coded image block is a non-target inter prediction mode, the coding information includes the indication information of the target interpolation filter, and the indication information of the target interpolation filter is used to instruct the current encoded image block to use the target interpolation filter to perform sub-pixel interpolation.
- the filter selection unit 1502 is specifically configured to determine a target interpolation filter for the current encoded image block from the candidate interpolation filter set according to a rate-distortion cost criterion.
- the inter prediction unit 1501 is specifically configured to:
- the motion information is determined based on the prediction block, where the interpolation filter used to interpolate the prediction block is the target interpolation filter.
- the candidate interpolation filter set includes a second interpolation filter obtained by any of the interpolation filter training methods described in FIGS. 6A-6D.
- the filter parameter of the target interpolation filter is a preset filter parameter; or, the filter parameter of the target interpolation filter is a filter parameter obtained according to any of the interpolation filter training methods described above in FIGS. 6A-6D.
- the coding information further includes the filter parameters, obtained through training, of the target interpolation filter; or, the coding information further includes a filter parameter difference, where the filter parameter difference is the difference between the trained filter parameters of the target interpolation filter for the current image unit and the trained filter parameters of the target interpolation filter for the previously encoded image unit.
- the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
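The filter-parameter difference described above can be sketched as a simple componentwise delta between the current image unit's trained coefficients and the previously coded unit's coefficients. The coefficient values below are illustrative only:

```python
def encode_param_diff(current, previous):
    # encoder side: signal only the change relative to the previous image unit
    return [c - p for c, p in zip(current, previous)]

def apply_param_diff(previous, diff):
    # decoder side: reconstruct the current unit's filter parameters
    return [p + d for p, d in zip(previous, diff)]

prev = [-1, 4, -11, 40, 40, -11, 4, -1]   # previously coded unit's taps (illustrative)
curr = [-1, 5, -12, 40, 40, -12, 5, -1]   # current unit's trained taps (illustrative)
diff = encode_param_diff(curr, prev)
```

Signaling the delta rather than the full parameter set is cheaper whenever successive image units train to similar coefficients, which is the rationale for this variant of the coding information.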
- FIG. 16 is a schematic block diagram of a decoder for implementing the video image decoding method described in FIGS. 8-10 according to an embodiment of the present application.
- each unit in the decoder 1600 is as follows:
- the entropy decoding unit 1601 is configured to parse out the indication information of the target interpolation filter from the code stream; and to obtain the motion information of the currently decoded image block, where the motion information points to a fractional pixel position;
- the inter prediction unit 1602 is configured to perform a prediction process on the current decoded image block based on the motion information of the current decoded image block, where the prediction process includes: performing according to the target interpolation filter indicated by the indication information Sub-pixel interpolation to obtain the prediction block of the current decoded image block.
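The sub-pixel interpolation performed by an interpolation filter is a tap-filter convolution over integer-position samples. Below is a minimal 1-D sketch using the HEVC-style 8-tap half-pel luma coefficients, shown only as a familiar example of a "traditional" filter in a candidate set; the trained filter of this application would supply different coefficients:

```python
def half_pel_row(samples):
    """Interpolate the half-sample positions of a 1-D row of integer-position
    samples with an 8-tap filter (HEVC luma half-pel taps, illustrative)."""
    taps = (-1, 4, -11, 40, 40, -11, 4, -1)  # sum = 64, hence the >> 6 below
    # replicate edges so every half position has 8 support samples
    pad = [samples[0]] * 3 + list(samples) + [samples[-1]] * 4
    out = []
    for i in range(len(samples) - 1):  # one half position between each pair
        acc = sum(t * pad[i + k] for k, t in enumerate(taps))
        out.append((acc + 32) >> 6)    # rounded normalization by 64
    return out
```

A flat row interpolates to the same flat value, since the taps sum to 64; this is a quick sanity check that the normalization is correct.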
- the reconstruction unit 1603 is configured to reconstruct the reconstruction block of the current decoded image block based on the prediction block of the current decoded image block.
- Corresponding to the video image decoding method shown in FIG. 10 or FIG. 11, in another embodiment of the present application, the specific functions of each unit in the decoder 1600 are as follows:
- the entropy decoding unit 1601 is configured to parse out the information of the currently decoded image block indicating the inter prediction mode of the currently decoded image block from the code stream;
- the inter prediction unit 1602 is configured to obtain the motion information of the current decoded image block, where the motion information points to a fractional pixel position; and, if the inter prediction mode of the current image block is a non-target inter prediction mode, to perform a prediction process on the current decoded image block based on the motion information of the current decoded image block, where the prediction process includes: performing sub-pixel interpolation according to the target interpolation filter indicated by the indication information of the target interpolation filter parsed from the code stream, to obtain the prediction block of the current decoded image block;
- the reconstruction unit 1603 is configured to reconstruct the current decoded image block based on the prediction block of the current decoded image block.
- the inter prediction unit 1602 is further configured to: if the inter prediction mode of the current image block is the target inter prediction mode, perform a prediction process on the current decoded image block based on the motion information of the current decoded image block, where the prediction process includes: determining a target interpolation filter for the current decoded image block; and performing sub-pixel interpolation through the target interpolation filter to obtain the prediction block of the current decoded image block.
- the inter prediction unit 1602 determining a target interpolation filter for the current decoded image block specifically includes: determining that the interpolation filter used in the decoding process of the previously decoded image block is the target interpolation filter used for the current decoded image block; or, determining that the target interpolation filter used for the current decoded image block is the target interpolation filter indicated by the indication information of the target interpolation filter parsed from the code stream.
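The two alternatives above can be sketched as a decoder-side lookup: in the target (merge-like) mode the filter is inherited from the previously decoded block, so no filter index needs to be parsed; otherwise the signaled index selects from the candidate set. The candidate-set contents and names here are illustrative assumptions, not from the application:

```python
# illustrative candidate interpolation filter set
CANDIDATE_FILTERS = {0: "traditional_8tap", 1: "trained_cnn"}

def target_filter_for_block(inherit_from_previous, previous_block_filter, signaled_index):
    # inherit the previously decoded block's filter, or use the signaled one
    if inherit_from_previous:
        return previous_block_filter
    return CANDIDATE_FILTERS[signaled_index]
```

The inheritance branch is what lets the coding information omit the filter indication in the target inter prediction mode.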
- the decoder 1600 acquiring the motion information of the currently decoded image block may include but is not limited to the following three implementation manners:
- the entropy decoding unit 1601 is specifically configured to parse out the index of the motion information of the currently decoded image block from the code stream;
- the inter prediction unit 1602 is further configured to determine the motion information of the currently decoded image block based on the index of the motion information of the decoded image block and the candidate motion information list of the current decoded image block.
- the entropy decoding unit 1601 is specifically configured to: parse out the index of the motion information of the current decoded image block and the motion vector difference from the code stream;
- the inter prediction unit 1602 is further configured to: determine the motion vector prediction value of the current decoded image block based on the index of the motion information of the current decoded image block and the candidate motion information list of the current decoded image block; and, based on the The motion vector prediction value and the motion vector difference value to obtain the motion vector of the current decoded image block.
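The motion-vector reconstruction in this manner is simply MV = MVP + MVD per component; the fractional part of the resulting vector then selects the interpolation phase. A sketch in quarter-pel units (the precision is an illustrative assumption):

```python
def reconstruct_mv(mvp, mvd):
    # motion vector = predictor from the candidate list + signaled difference
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

def split_mv_component(v, frac_positions=4):
    # integer-pel offset and fractional phase (quarter-pel: phase in 0..3)
    return v // frac_positions, v % frac_positions

mv = reconstruct_mv((10, -3), (2, 1))
```

A non-zero fractional phase is what triggers the sub-pixel interpolation by the target interpolation filter; a zero phase means the reference samples can be used directly.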
- the inter prediction unit 1602 is further configured to: if the inter prediction mode of the current decoded image block is merge mode (merge mode), obtain the motion information of the previously decoded image block merged in the merge mode as the motion information of the current decoded image block.
- the target filter is a second interpolation filter obtained by any of the interpolation filter training methods described in FIGS. 6A-6D.
- the filter parameter of the target interpolation filter is a preset filter parameter; or, the filter parameter of the target interpolation filter is a filter obtained by any of the interpolation filter training methods described above in FIGS. 6A-6D Parameter.
- the entropy decoding unit 1601 is further configured to: parse out the filter parameters of the target interpolation filter for the currently decoded image unit from the code stream;
- the decoder 1600 further includes a configuration unit 1604 for configuring the target interpolation filter through the filter parameter of the target interpolation filter of the currently decoded image unit.
- the entropy decoding unit 1601 is further configured to: parse out a filter parameter difference from the code stream, where the filter parameter difference is the difference between the filter parameters of the target interpolation filter for the currently decoded image unit and the filter parameters of the target interpolation filter for the previously decoded image unit;
- the decoder further includes: a configuration unit 1604, configured to obtain the filter parameters of the target interpolation filter of the currently decoded image unit according to the filter parameters of the target interpolation filter of the previously decoded image unit and the filter parameter difference; and to configure the target interpolation filter with the filter parameters of the target interpolation filter of the currently decoded image unit.
- the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
- the device 1700 may include a processor 1702 and a memory 1704, where the memory 1704 is connected to the processor 1702 through a bus 1712, and the memory 1704 is used to store program code for implementing any of the above video image encoding methods,
- the processor 1702 is configured to call the program code stored in the memory to perform the various video image encoding/decoding methods described in this application. For details, refer to the relevant descriptions in the video image encoding method embodiments described in FIGS. 7-11; details are not repeated here.
- the device 1700 may take the form of a computing system containing multiple computing devices, or the form of a single computing device such as a mobile phone, tablet computer, laptop computer, notebook computer, or desktop computer.
- the processor 1702 in the device 1700 may be a central processor.
- the processor 1702 may be any other type of device, or multiple devices, capable of manipulating or processing information, now existing or hereafter developed.
- although a single processor such as the processor 1702 can be used to practice the disclosed embodiments, using more than one processor can provide advantages in speed and efficiency.
- the memory 1704 in the device 1700 may be a read-only memory (Read Only Memory, ROM) device or a random access memory (random access memory, RAM) device. Any other suitable type of storage device may be used as the memory 1704.
- the memory 1704 may include code and data 1706 accessed by the processor 1702 using the bus 1712.
- the memory 1704 may further include an operating system 1708 and an application program 1710.
- the application program 1710 includes at least one program that permits the processor 1702 to perform the method described herein.
- the application 1710 may include applications 1 to N, and applications 1 to N further include video coding applications that perform the methods described herein.
- the device 1700 may also include additional memory in the form of a secondary memory 1714, which may be, for example, a memory card used with a mobile computing device. Because a video communication session may contain a large amount of information, the information may be stored in whole or in part in the secondary memory 1714 and loaded into the memory 1704 as needed for processing.
- Device 1700 may also include one or more output devices, such as display 1718.
- the display 1718 may be a touch-sensitive display that combines a display and a touch-sensitive element operable to sense touch input.
- the display 1718 may be coupled to the processor 1702 through the bus 1712.
- other output devices that allow the user to program the device 1700 or otherwise use the device 1700 may be provided in addition to, or as an alternative to, the display 1718.
- the display can be implemented in different ways, including through a liquid crystal display (LCD), a cathode-ray tube (CRT) display, a plasma display, or a light-emitting diode (LED) display such as an organic LED (OLED) display.
- the device 1700 may also include, or be in communication with, an image sensing device 1720, such as a camera or any other image sensing device, now existing or hereafter developed, that can sense an image, such as an image of the user operating the device 1700.
- the image sensing device 1720 may be placed directly facing the user who runs the device 1700.
- the position and optical axis of the image sensing device 1720 can be configured so that its field of view includes an area immediately adjacent to the display 1718 and the display 1718 is visible from the area.
- the device 1700 may also include, or be in communication with, a sound sensing device 1722, such as a microphone or any other sound sensing device, now existing or hereafter developed, that can sense sound in the vicinity of the device 1700.
- the sound sensing device 1722 may be placed to directly face the user who runs the apparatus 1700, and may be used to receive sounds made by the user when the apparatus 1700 is operated, such as voice or other utterances.
- although the processor 1702 and the memory 1704 of the device 1700 are illustrated in FIG. 17 as being integrated in a single unit, other configurations may also be used.
- the operation of the processor 1702 may be distributed among multiple directly-coupled machines (each machine has one or more processors), or distributed in a local area or other network.
- the memory 1704 may be distributed among multiple machines, such as network-based memory or memory among multiple machines running the device 1700. Although only a single bus is shown here, the bus 1712 of the device 1700 may be formed by multiple buses.
- the secondary memory 1714 may be directly coupled to other components of the device 1700 or may be accessed through a network, and may include a single integrated unit, such as one memory card, or multiple units, such as multiple memory cards. Therefore, the device 1700 can be implemented in a variety of configurations.
- the processor may be a central processing unit (CPU), or the processor may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
- the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
- the memory may include a read only memory (ROM) device or a random access memory (RAM) device. Any other suitable type of storage device can also be used as the memory.
- the memory may include code and data accessed by the processor using the bus.
- the memory may further include an operating system and an application program including at least one program that allows the processor to execute the video encoding or decoding method described in the present application (in particular, the inter prediction method or the motion information prediction method described in the present application).
- the application program may include applications 1 to N, which further include a video encoding or decoding application (referred to simply as a video decoding application) that performs the video image encoding or decoding method described in this application.
- the bus system may also include a power bus, a control bus, and a status signal bus.
- various buses are marked as bus systems in the figure.
- the decoding device may also include one or more output devices, such as a display.
- the display may be a touch-sensitive display that combines a display with a touch-sensitive element operable to sense touch input.
- the display can be connected to the processor via a bus.
- Computer-readable media may include computer-readable storage media, which corresponds to tangible media, such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another (eg, according to a communication protocol).
- computer-readable media may generally correspond to (1) non-transitory tangible computer-readable storage media, or (2) communication media, such as signals or carrier waves.
- Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and / or data structures for implementation of the techniques described in this application.
- the computer program product may include a computer-readable medium.
- such computer-readable storage media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer.
- any connection is properly called a computer-readable medium.
- for example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or the wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
- the computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other temporary media, but are actually directed to non-transitory tangible storage media.
- magnetic disks and optical discs include compact discs (CDs), laser discs, optical discs, digital versatile discs (DVDs), and Blu-ray discs, where magnetic disks usually reproduce data magnetically, while optical discs reproduce data optically using lasers. Combinations of the above should also be included within the scope of computer-readable media.
- processor may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein.
- the functions described in the various illustrative logical blocks, modules, and steps described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated into a combined codec.
- the techniques can be fully implemented in one or more circuits or logic elements.
- the technology of the present application may be implemented in a variety of devices or equipment, including wireless handsets, integrated circuits (ICs), or a set of ICs (eg, chipsets).
- Various components, modules or units are described in this application to emphasize the functional aspects of the device for performing the disclosed technology, but do not necessarily need to be implemented by different hardware units.
- various units may be combined in a codec hardware unit in combination with suitable software and/or firmware, or provided by interoperating hardware units (including one or more processors as described above).
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
Claims (76)
- An interpolation filter training method, characterized by comprising: interpolating pixels of a sample image at integer pixel positions by a first interpolation filter to obtain a first sub-pixel image of the sample image at a first fractional pixel position; inputting the sample image into a second interpolation filter to obtain a second sub-pixel image; and determining filter parameters of the second interpolation filter by minimizing a first function representing the difference between the first sub-pixel image and the second sub-pixel image.
- An interpolation filter training method, characterized by comprising: interpolating pixels of a sample image at integer pixel positions by a first interpolation filter to obtain a first sub-pixel image of the sample image at a first fractional pixel position; inputting the sample image into a second interpolation filter to obtain a second sub-pixel image; inputting the second sub-pixel image, after a flipping operation, into a third interpolation filter to obtain a first image, and applying the inverse of the flipping operation to the first image to obtain a second image, wherein the second interpolation filter and the third interpolation filter share filter parameters; and determining the filter parameters according to a first function representing the difference between the first sub-pixel image and the second sub-pixel image and a second function representing the difference between the sample image and the second image.
- The method according to claim 2, characterized in that determining the filter parameters according to the first function representing the difference between the first sub-pixel image and the second sub-pixel image and the second function representing the difference between the sample image and the second image specifically comprises: determining the filter parameters by minimizing a third function, wherein the third function is a weighted sum of the first function representing the difference between the first sub-pixel image and the second sub-pixel image and the second function representing the difference between the sample image and the second image.
- The method according to claim 2, characterized in that determining the filter parameters according to the first function representing the difference between the first sub-pixel image and the second sub-pixel image and the second function representing the difference between the sample image and the second image specifically comprises: determining the filter parameters by alternately minimizing the first function representing the difference between the first sub-pixel image and the second sub-pixel image and the second function representing the difference between the sample image and the second image.
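Claims 3 and 4 describe two ways of combining the two loss functions. Below is a toy sketch of the weighted-sum variant of claim 3, using scalar quadratic stand-ins for the two losses; in the actual method the losses compare images and the parameter is a filter coefficient vector, so these scalars and the grid search are purely illustrative:

```python
def first_loss(theta):
    # stand-in for the error between the first and second sub-pixel images
    return (theta - 2.0) ** 2

def second_loss(theta):
    # stand-in for the error between the sample image and the second image
    return (theta - 4.0) ** 2

def third_loss(theta, w=0.5):
    # claim 3: weighted sum of the first and second loss functions
    return w * first_loss(theta) + (1.0 - w) * second_loss(theta)

# brute-force minimization over a parameter grid (gradient descent in practice)
best_theta = min((i / 100 for i in range(601)), key=third_loss)
```

With equal weights the minimizer sits between the two individual optima, illustrating how the weight w balances sub-pixel fidelity against the invertibility constraint of the shared-parameter third filter.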
- A video image encoding method, characterized by comprising: performing inter prediction on a current coded image block to obtain motion information of the current coded image block, wherein the motion information of the current coded image block points to a fractional pixel position, and the inter prediction process comprises: determining a target interpolation filter for the current coded image block from a set of candidate interpolation filters; and encoding the current coded image block based on the inter prediction mode of the current coded image block and the motion information of the current coded image block to obtain coding information, and encoding the coding information into a code stream, wherein the coding information includes indication information of the target interpolation filter, and the indication information of the target interpolation filter is used to indicate that sub-pixel interpolation is performed by the target interpolation filter to obtain a reference block at the fractional pixel position corresponding to the current coded image block.
- The method according to claim 5, characterized in that determining the target interpolation filter for the current coded image block from the set of candidate interpolation filters comprises: determining the target interpolation filter for the current coded image block from the set of candidate interpolation filters according to a rate-distortion cost criterion.
- The method according to claim 5, characterized in that performing inter prediction on the current coded image block to obtain the motion information of the current coded image block comprises: determining an integer-pixel reference image block that optimally matches the current coded image block; performing sub-pixel interpolation on the integer-pixel reference image block with each interpolation filter in the set of candidate interpolation filters to obtain N sub-pixel reference image blocks, where N is a positive integer; determining, among the integer-pixel reference image block and the N sub-pixel reference image blocks, a prediction block that optimally matches the current coded image block; and determining the motion information based on the prediction block, wherein the interpolation filter used to interpolate the prediction block is the target interpolation filter.
- The method according to any one of claims 5-7, characterized in that the set of candidate interpolation filters includes a second interpolation filter obtained by the interpolation filter training method according to any one of claims 1-4.
- The method according to claim 8, characterized in that, if the target filter is the second interpolation filter obtained by the interpolation filter training method according to any one of claims 1-4, then: the filter parameters of the target interpolation filter are preset filter parameters; or, the filter parameters of the target interpolation filter are filter parameters obtained by the interpolation filter training method according to any one of claims 1-4.
- The method according to claim 9, characterized in that the coding information further includes the filter parameters, obtained through training, of the target interpolation filter; or, the coding information further includes a filter parameter difference, where the filter parameter difference is the difference between the trained filter parameters of the target interpolation filter for the current image unit and the trained filter parameters of the target interpolation filter for the previously encoded image unit.
- The method according to claim 10, characterized in that the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
- A video image encoding method, characterized by comprising: performing inter prediction on a current coded image block to obtain motion information of the current coded image block, wherein the motion information of the current coded image block points to a fractional pixel position, and the inter prediction process comprises: determining a target interpolation filter for the current coded image block from a set of candidate interpolation filters; and encoding the current coded image block based on the inter prediction mode of the current coded image block and the motion information of the current coded image block to obtain coding information, and encoding the coding information into a code stream, wherein, if the inter prediction mode of the current coded image block is a target inter prediction mode, the coding information does not include indication information of the target interpolation filter; if the inter prediction mode of the current coded image block is a non-target inter prediction mode, the coding information includes the indication information of the target interpolation filter, and the indication information of the target interpolation filter is used to indicate that the current coded image block uses the target interpolation filter to perform sub-pixel interpolation.
- The method according to claim 12, characterized in that determining the target interpolation filter for the current coded image block from the set of candidate interpolation filters comprises: determining the target interpolation filter for the current coded image block from the set of candidate interpolation filters according to a rate-distortion cost criterion.
- The method according to claim 12, characterized in that performing inter prediction on the current coded image block to obtain the motion information of the current coded image block comprises: determining an integer-pixel reference image block that optimally matches the current coded image block; performing sub-pixel interpolation on the integer-pixel reference image block with each interpolation filter in the set of candidate interpolation filters to obtain N sub-pixel reference image blocks, where N is a positive integer; determining, among the integer-pixel reference image block and the N sub-pixel reference image blocks, a prediction block that optimally matches the current coded image block; and determining the motion information based on the prediction block, wherein the interpolation filter used to interpolate the prediction block is the target interpolation filter.
- The method according to any one of claims 12-14, characterized in that the set of candidate interpolation filters includes a second interpolation filter obtained by the interpolation filter training method according to any one of claims 1-4.
- The method according to claim 15, characterized in that, if the target filter is the second interpolation filter obtained by the interpolation filter training method according to any one of claims 1-4, then: the filter parameters of the target interpolation filter are preset filter parameters; or, the filter parameters of the target interpolation filter are filter parameters obtained by the interpolation filter training method according to any one of claims 1-4.
- The method according to claim 16, characterized in that the coding information further includes the filter parameters, obtained through training, of the target interpolation filter; or, the coding information further includes a filter parameter difference, where the filter parameter difference is the difference between the trained filter parameters of the target interpolation filter for the currently encoded image unit and the trained filter parameters of the target interpolation filter for the previously encoded image unit.
- The method according to claim 17, characterized in that the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
- A video image decoding method, characterized by comprising: parsing indication information of a target interpolation filter from a code stream; obtaining motion information of a current decoded image block, wherein the motion information points to a fractional pixel position; performing a prediction process on the current decoded image block based on the motion information of the current decoded image block, wherein the prediction process comprises: performing sub-pixel interpolation according to the target interpolation filter indicated by the indication information to obtain a prediction block of the current decoded image block; and reconstructing a reconstruction block of the current decoded image block based on the prediction block of the current decoded image block.
- The method according to claim 19, characterized in that obtaining the motion information of the current decoded image block comprises: parsing an index of the motion information of the current decoded image block from the code stream; and determining the motion information of the current decoded image block based on the index of the motion information of the current decoded image block and a candidate motion information list of the current decoded image block.
- The method according to claim 19, characterized in that obtaining the motion information of the current decoded image block comprises: parsing an index of the motion information of the current decoded image block and a motion vector difference from the code stream; determining a motion vector prediction value of the current decoded image block based on the index of the motion information of the current decoded image block and a candidate motion information list of the current decoded image block; and obtaining a motion vector of the current decoded image block based on the motion vector prediction value and the motion vector difference.
- The method according to claim 19, characterized in that obtaining the motion information of the current decoded image block comprises: if the inter prediction mode of the current decoded image block is merge mode (merge mode), obtaining the motion information of the previously decoded image block merged in the merge mode as the motion information of the current decoded image block.
- The method according to any one of claims 20-22, characterized in that, if the target filter is the second interpolation filter obtained by the interpolation filter training method according to any one of claims 1-4, then: the filter parameters of the target interpolation filter are preset filter parameters; or, the filter parameters of the target interpolation filter are filter parameters obtained by the interpolation filter training method according to any one of claims 1-4.
- The method according to claim 23, characterized in that the method further comprises: parsing filter parameters of the target interpolation filter for the currently decoded image unit from the code stream; and configuring the target interpolation filter with the filter parameters of the target interpolation filter for the currently decoded image unit.
- The method according to claim 23, characterized in that the method further comprises: parsing a filter parameter difference from the code stream, where the filter parameter difference is the difference between the filter parameters of the target interpolation filter for the currently decoded image unit and the filter parameters of the target interpolation filter for the previously decoded image unit; obtaining the filter parameters of the target interpolation filter of the currently decoded image unit according to the filter parameters of the target interpolation filter of the previously decoded image unit and the filter parameter difference; and configuring the target interpolation filter with the filter parameters of the target interpolation filter of the currently decoded image unit.
- The method according to claim 24 or 25, characterized in that the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
- A video image decoding method, characterized by comprising: parsing, from a code stream, information of a current decoded image block indicating the inter prediction mode of the current decoded image block; obtaining motion information of the current decoded image block, wherein the motion information points to a fractional pixel position; if the inter prediction mode of the current image block is a non-target inter prediction mode, performing a prediction process on the current decoded image block based on the motion information of the current decoded image block, wherein the prediction process comprises: performing sub-pixel interpolation according to the target interpolation filter indicated by indication information of the target interpolation filter parsed from the code stream to obtain a prediction block of the current decoded image block; and reconstructing the current decoded image block based on the prediction block of the current decoded image block.
- The method according to claim 27, characterized in that obtaining the motion information of the current decoded image block comprises: parsing an index of the motion information of the current decoded image block from the code stream; and determining the motion information of the current decoded image block based on the index of the motion information of the current decoded image block and a candidate motion information list of the current decoded image block.
- The method according to claim 27, characterized in that obtaining the motion information of the current decoded image block comprises: parsing an index of the motion information of the current decoded image block and a motion vector difference from the code stream; determining a motion vector prediction value of the current decoded image block based on the index of the motion information of the current decoded image block and a candidate motion information list of the current decoded image block; and obtaining a motion vector of the current decoded image block based on the motion vector prediction value and the motion vector difference.
- The method according to claim 27, characterized in that, if the inter prediction mode of the current image block is a target inter prediction mode, a prediction process is performed on the current decoded image block based on the motion information of the current decoded image block, wherein the prediction process comprises: determining a target interpolation filter for the current decoded image block; and performing sub-pixel interpolation through the target interpolation filter to obtain the prediction block of the current decoded image block.
- The method according to claim 30, characterized in that the target inter prediction mode is merge mode, wherein obtaining the motion information of the current decoded image block comprises: obtaining the motion information of the previously decoded image block merged in the merge mode; and determining the target interpolation filter for the current decoded image block comprises: determining that the interpolation filter used in the decoding process of the previously decoded image block is the target interpolation filter for the current decoded image block; or, determining that the target interpolation filter for the current decoded image block is the target interpolation filter indicated by the indication information of the target interpolation filter parsed from the code stream.
- The method according to any one of claims 27-31, characterized in that, if the target filter is the second interpolation filter obtained by the interpolation filter training method according to any one of claims 1-4, then: the filter parameters of the target interpolation filter are preset filter parameters; or, the filter parameters of the target interpolation filter are filter parameters obtained by the interpolation filter training method according to any one of claims 1-4.
- The method according to any one of claims 27-32, characterized in that, if the target filter is the second interpolation filter obtained by the interpolation filter training method according to any one of claims 1-4, the method further comprises: parsing filter parameters of the target interpolation filter for the currently decoded image unit from the code stream; and configuring the target interpolation filter with the filter parameters of the target interpolation filter of the currently decoded image unit.
- The method according to any one of claims 27-32, characterized in that, if the target filter is the second interpolation filter obtained by the interpolation filter training method according to any one of claims 1-4, the method further comprises: parsing a filter parameter difference from the code stream, where the filter parameter difference is the difference between the filter parameters of the target interpolation filter for the currently decoded image unit and the filter parameters of the target interpolation filter for the previously decoded image unit; obtaining the filter parameters of the target interpolation filter of the currently decoded image unit according to the filter parameters of the target interpolation filter of the previously decoded image unit and the filter parameter difference; and configuring the target interpolation filter with the filter parameters of the target interpolation filter of the currently decoded image unit.
- The method according to claim 33 or 34, characterized in that the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
- An interpolation filter training apparatus, characterized by comprising: a label data obtaining module, configured to interpolate pixels of a sample image at integer pixel positions by a first interpolation filter to obtain a first sub-pixel image of the sample image at a first fractional pixel position; an interpolation module, configured to input the sample image into a second interpolation filter to obtain a second sub-pixel image; and a parameter determining module, configured to determine filter parameters of the second interpolation filter by minimizing a first function representing the difference between the first sub-pixel image and the second sub-pixel image.
- An interpolation filter training apparatus, characterized by comprising: a label data obtaining module, configured to interpolate pixels of a sample image at integer pixel positions by a first interpolation filter to obtain a first sub-pixel image of the sample image at a first fractional pixel position; an interpolation module, configured to input the sample image into a second interpolation filter to obtain a second sub-pixel image; an inverse interpolation module, configured to input the second sub-pixel image, after a flipping operation, into a third interpolation filter to obtain a first image, and to apply the inverse of the flipping operation to the first image to obtain a second image, wherein the second interpolation filter and the third interpolation filter share filter parameters; and a parameter determining module, configured to determine the filter parameters according to a first function representing the difference between the first sub-pixel image and the second sub-pixel image and a second function representing the difference between the sample image and the second image.
- The apparatus according to claim 37, characterized in that the parameter determining module is specifically configured to: determine the filter parameters by minimizing a third function, wherein the third function is a weighted sum of the first function representing the difference between the first sub-pixel image and the second sub-pixel image and the second function representing the difference between the sample image and the second image.
- The apparatus according to claim 37, characterized in that the parameter determining module is specifically configured to: determine the filter parameters by alternately minimizing the first function representing the difference between the first sub-pixel image and the second sub-pixel image and the second function representing the difference between the sample image and the second image.
- An encoder, characterized by comprising: an inter prediction unit, configured to perform inter prediction on a current coded image block to obtain motion information of the current coded image block, wherein the motion information of the current coded image block points to a fractional pixel position, and the inter prediction unit includes a filter selection unit configured to determine a target interpolation filter for the current coded image block from a set of candidate interpolation filters; and an entropy coding unit, configured to encode the current coded image block based on the inter prediction mode of the current coded image block and the motion information of the current coded image block to obtain coding information, and to encode the coding information into a code stream, wherein the coding information includes indication information of the target interpolation filter, and the indication information of the target interpolation filter is used to indicate that sub-pixel interpolation is performed by the target interpolation filter to obtain a reference block at the fractional pixel position corresponding to the current coded image block.
- The encoder according to claim 40, characterized in that the filter selection unit is specifically configured to determine the target interpolation filter for the current coded image block from the set of candidate interpolation filters according to a rate-distortion cost criterion.
- The encoder according to claim 40, characterized in that the inter prediction unit is specifically configured to: determine an integer-pixel reference image block that optimally matches the current coded image block; perform sub-pixel interpolation on the integer-pixel reference image block with each interpolation filter in the set of candidate interpolation filters to obtain N sub-pixel reference image blocks, where N is a positive integer; determine, among the integer-pixel reference image block and the N sub-pixel reference image blocks, a prediction block that optimally matches the current coded image block; and determine the motion information based on the prediction block, wherein the interpolation filter used to interpolate the prediction block is the target interpolation filter.
- The encoder according to any one of claims 40-42, characterized in that the set of candidate interpolation filters includes a second interpolation filter obtained by the interpolation filter training method according to any one of claims 1-4.
- The encoder according to claim 43, characterized in that, if the target filter is the second interpolation filter obtained by the interpolation filter training method according to any one of claims 1-4, then: the filter parameters of the target interpolation filter are preset filter parameters; or, the filter parameters of the target interpolation filter are filter parameters obtained by the interpolation filter training method according to any one of claims 1-4.
- The encoder according to claim 44, characterized in that the coding information further includes the filter parameters, obtained through training, of the target interpolation filter; or, the coding information further includes a filter parameter difference, where the filter parameter difference is the difference between the trained filter parameters of the target interpolation filter for the current image unit and the trained filter parameters of the target interpolation filter for the previously encoded image unit.
- The encoder according to claim 45, characterized in that the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
- An encoder, characterized by comprising: an inter prediction unit, configured to perform inter prediction on a current coded image block to obtain motion information of the current coded image block, wherein the motion information of the current coded image block points to a fractional pixel position, and the inter prediction unit includes a filter selection unit configured to: determine a target interpolation filter for the current coded image block from a set of candidate interpolation filters; and an entropy coding unit, configured to encode the current coded image block based on the inter prediction mode of the current coded image block and the motion information of the current coded image block to obtain coding information, and to encode the coding information into a code stream, wherein, if the inter prediction mode of the current coded image block is a target inter prediction mode, the coding information does not include indication information of the target interpolation filter; if the inter prediction mode of the current coded image block is a non-target inter prediction mode, the coding information includes the indication information of the target interpolation filter, and the indication information of the target interpolation filter is used to indicate that the current coded image block uses the target interpolation filter to perform sub-pixel interpolation.
- The encoder according to claim 47, characterized in that the filter selection unit is specifically configured to: determine the target interpolation filter for the current coded image block from the set of candidate interpolation filters according to a rate-distortion cost criterion.
- The encoder according to claim 47, characterized in that the inter prediction unit is specifically configured to: determine an integer-pixel reference image block that optimally matches the current coded image block; perform sub-pixel interpolation on the integer-pixel reference image block with each interpolation filter in the set of candidate interpolation filters to obtain N sub-pixel reference image blocks, where N is a positive integer; determine, among the integer-pixel reference image block and the N sub-pixel reference image blocks, a prediction block that optimally matches the current coded image block; and determine the motion information based on the prediction block, wherein the interpolation filter used to interpolate the prediction block is the target interpolation filter.
- The encoder according to any one of claims 47-49, characterized in that the set of candidate interpolation filters includes a second interpolation filter obtained by the interpolation filter training method according to any one of claims 1-4.
- The encoder according to claim 50, characterized in that, if the target filter is the second interpolation filter obtained by the interpolation filter training method according to any one of claims 1-4, then: the filter parameters of the target interpolation filter are preset filter parameters; or, the filter parameters of the target interpolation filter are filter parameters obtained by the interpolation filter training method according to any one of claims 1-4.
- The encoder according to claim 51, characterized in that the coding information further includes the filter parameters, obtained through training, of the target interpolation filter; or, the coding information further includes a filter parameter difference, where the filter parameter difference is the difference between the trained filter parameters of the target interpolation filter for the currently encoded image unit and the trained filter parameters of the target interpolation filter for the previously encoded image unit.
- The encoder according to claim 52, characterized in that the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
- A decoder, characterized by comprising: an entropy decoding unit, configured to parse indication information of a target interpolation filter from a code stream, and to obtain motion information of a current decoded image block, wherein the motion information points to a fractional pixel position; an inter prediction unit, configured to perform a prediction process on the current decoded image block based on the motion information of the current decoded image block, wherein the prediction process includes: performing sub-pixel interpolation according to the target interpolation filter indicated by the indication information to obtain a prediction block of the current decoded image block; and a reconstruction unit, configured to reconstruct a reconstruction block of the current decoded image block based on the prediction block of the current decoded image block.
- The decoder according to claim 54, characterized in that the entropy decoding unit is specifically configured to parse an index of the motion information of the current decoded image block from the code stream; and the inter prediction unit is further configured to determine the motion information of the current decoded image block based on the index of the motion information of the current decoded image block and a candidate motion information list of the current decoded image block.
- The decoder according to claim 54, characterized in that the entropy decoding unit is specifically configured to: parse an index of the motion information of the current decoded image block and a motion vector difference from the code stream; and the inter prediction unit is further configured to: determine a motion vector prediction value of the current decoded image block based on the index of the motion information of the current decoded image block and a candidate motion information list of the current decoded image block; and obtain a motion vector of the current decoded image block based on the motion vector prediction value and the motion vector difference.
- The decoder according to claim 54, characterized in that the inter prediction unit is further configured to: if the inter prediction mode of the current decoded image block is merge mode (merge mode), obtain the motion information of the previously decoded image block merged in the merge mode as the motion information of the current decoded image block.
- The decoder according to any one of claims 54-57, characterized in that, if the target filter is the second interpolation filter obtained by the interpolation filter training method according to any one of claims 1-4, then: the filter parameters of the target interpolation filter are preset filter parameters; or, the filter parameters of the target interpolation filter are filter parameters obtained by the interpolation filter training method according to any one of claims 1-4.
- The decoder according to claim 58, characterized in that the entropy decoding unit is further configured to: parse filter parameters of the target interpolation filter for the currently decoded image unit from the code stream; and the decoder further includes a configuration unit, configured to configure the target interpolation filter with the filter parameters of the target interpolation filter of the currently decoded image unit.
- The decoder according to claim 58, characterized in that the entropy decoding unit is further configured to: parse a filter parameter difference from the code stream, where the filter parameter difference is the difference between the filter parameters of the target interpolation filter for the currently decoded image unit and the filter parameters of the target interpolation filter for the previously decoded image unit; and the decoder further includes: a configuration unit, configured to obtain the filter parameters of the target interpolation filter of the currently decoded image unit according to the filter parameters of the target interpolation filter of the previously decoded image unit and the filter parameter difference; and to configure the target interpolation filter with the filter parameters of the target interpolation filter of the currently decoded image unit.
- The decoder according to claim 59 or 60, characterized in that the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
- A decoder, characterized by comprising: an entropy decoding unit, configured to parse, from a code stream, information of a current decoded image block indicating the inter prediction mode of the current decoded image block; an inter prediction unit, configured to obtain motion information of the current decoded image block, wherein the motion information points to a fractional pixel position, and, if the inter prediction mode of the current image block is a non-target inter prediction mode, to perform a prediction process on the current decoded image block based on the motion information of the current decoded image block, wherein the prediction process includes: performing sub-pixel interpolation according to the target interpolation filter indicated by indication information of the target interpolation filter parsed from the code stream to obtain a prediction block of the current decoded image block; and a reconstruction unit, configured to reconstruct the current decoded image block based on the prediction block of the current decoded image block.
- The decoder according to claim 62, characterized in that the entropy decoding unit is further configured to: parse an index of the motion information of the current decoded image block from the code stream; and the inter prediction unit is further configured to: determine the motion information of the current decoded image block based on the index of the motion information of the current decoded image block and a candidate motion information list of the current decoded image block.
- The decoder according to claim 62, characterized in that the entropy decoding unit is further configured to: parse an index of the motion information of the current decoded image block and a motion vector difference from the code stream; and the inter prediction unit is further configured to: determine a motion vector prediction value of the current decoded image block based on the index of the motion information of the current decoded image block and a candidate motion information list of the current decoded image block; and obtain a motion vector of the current decoded image block based on the motion vector prediction value and the motion vector difference.
- The decoder according to claim 62, characterized in that the inter prediction unit is further configured to: if the inter prediction mode of the current image block is a target inter prediction mode, perform a prediction process on the current decoded image block based on the motion information of the current decoded image block, wherein the prediction process includes: determining a target interpolation filter for the current decoded image block; and performing sub-pixel interpolation according to the target interpolation filter to obtain the prediction block of the current decoded image block.
- The decoder according to claim 65, characterized in that the target inter prediction mode is merge mode, wherein obtaining the motion information of the current decoded image block comprises: obtaining the motion information of the previously decoded image block merged in the merge mode; and determining the target interpolation filter for the current decoded image block comprises: determining that the interpolation filter used in the decoding process of the previously decoded image block is the target interpolation filter for the current decoded image block; or, determining that the target interpolation filter for the current decoded image block is the target interpolation filter indicated by the indication information of the target interpolation filter parsed from the code stream.
- The decoder according to any one of claims 62-66, characterized in that, if the target filter is the second interpolation filter obtained by the interpolation filter training method according to any one of claims 1-4, then: the filter parameters of the target interpolation filter are preset filter parameters; or, the filter parameters of the target interpolation filter are filter parameters obtained by the interpolation filter training method according to any one of claims 1-4.
- The decoder according to any one of claims 62-67, characterized in that, if the target filter is the second interpolation filter obtained by the interpolation filter training method according to any one of claims 1-4, the entropy decoding unit is further configured to: parse filter parameters of the target interpolation filter for the currently decoded image unit from the code stream; and the decoder further includes: a configuration unit, configured to configure the target interpolation filter with the filter parameters of the target interpolation filter of the currently decoded image unit.
- The decoder according to any one of claims 62-67, characterized in that, if the target filter is the second interpolation filter obtained by the interpolation filter training method according to any one of claims 1-4, the entropy decoding unit is further configured to: parse a filter parameter difference from the code stream, where the filter parameter difference is the difference between the filter parameters of the target interpolation filter for the currently decoded image unit and the filter parameters of the target interpolation filter for the previously decoded image unit; and the decoder further includes: a configuration unit, configured to obtain the filter parameters of the target interpolation filter of the currently decoded image unit according to the filter parameters of the target interpolation filter of the previously decoded image unit and the filter parameter difference; and to configure the target interpolation filter with the filter parameters of the target interpolation filter of the currently decoded image unit.
- The decoder according to claim 68 or 69, characterized in that the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
- An interpolation filter training apparatus, characterized by comprising a memory and a processor, where the memory is configured to store program code, and the processor is configured to call the program code to perform the interpolation filter training method according to any one of claims 1-4.
- An encoding device, characterized by comprising a memory and a processor, where the memory is configured to store program code, and the processor is configured to call the program code to perform the video image encoding method according to any one of claims 5-19.
- A decoding device, characterized by comprising a memory and a processor, where the memory is configured to store program code, and the processor is configured to call the program code to perform the video image decoding method according to any one of claims 20-35.
- A computer-readable storage medium, characterized by comprising program code which, when run on a computer, causes the computer to perform the interpolation filter training method according to any one of claims 1-4.
- A computer-readable storage medium, characterized by comprising program code which, when run on a computer, causes the computer to perform the video image encoding method according to any one of claims 5-19.
- A computer-readable storage medium, characterized by comprising program code which, when run on a computer, causes the computer to perform the video image decoding method according to any one of claims 20-35.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021518927A JP7331095B2 (ja) | 2018-10-06 | 2019-09-26 | Interpolation filter training method and apparatus, video picture encoding and decoding method, and encoder and decoder |
KR1020217013057A KR102592279B1 (ko) | 2018-10-06 | 2019-09-26 | Interpolation filter training method and apparatus, video picture encoding and decoding method, and encoder and decoder |
EP19869096.8A EP3855741A4 (en) | 2018-10-06 | 2019-09-26 | TRAINING METHOD AND DEVICE FOR INTERPOLATION FILTER, VIDEO IMAGE CODING METHOD, VIDEO PICTURE DECODING METHOD, CODERS AND DECODERS |
US17/221,184 US20210227243A1 (en) | 2018-10-06 | 2021-04-02 | Interpolation filter training method and apparatus, video picture encoding and decoding method, and encoder and decoder |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811166872.XA CN111010568B (zh) | 2018-10-06 | 2018-10-06 | Interpolation filter training method and apparatus, and video picture encoding/decoding method and codec |
CN201811166872.X | 2018-10-06 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/221,184 Continuation US20210227243A1 (en) | 2018-10-06 | 2021-04-02 | Interpolation filter training method and apparatus, video picture encoding and decoding method, and encoder and decoder |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020069655A1 true WO2020069655A1 (zh) | 2020-04-09 |
Family
ID=70054947
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/108311 WO2020069655A1 (zh) | 2018-10-06 | 2019-09-26 | Interpolation filter training method and apparatus, and video picture encoding/decoding method and codec |
Country Status (6)
Country | Link |
---|---|
US (1) | US20210227243A1 (zh) |
EP (1) | EP3855741A4 (zh) |
JP (1) | JP7331095B2 (zh) |
KR (1) | KR102592279B1 (zh) |
CN (1) | CN111010568B (zh) |
WO (1) | WO2020069655A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021249290A1 (zh) * | 2020-06-10 | 2021-12-16 | Huawei Technologies Co., Ltd. | Loop filtering method and apparatus |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3993414A1 (en) * | 2020-11-03 | 2022-05-04 | Ateme | Method for image processing and apparatus for implementing the same |
CN112911286B (zh) * | 2021-01-29 | 2022-11-15 | Hangzhou Dianzi University | Design method for a sub-pixel interpolation filter |
US11750847B2 (en) | 2021-04-19 | 2023-09-05 | Tencent America LLC | Quality-adaptive neural network-based loop filter with smooth quality control by meta-learning |
WO2022246809A1 (zh) * | 2021-05-28 | 2022-12-01 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Encoding/decoding method, bitstream, encoder, decoder, and storage medium |
CN117597930A (zh) * | 2021-08-20 | 2024-02-23 | Shenzhen Transsion Holdings Co., Ltd. | Image processing method, mobile terminal, and storage medium |
US11924467B2 (en) * | 2021-11-16 | 2024-03-05 | Google Llc | Mapping-aware coding tools for 360 degree videos |
WO2023097095A1 (en) * | 2021-11-29 | 2023-06-01 | Beijing Dajia Internet Information Technology Co., Ltd. | Invertible filtering for video coding |
WO2023153891A1 (ko) * | 2022-02-13 | 2023-08-17 | LG Electronics Inc. | Image encoding/decoding method and apparatus, and recording medium storing a bitstream |
CN117201782A (zh) * | 2022-05-31 | 2023-12-08 | Huawei Technologies Co., Ltd. | Filtering method, filtering model training method, and related apparatus |
CN115277331B (zh) * | 2022-06-17 | 2023-09-12 | Zeku Technology (Beijing) Co., Ltd. | Signal compensation method and apparatus, modem, communication device, and storage medium |
CN115278248B (zh) * | 2022-09-28 | 2023-04-07 | Zhongshan Power Supply Bureau of Guangdong Power Grid Co., Ltd. | Video picture encoding device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1666429A (zh) * | 2002-07-09 | 2005-09-07 | Nokia Corporation | Method and system for selecting an interpolation filter type in video coding |
CN101043621A (zh) * | 2006-06-05 | 2007-09-26 | Huawei Technologies Co., Ltd. | Adaptive interpolation processing method and coding/decoding module |
CN101616325A (zh) * | 2009-07-27 | 2009-12-30 | Tsinghua University | Method for adaptive interpolation filter computation in video coding |
CN101790092A (zh) * | 2010-03-15 | 2010-07-28 | Hohai University Changzhou Campus | Intelligent filter design method based on picture block coding information |
CN102084655A (zh) * | 2008-07-07 | 2011-06-01 | Qualcomm Incorporated | Video encoding by filter selection |
CN102638678A (zh) * | 2011-02-12 | 2012-08-15 | LG Electronics (China) R&D Center Co., Ltd. | Inter-picture prediction method for video encoding/decoding and video codec |
CN103875246A (zh) * | 2011-10-18 | 2014-06-18 | Nippon Telegraph and Telephone Corporation | Video encoding method and apparatus, video decoding method and apparatus, and programs therefor |
US20160191946A1 (en) * | 2014-12-31 | 2016-06-30 | Microsoft Technology Licensing, Llc | Computationally efficient motion estimation |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040076333A1 (en) * | 2002-10-22 | 2004-04-22 | Huipin Zhang | Adaptive interpolation filter system for motion compensated predictive video coding |
US8516026B2 (en) * | 2003-03-10 | 2013-08-20 | Broadcom Corporation | SIMD supporting filtering in a video decoding system |
CN100551073C (zh) * | 2006-12-05 | 2009-10-14 | Huawei Technologies Co., Ltd. | Encoding/decoding method and apparatus, and sub-pixel interpolation processing method and apparatus |
EP2048886A1 (en) * | 2007-10-11 | 2009-04-15 | Panasonic Corporation | Coding of adaptive interpolation filter coefficients |
KR101648818B1 (ko) * | 2008-06-12 | 2016-08-17 | Thomson Licensing | Methods and apparatus for locally adaptive filtering for motion compensation interpolation and reference picture filtering |
EP2136565A1 (en) * | 2008-06-19 | 2009-12-23 | Thomson Licensing | Method for determining a filter for interpolating one or more pixels of a frame, method for encoding or reconstructing a frame and method for transmitting a frame |
KR101647376B1 (ko) * | 2009-03-30 | 2016-08-10 | LG Electronics Inc. | Video signal processing method and apparatus |
JP5570363B2 (ja) * | 2010-09-22 | 2014-08-13 | KDDI Corporation | Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, moving picture decoding method, and program |
JP5485851B2 (ja) * | 2010-09-30 | 2014-05-07 | Nippon Telegraph and Telephone Corporation | Video encoding method, video decoding method, video encoding apparatus, video decoding apparatus, and programs therefor |
MX346561B (es) * | 2012-07-02 | 2017-03-24 | Samsung Electronics Co Ltd | Method and apparatus for predicting a motion vector for video encoding or video decoding |
CN110177276B (zh) | 2014-02-26 | 2021-08-24 | Dolby Laboratories Licensing Corporation | Method for processing spatial regions of a video picture, storage medium, and computing apparatus |
JP2018530244A (ja) * | 2015-09-25 | 2018-10-11 | Huawei Technologies Co., Ltd. | Apparatus and method for video motion compensation with selectable interpolation filters |
US10341659B2 (en) * | 2016-10-05 | 2019-07-02 | Qualcomm Incorporated | Systems and methods of switching interpolation filters |
KR102595689B1 (ko) * | 2017-09-29 | 2023-10-30 | Intellectual Discovery Co., Ltd. | Image encoding/decoding method and apparatus, and recording medium storing a bitstream |
CN108012157B (zh) * | 2017-11-27 | 2020-02-04 | Shanghai Jiao Tong University | Method for constructing a convolutional neural network for fractional pixel interpolation in video coding |
2018
- 2018-10-06 CN CN201811166872.XA patent/CN111010568B/zh active Active

2019
- 2019-09-26 KR KR1020217013057A patent/KR102592279B1/ko active IP Right Grant
- 2019-09-26 EP EP19869096.8A patent/EP3855741A4/en active Pending
- 2019-09-26 JP JP2021518927A patent/JP7331095B2/ja active Active
- 2019-09-26 WO PCT/CN2019/108311 patent/WO2020069655A1/zh unknown

2021
- 2021-04-02 US US17/221,184 patent/US20210227243A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
EP3855741A1 (en) | 2021-07-28 |
US20210227243A1 (en) | 2021-07-22 |
JP2022514160A (ja) | 2022-02-10 |
CN111010568A (zh) | 2020-04-14 |
KR102592279B1 (ko) | 2023-10-19 |
KR20210064370A (ko) | 2021-06-02 |
CN111010568B (zh) | 2023-09-29 |
JP7331095B2 (ja) | 2023-08-22 |
EP3855741A4 (en) | 2021-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020069655A1 (zh) | Interpolation filter training method and apparatus, and video picture encoding/decoding method and codec | |
US11438578B2 (en) | Video picture prediction method and apparatus | |
WO2020125595A1 (zh) | Video coder and corresponding method | |
WO2020103800A1 (zh) | Video decoding method and video decoder | |
CN111919444A (zh) | Chroma block prediction method and apparatus | |
CN112385234A (zh) | Apparatus and method for picture and video coding | |
US20220094947A1 (en) | Method for constructing mpm list, method for obtaining intra prediction mode of chroma block, and apparatus | |
WO2020114394A1 (zh) | Video encoding/decoding method, video encoder, and video decoder | |
WO2020038378A1 (zh) | Chroma block prediction method and apparatus | |
US20230370597A1 (en) | Picture partitioning method and apparatus | |
WO2020135467A1 (zh) | Inter prediction method and apparatus, and corresponding encoder and decoder | |
KR20220024877A (ko) | Method for calculating the positions of integer grid reference samples for block-level boundary sample gradient calculation in bi-predictive optical flow calculation and bi-predictive correction | |
EP3890322A1 (en) | Video coder-decoder and corresponding method | |
CN110876061B (zh) | Chroma block prediction method and apparatus | |
WO2020147514A1 (zh) | Video encoder, video decoder, and corresponding methods | |
WO2020135371A1 (zh) | Context modeling method and apparatus for a flag bit | |
WO2020114509A1 (zh) | Video picture decoding and encoding method and apparatus | |
WO2020114393A1 (zh) | Transform method, inverse transform method, video encoder, and video decoder | |
WO2020114508A1 (zh) | Video encoding/decoding method and apparatus | |
WO2020135615A1 (zh) | Video picture decoding method and apparatus | |
US11917203B2 (en) | Non-separable transform method and device | |
WO2020143292A1 (zh) | Inter prediction method and apparatus | |
WO2020057506A1 (zh) | Chroma block prediction method and apparatus | |
WO2020119742A1 (zh) | Block partitioning method, video encoding/decoding method, and video codec | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19869096 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021518927 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20217013057 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2019869096 Country of ref document: EP Effective date: 20210421 |