WO2020069655A1 - Interpolation filter training method and apparatus, video image encoding and decoding method, and codec - Google Patents

Interpolation filter training method and apparatus, video image encoding and decoding method, and codec

Info

Publication number
WO2020069655A1
Authority
WO
WIPO (PCT)
Prior art keywords
filter, interpolation filter, image block, target, current
Application number
PCT/CN2019/108311
Other languages
English (en)
French (fr)
Inventor
吴枫
闫宁
刘东
李厚强
杨海涛
Original Assignee
华为技术有限公司
中国科学技术大学
Application filed by 华为技术有限公司 and 中国科学技术大学
Priority to JP2021518927A (JP7331095B2)
Priority to KR1020217013057A (KR102592279B1)
Priority to EP19869096.8A (EP3855741A4)
Publication of WO2020069655A1
Priority to US17/221,184 (US20210227243A1)

Classifications

    • H: ELECTRICITY; H04: ELECTRIC COMMUNICATION TECHNIQUE; H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/117: Filters, e.g. for pre-processing or post-processing
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/109: Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • H04N19/147: Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/176: Adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object, the region being a block, e.g. a macroblock
    • H04N19/184: Adaptive coding characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
    • H04N19/42: Coding characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/503: Predictive coding involving temporal prediction
    • H04N19/513: Predictive coding involving motion estimation or motion compensation, processing of motion vectors
    • H04N19/587: Predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
    • H04N19/59: Predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H04N19/80: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82: Details of filtering operations specially adapted for video compression, involving filtering within a prediction loop

Definitions

  • The present application relates to the technical field of video encoding and decoding, and in particular to an interpolation filter training method and apparatus, a video image encoding and decoding method, and a codec.
  • Digital video capabilities can be incorporated into a variety of devices, including digital TVs, digital live broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, video game devices, video game consoles, cellular or satellite radio phones (so-called "smart phones"), video teleconferencing devices, video streaming devices, and the like.
  • Digital video devices implement video compression technology, for example, the video coding techniques described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10 Advanced Video Coding (AVC), and H.265/High Efficiency Video Coding (HEVC), and in extensions of such standards.
  • Video devices can more efficiently transmit, receive, encode, decode, and/or store digital video information by implementing such video compression techniques.
  • Video compression techniques perform spatial (intra-image) prediction and/or temporal (inter-image) prediction to reduce or remove redundancy inherent in the video sequence.
  • A video slice (that is, a video frame or a portion of a video frame) may be partitioned into image blocks.
  • An image block in an intra-coded (I) slice of an image is encoded using spatial prediction with respect to reference samples in neighboring blocks in the same image.
  • An image block in an inter-coded (P or B) slice of an image may use spatial prediction relative to reference samples in neighboring blocks in the same image, or temporal prediction relative to reference samples in other reference images.
  • An image may be referred to as a frame, and a reference image may be referred to as a reference frame.
  • Various video coding standards, including the High Efficiency Video Coding (HEVC) standard, propose predictive coding modes for image blocks, in which a block to be encoded is predicted based on already encoded blocks of video data.
  • In intra prediction mode, the current block is predicted based on one or more previously decoded neighboring blocks in the same image as the current block; in inter prediction mode, the current block is predicted based on already decoded blocks in different images.
  • When the motion information of an image block points to a fractional pixel position, the best-matching reference block needs to be obtained by sub-pixel interpolation.
  • In the prior art, a fixed-coefficient interpolation filter is usually used to perform sub-pixel interpolation.
  • For some video content its prediction accuracy is poor, resulting in poor video image encoding and decoding performance.
  • Embodiments of the present application provide an interpolation filter training method and device, a video image encoding and decoding method, and a codec, which can improve the prediction accuracy of the motion information of image blocks, thereby improving coding and decoding performance.
  • In a first aspect, an embodiment of the present application provides an interpolation filter training method, including: a computing device interpolates pixels of a sample image at integer pixel positions through a first interpolation filter to obtain a first sub-pixel image of the sample image at a first fractional pixel position; and inputs the sample image into a second interpolation filter to obtain a second sub-pixel image;
  • further, the filter parameter of the second interpolation filter is determined by minimizing a first function that represents the difference between the first sub-pixel image and the second sub-pixel image.
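  • In symbols (notation ours, not the patent's): let X be the sample image, f_1 the first interpolation filter, and f_2(.; \theta) the second interpolation filter with parameters \theta; the training objective of the first aspect can then be written as

      \theta^{*} = \arg\min_{\theta} L_{1}(\theta), \qquad L_{1}(\theta) = \lVert f_{2}(X;\theta) - f_{1}(X) \rVert,

    where f_1(X) is the first sub-pixel image used as the label.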
  • In this solution, the first sub-pixel image obtained by interpolation with the traditional interpolation filter is used as label data to train the second interpolation filter, so that the trained second interpolation filter can be used directly to interpolate the pixel values at the first fractional pixel position. Because the label data is accurate, the encoding and decoding performance of the video image is improved.
  • Moreover, when the second interpolation filter is a neural network, it is a non-linear filter, which avoids the poor prediction accuracy that linear filters exhibit on complex video signals and can further improve the encoding and decoding performance of the video image.
  • In a second aspect, an embodiment of the present application further provides an interpolation filter training method, including: a computing device interpolates pixels of a sample image at integer pixel positions through a first interpolation filter to obtain a first sub-pixel image of the sample image at a first fractional pixel position; inputs the sample image into a second interpolation filter to obtain a second sub-pixel image; applies a flip operation to the second sub-pixel image and inputs the result into a third interpolation filter to obtain a first image; and applies the reverse of the flip operation to the first image to obtain a second image, where the second interpolation filter and the third interpolation filter share filter parameters. Further, the filter parameters are determined according to a first function representing the difference between the first sub-pixel image and the second sub-pixel image and a second function representing the difference between the sample image and the second image.
  • In this solution, sub-pixel interpolation is performed on the sample image through a traditional interpolation filter to obtain the first sub-pixel image, which is used as label data. Using the principle of the reversibility of sub-pixel interpolation, the filter parameters are determined by minimizing both the first function, which represents the difference between the first sub-pixel image and the second sub-pixel image, and the second function, which represents the difference between the sample image and the second image. Supervising with the sample image constrains the second interpolation filter and improves the accuracy of its sub-pixel interpolation, thereby improving the encoding and decoding performance of the video image.
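  • In the same notation (ours, not the patent's), with T denoting the flip operation, T^{-1} its reverse, and f_3 sharing the parameters \theta of f_2, the quantities of the second aspect are

      \hat{S} = f_{2}(X;\theta), \qquad \hat{X} = T^{-1}\!\left( f_{3}\left( T(\hat{S});\theta \right) \right),

      L_{1}(\theta) = \lVert \hat{S} - f_{1}(X) \rVert, \qquad L_{2}(\theta) = \lVert \hat{X} - X \rVert,

    where L_1 is the first function (against the label) and L_2 is the second function (against the sample image).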
  • In some possible implementations, the computing device determines the filter parameters according to the first function, which represents the difference between the first sub-pixel image and the second sub-pixel image, and the second function, which represents the difference between the sample image and the second image. This may include, but is not limited to, the following two implementation manners:
  • First implementation manner: the computing device determines the filter parameters by minimizing a third function, where the third function is a weighted sum of the first function representing the difference between the first sub-pixel image and the second sub-pixel image and the second function representing the difference between the sample image and the second image; a training-loop sketch of this manner follows the second manner below.
  • Second implementation manner: the computing device determines the filter parameters by alternately minimizing the first function representing the difference between the first sub-pixel image and the second sub-pixel image and the second function representing the difference between the sample image and the second image.
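  • The following is a minimal, hedged sketch of the first implementation manner, assuming PyTorch and a small CNN standing in for the second interpolation filter; the names SubPelCNN, flip (a horizontal mirror standing in for the flip operation), lam, and the dummy loader are our illustrative assumptions, not definitions from the patent:

      import torch
      import torch.nn as nn

      class SubPelCNN(nn.Module):
          # Toy CNN mapping an integer-pel image to one fractional-pel image.
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1),
              )

          def forward(self, x):
              return self.net(x)

      f2 = SubPelCNN()                     # second interpolation filter (trainable)
      f3 = f2                              # third filter shares parameters with f2
      optimizer = torch.optim.Adam(f2.parameters(), lr=1e-4)
      l1 = nn.L1Loss()
      lam = 1.0                            # weight of the second function

      def flip(t):                         # a flip operation (horizontal mirror)
          return torch.flip(t, dims=[-1])

      # Dummy data for illustration: (sample image, first sub-pixel image label).
      loader = [(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))]

      for x, label in loader:
          s_hat = f2(x)                    # second sub-pixel image
          x_hat = flip(f3(flip(s_hat)))    # second image: flip -> f3 -> reverse flip
          loss = l1(s_hat, label) + lam * l1(x_hat, x)   # third function (weighted sum)
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()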
  • The computing devices described in the first and second aspects may be encoding devices or compression devices, and the above devices may be devices with data processing functions such as computers, servers, or terminals (e.g., mobile phones, tablet computers, etc.).
  • In a third aspect, an embodiment of the present application further provides a video image encoding method, including:
  • the encoder performs inter prediction on the currently encoded image block to obtain motion information of the currently encoded image block, where the motion information of the currently encoded image block points to a fractional pixel position, and the inter prediction process includes: determining a target interpolation filter for the currently encoded image block from the set of candidate interpolation filters;
  • the coding information includes indication information of the target interpolation filter; the indication information of the target interpolation filter is used to indicate that the reference block at the fractional pixel position corresponding to the currently encoded image block is obtained by performing sub-pixel interpolation through the target interpolation filter.
  • In a fourth aspect, an embodiment of the present application further provides a video image encoding method, including:
  • the encoder performs inter prediction on the currently encoded image block to obtain motion information of the currently encoded image block, where the motion information of the currently encoded image block points to a fractional pixel position, and the inter prediction process includes: determining a target interpolation filter for the currently encoded image block from the set of candidate interpolation filters;
  • In this solution, the encoder may select an interpolation filter according to the content of the currently encoded image block to perform the interpolation operation, so that the obtained prediction block has higher prediction accuracy, which reduces the code stream and increases the compression rate of video images.
  • The encoder described in the third aspect or the fourth aspect may also be an encoding device including the encoder, and the encoding device may be a device with data processing functions, such as a computer, a server, or a terminal (e.g., mobile phone, tablet computer, etc.).
  • In some possible implementations, one implementation manner in which the encoder determines a target interpolation filter for the currently encoded image block from the set of candidate interpolation filters may be that the encoder determines the target interpolation filter for the currently encoded image block from the set of candidate interpolation filters according to the rate-distortion cost criterion.
  • In this way, the encoder can select, based on the content of the currently encoded image block, the interpolation filter with a low rate-distortion cost to perform the interpolation operation, thereby improving prediction accuracy, reducing the code stream, and increasing the compression rate of the video image.
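  • For reference, the rate-distortion cost criterion as commonly formulated in hybrid video coding (notation ours) selects the candidate filter minimizing

      J(f) = D(f) + \lambda \cdot R(f), \qquad f^{*} = \arg\min_{f \in \mathcal{F}} J(f),

    where D(f) is the distortion of the prediction obtained with filter f, R(f) is the number of bits required, \lambda is the Lagrange multiplier, and \mathcal{F} is the candidate interpolation filter set.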
  • In some possible implementations, one implementation manner in which the encoder performs inter prediction on the currently encoded image block to obtain motion information of the currently encoded image block may be:
  • the encoder determines an integer-pixel reference image block that optimally matches the currently encoded image block;
  • the motion information is determined based on the prediction block, where the interpolation filter used in the interpolation that produced the prediction block is the target interpolation filter.
  • In this way, the encoder can select the interpolation filter corresponding to the reference block with the least distortion to perform interpolation, reducing the code stream and improving the compression rate of the video image.
  • In some possible implementations, the set of candidate interpolation filters includes the second interpolation filter obtained by any interpolation filter training method described in the first aspect or the second aspect.
  • If the target interpolation filter is a second interpolation filter obtained by any of the interpolation filter training methods described in the first aspect or the second aspect, then: the filter parameter of the target interpolation filter is a preset filter parameter; or, the filter parameter of the target interpolation filter is a filter parameter obtained according to the interpolation filter training method of the first aspect or the second aspect.
  • In some possible implementations, the coding information further includes the filter parameters of the target interpolation filter obtained by training; or, the coding information further includes a filter parameter difference (sketched below), where the filter parameter difference is the difference between the filter parameters of the target interpolation filter trained for the current image unit and the filter parameters of the target interpolation filter trained for the previously encoded image unit.
  • In this way, the encoder can perform online training on the second interpolation filter in the set of candidate interpolation filters, so that the interpolation filter can be adjusted in real time according to the content of the currently encoded image unit, thereby improving prediction accuracy.
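  • A minimal sketch of the filter-parameter-difference signaling described above, assuming the parameters are exchanged as flat arrays (encode_param_diff and decode_params are our illustrative helpers, not patent-defined):

      import numpy as np

      def encode_param_diff(current: np.ndarray, previous: np.ndarray) -> np.ndarray:
          # Encoder side: signal only the change relative to the filter
          # parameters trained for the previously encoded image unit.
          return current - previous

      def decode_params(previous: np.ndarray, diff: np.ndarray) -> np.ndarray:
          # Decoder side: reconstruct the current filter parameters.
          return previous + diff

      prev = np.array([0.25, 0.50, 0.25])   # parameters of the previous unit
      cur = np.array([0.20, 0.60, 0.20])    # parameters after online training
      assert np.allclose(decode_params(prev, encode_param_diff(cur, prev)), cur)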
  • the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
  • In a fifth aspect, an embodiment of the present application further provides a video image decoding method, including:
  • the decoder parses the indication information of the target interpolation filter from the code stream, and obtains the motion information of the currently decoded image block, where the motion information points to a fractional pixel position;
  • a prediction process is performed on the currently decoded image block based on the motion information of the currently decoded image block, where the prediction process includes: performing sub-pixel interpolation according to the target interpolation filter indicated by the indication information to obtain the prediction block of the currently decoded image block;
  • and the reconstructed block of the currently decoded image block is reconstructed based on the prediction block of the currently decoded image block.
  • In a sixth aspect, an embodiment of the present application further provides a video image decoding method, including:
  • the decoder parses, from the code stream, the information of the currently decoded image block that indicates the inter prediction mode of the currently decoded image block;
  • if the inter prediction mode of the current image block is a non-target inter prediction mode, the prediction process performs sub-pixel interpolation through the target interpolation filter indicated by the indication information parsed from the code stream;
  • if the inter prediction mode of the current image block is a target inter prediction mode, a prediction process is performed on the currently decoded image block based on the motion information of the currently decoded image block, where the prediction process includes: determining a target interpolation filter for the currently decoded image block, and performing sub-pixel interpolation through the target interpolation filter to obtain the prediction block of the currently decoded image block.
  • In some possible implementations, determining the target interpolation filter for the currently decoded image block specifically includes: determining that the interpolation filter used in the decoding process of a previously decoded image block is the target interpolation filter for the currently decoded image block; or, determining that the target interpolation filter for the currently decoded image block is the target interpolation filter indicated by the indication information of the target interpolation filter parsed from the code stream.
  • In this solution, the decoder selects, during the inter prediction process, the interpolation filter indicated by the indication information to perform sub-pixel interpolation and obtain the prediction block of the currently decoded image block. This allows an interpolation filter matched to the content of the image block to be used for the interpolation operation, so that the obtained prediction block has higher prediction accuracy, which reduces the code stream and improves the compression rate of the video image.
  • In some possible implementations, the decoder may acquire the motion information of the currently decoded image block in, but not limited to, the following three implementation manners:
  • First implementation manner: the decoder parses the index of the motion information of the currently decoded image block from the code stream; further, the motion information of the currently decoded image block is determined based on the index of the motion information and the candidate motion information list of the currently decoded image block.
  • Second implementation manner: the decoder parses the motion information index and the motion vector difference of the currently decoded image block from the code stream; determines the motion vector prediction value of the currently decoded image block based on the index of the motion information and the candidate motion information list of the currently decoded image block; and further obtains the motion vector of the currently decoded image block based on the motion vector prediction value and the motion vector difference (see the relation after the third manner below).
  • Third implementation manner: in a target inter prediction mode (such as merge mode), if the inter prediction mode of the currently decoded image block is merge mode, the decoder uses the motion information of a previously decoded image block as the motion information of the currently decoded image block.
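  • In the second implementation manner, the motion vector is recovered with the usual predictor-plus-difference relation (notation ours):

      \mathrm{mv} = \mathrm{mvp} + \mathrm{mvd},

    where mvp is the motion vector prediction value selected from the candidate motion information list by the parsed index, and mvd is the motion vector difference parsed from the code stream.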
  • In some possible implementations, if the target interpolation filter is the second interpolation filter obtained by the interpolation filter training method described in the first aspect or the second aspect, then: the filter parameter of the target interpolation filter is a preset filter parameter; or, the filter parameter of the target interpolation filter is a filter parameter obtained according to the interpolation filter training method described in the first aspect or the second aspect.
  • In some possible implementations, the method may further include: configuring the target interpolation filter by using the filter parameters of the target interpolation filter of the currently decoded image unit.
  • In some possible implementations, the method may further include: parsing a filter parameter difference from the code stream, where the filter parameter difference is the difference between the filter parameters of the target interpolation filter for the currently decoded image unit and the filter parameters of the target interpolation filter for the previously decoded image unit; obtaining the filter parameters of the target interpolation filter of the currently decoded image unit from the difference; and configuring the target interpolation filter with the filter parameters of the target interpolation filter of the currently decoded image unit.
  • the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
  • In a seventh aspect, an embodiment of the present application further provides an interpolation filter training device, including several functional units for implementing any method of the first aspect.
  • the training device of the interpolation filter may include:
  • a label data acquisition module, configured to interpolate the pixels of the sample image at integer pixel positions through the first interpolation filter to obtain the first sub-pixel image of the sample image at the first fractional pixel position;
  • an interpolation module, configured to input the sample image into the second interpolation filter to obtain a second sub-pixel image;
  • a parameter determination module, configured to determine the filter parameter of the second interpolation filter by minimizing a first function representing the difference between the first sub-pixel image and the second sub-pixel image.
  • In an eighth aspect, an embodiment of the present application further provides an interpolation filter training device, including several functional units for implementing any method of the second aspect.
  • the training device of the interpolation filter may include:
  • a label data acquisition module, configured to interpolate the pixels of the sample image at integer pixel positions through the first interpolation filter to obtain the first sub-pixel image of the sample image at the first fractional pixel position;
  • an interpolation module, configured to input the sample image into the second interpolation filter to obtain a second sub-pixel image;
  • an inverse interpolation module, configured to apply a flip operation to the second sub-pixel image, input the result into the third interpolation filter to obtain a first image, and obtain the second image by applying the reverse of the flip operation to the first image, where the second interpolation filter and the third interpolation filter share filter parameters;
  • a parameter determination module, configured to determine the filter parameters according to the first function representing the difference between the first sub-pixel image and the second sub-pixel image and the second function representing the difference between the sample image and the second image.
  • In a ninth aspect, an embodiment of the present application further provides an encoder, including several functional units for implementing any method of the third aspect.
  • the encoder may include:
  • an inter prediction unit, configured to perform inter prediction on the currently encoded image block to obtain motion information of the currently encoded image block, where the motion information of the currently encoded image block points to a fractional pixel position, and the inter prediction unit includes a filter selection unit for determining a target interpolation filter for the currently encoded image block from the set of candidate interpolation filters;
  • an entropy coding unit, configured to encode the currently encoded image block based on the inter prediction mode of the currently encoded image block and the motion information of the currently encoded image block, obtain the coding information, and encode the coding information into the code stream, where the coding information includes indication information of the target interpolation filter; the indication information of the target interpolation filter is used to indicate that the reference block at the fractional pixel position corresponding to the currently encoded image block is obtained by performing sub-pixel interpolation through the target interpolation filter.
  • In a tenth aspect, an embodiment of the present application further provides an encoder, including several functional units for implementing any method of the fourth aspect.
  • the encoder may include:
  • an inter prediction unit, configured to perform inter prediction on the currently encoded image block to obtain motion information of the currently encoded image block, where the motion information of the currently encoded image block points to a fractional pixel position, and the inter prediction unit includes a filter selection unit configured to determine a target interpolation filter for the currently encoded image block from a set of candidate interpolation filters;
  • an entropy coding unit, configured to encode the currently encoded image block based on the inter prediction mode of the currently encoded image block and the motion information of the currently encoded image block, obtain coding information, and encode the coding information into a code stream, where, if the inter prediction mode of the currently encoded image block is a target inter prediction mode, the coding information does not include indication information of the target interpolation filter; if the inter prediction mode of the currently encoded image block is a non-target inter prediction mode, the coding information includes indication information of the target interpolation filter, and the indication information of the target interpolation filter is used to indicate that the currently encoded image block uses the target interpolation filter to perform sub-pixel interpolation. A sketch of this conditional signaling follows.
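  • A hedged sketch of the conditional signaling, with the mode name and list-based bitstream as our illustrative assumptions (merge mode is given as an example of a target mode earlier in this document):

      TARGET_MODE = "merge"  # e.g., a mode in which the decoder derives the filter

      def write_filter_indication(bitstream: list, inter_mode: str, filter_idx: int) -> None:
          # Indication information is written only for non-target modes; in the
          # target inter prediction mode no bits are spent on the filter choice.
          if inter_mode != TARGET_MODE:
              bitstream.append(filter_idx)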
  • In an eleventh aspect, an embodiment of the present application further provides a decoder, including several functional units for implementing any method of the fifth aspect.
  • the decoder may include:
  • an entropy decoding unit, configured to parse the indication information of the target interpolation filter from the code stream and to obtain the motion information of the currently decoded image block, where the motion information points to a fractional pixel position;
  • an inter prediction unit, configured to perform a prediction process on the currently decoded image block based on the motion information of the currently decoded image block, where the prediction process includes: performing sub-pixel interpolation according to the target interpolation filter indicated by the indication information to obtain the prediction block of the currently decoded image block;
  • a reconstruction unit, configured to reconstruct the reconstructed block of the currently decoded image block based on the prediction block of the currently decoded image block.
  • In a twelfth aspect, an embodiment of the present application further provides a decoder, including several functional units for implementing any method of the sixth aspect.
  • the decoder may include:
  • an entropy decoding unit, configured to parse, from the code stream, the information of the currently decoded image block that indicates the inter prediction mode of the currently decoded image block;
  • an inter prediction unit, configured to obtain the motion information of the currently decoded image block, where the motion information points to a fractional pixel position, and, if the inter prediction mode of the current image block is a non-target inter prediction mode, to perform a prediction process on the currently decoded image block based on the motion information of the currently decoded image block, where the prediction process includes: performing sub-pixel interpolation through the target interpolation filter indicated by the indication information of the target interpolation filter parsed from the code stream to obtain the prediction block of the currently decoded image block;
  • a reconstruction unit, configured to reconstruct the currently decoded image block based on the prediction block of the currently decoded image block.
  • In a thirteenth aspect, an embodiment of the present application further provides an interpolation filter training device, including a memory and a processor; the memory is used to store program code, and the processor is used to call the program code to execute part or all of the steps of any interpolation filter training method described in the first aspect or the second aspect.
  • For example, the filter parameter of the second interpolation filter is determined by minimizing a first function representing the difference between the first sub-pixel image and the second sub-pixel image.
  • In some possible implementations, the processor determines the filter parameters according to the first function representing the difference between the first sub-pixel image and the second sub-pixel image and the second function representing the difference between the sample image and the second image; this may include, but is not limited to, the following two implementation manners:
  • First implementation manner: the filter parameters are determined by minimizing a third function, where the third function is a weighted sum of the first function representing the difference between the first sub-pixel image and the second sub-pixel image and the second function representing the difference between the sample image and the second image.
  • Second implementation manner: the filter parameters are determined by alternately minimizing the first function representing the difference between the first sub-pixel image and the second sub-pixel image and the second function representing the difference between the sample image and the second image.
  • The interpolation filter training device of the first aspect and the second aspect may be an encoding device or a compression device, and the above device may be a computer, server, or terminal (e.g., mobile phone, tablet computer, etc.) with data processing functions.
  • In a fourteenth aspect, an embodiment of the present application further provides an encoding device, including a memory and a processor; the memory is used to store program code, and the processor is used to call the program code to perform part or all of the steps of any video image encoding method described in the third aspect or the fourth aspect.
  • the processor is configured to perform: inter prediction on the currently encoded image block to obtain motion information of the currently encoded image block, where the motion information of the currently encoded image block points to a fractional pixel position, and the inter prediction process includes: determining a target interpolation filter for the currently encoded image block from the set of candidate interpolation filters;
  • the coding information includes indication information of the target interpolation filter; the indication information of the target interpolation filter is used to indicate that the reference block at the fractional pixel position corresponding to the currently encoded image block is obtained by performing sub-pixel interpolation through the target interpolation filter.
  • in another implementation, the inter prediction process includes: determining a target interpolation filter for the currently encoded image block from the set of candidate interpolation filters;
  • The encoder described in the fourteenth aspect may also be an encoding device including the encoder, and the encoding device may be a device with data processing functions such as a computer, a server, or a terminal (e.g., mobile phone, tablet computer, etc.).
  • In some possible implementations, one implementation manner in which the processor determines the target interpolation filter for the currently encoded image block from the set of candidate interpolation filters may be: the processor determines the target interpolation filter for the currently encoded image block from the set of candidate interpolation filters according to the rate-distortion cost criterion.
  • In some possible implementations, one implementation manner in which the processor performs inter prediction on the currently encoded image block to obtain motion information of the currently encoded image block may be: determining an integer-pixel reference image block that optimally matches the currently encoded image block, and determining the motion information based on the prediction block, where the interpolation filter used in the interpolation that produced the prediction block is the target interpolation filter.
  • In some possible implementations, the set of candidate interpolation filters includes the second interpolation filter obtained by any of the interpolation filter training methods described in the first aspect or the second aspect.
  • If the target interpolation filter is a second interpolation filter obtained by any of the interpolation filter training methods described in the first aspect or the second aspect, then: the filter parameter of the target interpolation filter is a preset filter parameter; or, the filter parameter of the target interpolation filter is a filter parameter obtained according to the interpolation filter training method of the first aspect or the second aspect.
  • In some possible implementations, the coding information further includes the filter parameters of the target interpolation filter obtained by training; or, the coding information further includes a filter parameter difference, where the filter parameter difference is the difference between the filter parameters of the target interpolation filter trained for the current image unit and the filter parameters of the target interpolation filter trained for the previously encoded image unit.
  • In this way, the processor can perform online training on the second interpolation filter in the set of candidate interpolation filters, so that the interpolation filter can be adjusted in real time according to the content of the currently encoded image unit, thereby improving prediction accuracy.
  • the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
  • In a fifteenth aspect, an embodiment of the present application further provides a decoding device, including a memory and a processor; the memory is used to store program code, and the processor is used to call the program code to perform part or all of the steps of any video image decoding method described in the fifth aspect or the sixth aspect.
  • the indication information of the target interpolation filter is parsed from the code stream;
  • a prediction process is performed on the currently decoded image block based on the motion information of the currently decoded image block, where the prediction process includes: performing sub-pixel interpolation according to the target interpolation filter indicated by the indication information to obtain the prediction block of the currently decoded image block;
  • and the reconstructed block of the currently decoded image block is reconstructed based on the prediction block of the currently decoded image block.
  • if the inter prediction mode of the current image block is a non-target inter prediction mode, sub-pixel interpolation is performed through the target interpolation filter indicated by the indication information parsed from the code stream;
  • if the inter prediction mode of the current image block is a target inter prediction mode, a prediction process is performed on the currently decoded image block based on the motion information of the currently decoded image block, where the prediction process includes: determining a target interpolation filter for the currently decoded image block, and performing sub-pixel interpolation through the target interpolation filter to obtain the prediction block of the currently decoded image block.
  • In some possible implementations, the processor determining the target interpolation filter for the currently decoded image block specifically includes: determining that the interpolation filter used in the decoding process of a previously decoded image block is the target interpolation filter for the currently decoded image block; or, determining that the target interpolation filter for the currently decoded image block is the target interpolation filter indicated by the indication information of the target interpolation filter parsed from the code stream.
  • In some possible implementations, the processor may acquire the motion information of the currently decoded image block in, but not limited to, the following three implementation manners:
  • First implementation manner: the processor parses the index of the motion information of the currently decoded image block from the code stream; further, the motion information of the currently decoded image block is determined based on the index of the motion information and the candidate motion information list of the currently decoded image block.
  • Second implementation manner: the processor parses the motion information index and the motion vector difference of the currently decoded image block from the code stream; determines the motion vector prediction value of the currently decoded image block based on the index of the motion information and the candidate motion information list of the currently decoded image block; and further obtains the motion vector of the currently decoded image block based on the motion vector prediction value and the motion vector difference.
  • Third implementation manner: in a target inter prediction mode (such as merge mode), the processor uses the motion information of a previously decoded image block as the motion information of the currently decoded image block.
  • In some possible implementations, if the target interpolation filter is a second interpolation filter obtained by the interpolation filter training method described in the first aspect or the second aspect, then: the filter parameter of the target interpolation filter is a preset filter parameter; or, the filter parameter of the target interpolation filter is a filter parameter obtained according to the interpolation filter training method described in the first aspect or the second aspect.
  • In some possible implementations, the processor may further execute: configuring the target interpolation filter by using the filter parameters of the target interpolation filter of the currently decoded image unit.
  • In some possible implementations, the processor may further execute: parsing a filter parameter difference from the code stream, where the filter parameter difference is the difference between the filter parameters of the target interpolation filter for the currently decoded image unit and the filter parameters of the target interpolation filter for the previously decoded image unit; obtaining the filter parameters of the target interpolation filter of the currently decoded image unit from the difference; and configuring the target interpolation filter with the filter parameters of the target interpolation filter of the currently decoded image unit.
  • the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
  • In another aspect, an embodiment of the present application further provides a computer-readable storage medium, including program code which, when executed on a computer, causes the computer to execute part or all of the steps of any interpolation filter training method described in the first aspect or the second aspect.
  • In another aspect, an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to perform part or all of the steps of any interpolation filter training method described in the first aspect or the second aspect.
  • In another aspect, an embodiment of the present application further provides a computer-readable storage medium, including program code which, when executed on a computer, causes the computer to execute part or all of the steps of any video image encoding method described in the third aspect or the fourth aspect.
  • In another aspect, an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to perform part or all of the steps of any video image encoding method described in the third aspect or the fourth aspect.
  • In another aspect, an embodiment of the present application further provides a computer-readable storage medium, including program code which, when run on a computer, causes the computer to execute part or all of the steps of any video image decoding method described in the fifth aspect or the sixth aspect.
  • In another aspect, an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to execute part or all of the steps of any video image decoding method described in the fifth aspect or the sixth aspect.
  • FIG. 1 is a schematic block diagram of a video encoding and decoding system according to an embodiment of this application;
  • FIG. 2 is a schematic block diagram of an encoder in an embodiment of this application;
  • FIG. 3 is a schematic block diagram of a decoder in an embodiment of this application;
  • FIG. 5 is a schematic explanatory diagram of the principle of the reversibility of sub-pixel interpolation in an embodiment of the present application;
  • FIG. 6A is a schematic flowchart of an interpolation filter training method in an embodiment of the present application;
  • FIG. 6B is a schematic explanatory diagram of a training process of an interpolation filter in an embodiment of the present application;
  • FIG. 6C is a schematic flowchart of another interpolation filter training method in an embodiment of the present application;
  • FIG. 6D is a schematic explanatory diagram of another training process of an interpolation filter in an embodiment of the present application;
  • FIG. 7 is a schematic flowchart of a video image encoding method in an embodiment of this application;
  • FIG. 8 is a schematic flowchart of another video image encoding method in an embodiment of the present application;
  • FIG. 9 is a schematic flowchart of a video image decoding method in an embodiment of this application;
  • FIG. 10 is a schematic flowchart of another video image decoding method in an embodiment of the present application;
  • FIG. 11 is a schematic flowchart of yet another video image decoding method in an embodiment of the present application;
  • FIG. 12 is a schematic block diagram of an interpolation filter training device provided by an embodiment of the present invention;
  • FIG. 13 is a schematic block diagram of an interpolation filter training device provided by an embodiment of the present invention;
  • FIG. 14 is a schematic block diagram of another interpolation filter training device provided by an embodiment of the present invention;
  • FIG. 15 is a schematic block diagram of another encoder in an embodiment of this application;
  • FIG. 16 is a schematic block diagram of another decoder in an embodiment of the present application;
  • FIG. 17 is a schematic block diagram of an encoding device or a decoding device in an embodiment of the present application.
  • It should be understood that the disclosure made in conjunction with a described method is equally applicable to the corresponding device or system for performing the method, and vice versa.
  • For example, if one or more specific method steps are described, the corresponding device may include one or more units, such as functional units, to perform the described one or more method steps (e.g., one unit performing the one or more steps, or multiple units each performing one or more of the multiple steps), even if such one or more units are not explicitly described or illustrated in the drawings.
  • Conversely, if a specific device is described based on one or more units, such as functional units, the corresponding method may include one step to perform the functionality of the one or more units (e.g., one step performing the functionality of the one or more units, or multiple steps each performing the functionality of one or more of the multiple units), even if such one or more steps are not explicitly described or illustrated in the drawings.
  • The features of the exemplary embodiments and/or aspects described herein may be combined with each other.
  • Video coding usually refers to processing a sequence of pictures that form a video or video sequence.
  • In the field of video coding, the terms "picture", "frame" or "image" may be used as synonyms.
  • Video encoding as used in this application means video encoding or video decoding.
  • Video encoding is performed on the source side, and usually involves processing (e.g., by compressing) the original video picture to reduce the amount of data required to represent the video picture (thereby storing and/or transmitting it more efficiently).
  • Video decoding is performed on the destination side and usually involves inverse processing relative to the encoder to reconstruct the video picture.
  • the “encoding” of video pictures should be understood as referring to “encoding” or “decoding” of video sequences.
  • the combination of the encoding part and the decoding part is also called codec (encoding and decoding).
  • In the case of lossless video coding, the original video picture can be reconstructed; that is, the reconstructed video picture has the same quality as the original video picture (assuming no transmission loss or other data loss during storage or transmission).
  • In the case of lossy video coding, further compression is performed, for example by quantization, to reduce the amount of data required to represent the video picture, and the decoder side cannot fully reconstruct the video picture; that is, the quality of the reconstructed video picture is lower or worse than that of the original video picture.
  • Several video coding standards since H.261 belong to the group of "lossy hybrid video codecs" (i.e., they combine spatial and temporal prediction in the sample domain with 2D transform coding for applying quantization in the transform domain).
  • Each picture of a video sequence is usually partitioned into a set of non-overlapping blocks, and coding is usually performed at the block level.
  • The encoder side usually processes, i.e., encodes, the video at the block (video block) level.
  • the prediction block is generated by spatial (intra-picture) prediction and temporal (inter-picture) prediction.
  • The encoder duplicates the decoder processing loop so that the encoder and the decoder generate the same predictions (e.g., intra-frame prediction and inter-frame prediction) and/or reconstructions for processing, i.e., encoding, subsequent blocks.
  • The terms "block", "image block" and "picture block" may be used as synonyms and refer to a part of a picture or frame.
  • Video coding standards such as High Efficiency Video Coding (HEVC) were developed by the Joint Collaboration Team on Video Coding (JCT-VC) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Motion Picture Experts Group (MPEG); ITU-T H.266/Versatile Video Coding (VVC) is the next-generation video coding standard.
  • In HEVC, a CTU is split into multiple CUs by using a quadtree structure denoted as a coding tree.
  • a decision is made at the CU level whether to use inter-picture (temporal) or intra-picture (spatial) prediction to encode picture regions.
  • Each CU can be further split into one, two, or four PUs according to the PU splitting type; within one PU, the same prediction process is applied, and the relevant information is transmitted to the decoder on a PU basis.
  • The CU may be divided into transform units (TUs) according to another quadtree structure similar to the coding tree used for the CU.
  • A quad-tree and binary tree (QTBT) partition structure may also be used to partition coding blocks.
  • the CU may have a square or rectangular shape.
  • In the QTBT block structure, the coding tree unit (CTU) is first partitioned by a quadtree structure.
  • The quadtree leaf nodes are further partitioned by a binary tree structure.
  • The binary tree leaf node is called a coding unit (CU), and this partitioning is used for prediction and transform processing without any further partitioning.
  • This means that CU, PU and TU have the same block size in the QTBT coding block structure.
  • An "encoded image block" is an image block applied at the encoding end, and a "decoded image block" is an image block applied at the decoding end.
  • A "currently encoded image block" may also be expressed as an "image block currently to be encoded" or a "currently encoded block", etc.; a "currently decoded image block" may also be expressed as an "image block currently to be decoded" or a "currently decoded block", and so on.
  • A "reference block" can also be expressed as a "reference image block"; a "prediction block" can be expressed as a "prediction image block", and in some scenarios can also be expressed as an "optimal matching block" or a "matching block", and so on.
  • the "first interpolation filter” is an interpolation filter provided in the prior art, and may be a fixed coefficient interpolation filter, for example, a bilinear interpolation filter, a bicubic interpolation filter, etc .; or It is a content adaptive interpolation filter or other kinds of interpolation filters.
  • In H.264/AVC, a 6-tap finite impulse response filter is used to generate half-pixel samples, and simple bilinear interpolation is used to generate quarter-pixel samples.
  • The interpolation filter in HEVC has many improvements compared to H.264/AVC: an 8-tap filter is used to generate half-pixel samples, while quarter-pixel samples use a 7-tap interpolation filter.
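  • For concreteness, the half-pixel filter taps quoted in the published standards (not in this patent) are (1, -5, 20, 20, -5, 1)/32 for H.264/AVC and (-1, 4, -11, 40, 40, -11, 4, -1)/64 for HEVC luma. A minimal sketch of applying such a fixed-coefficient filter along one row of samples (rounding and clipping omitted):

      import numpy as np

      H264_HALF = np.array([1, -5, 20, 20, -5, 1]) / 32.0
      HEVC_HALF = np.array([-1, 4, -11, 40, 40, -11, 4, -1]) / 64.0

      def half_pel_row(row: np.ndarray, taps: np.ndarray) -> np.ndarray:
          # Each output is the half-pixel sample centered between the two
          # integer pixels under the middle taps of the filter.
          return np.correlate(row.astype(np.float64), taps, mode="valid")

      row = np.array([100, 102, 110, 130, 128, 120, 115, 113, 110], dtype=np.float64)
      print(half_pel_row(row, H264_HALF))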
  • A typical adaptive interpolation filter estimates the filter coefficients at the encoding end according to the error of motion-compensated prediction, and then encodes the filter coefficients into the code stream.
  • A separable adaptive interpolation filter has also been proposed, which achieves a significant reduction in complexity while largely maintaining the coding performance.
• the "second interpolation filter" and "third interpolation filter" are interpolation filters obtained with the interpolation filter training method provided in the embodiments of the present application.
• the second interpolation filter and/or the third interpolation filter may be a support vector machine (SVM), a neural network (NN), a convolutional neural network (CNN), or take other forms; this embodiment of the present application is not limited in this respect.
  • SVM support vector machine
  • NN neural network
  • CNN convolutional neural network
  • the "target interpolation filter” is the selected interpolation filter in the set of candidate filters.
• the "candidate interpolation filter set" may include one or more interpolation filters of different types, and may include, but is not limited to, the second interpolation filter described herein.
  • the plurality of interpolation filters included in the candidate filter set may not include the second interpolation filter.
  • the motion information may include a motion vector, which is an important parameter in the inter prediction process, which represents the spatial displacement of the previously coded image block relative to the current coded image block.
  • Motion estimation methods such as motion search can be used to obtain motion vectors.
• the bits representing the motion vector were originally included in the encoded bitstream to allow the decoder to reproduce the prediction block and thereby obtain the reconstructed block.
• it was later proposed to use a reference motion vector to differentially encode the motion vector, that is, instead of encoding the entire motion vector, only the difference between the motion vector and the reference motion vector is encoded.
• the reference motion vector may be selected from previously used motion vectors in the video stream; selecting a previously used motion vector to encode the current motion vector can further reduce the number of bits included in the encoded video bitstream.
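• as a minimal sketch of this differential coding idea (the names are illustrative, not the codec's actual syntax), the encoder transmits only the motion vector difference and the decoder adds it back to the reference motion vector:

```python
# Hedged sketch of differential MV coding; MVs are integer (x, y) pairs in
# quarter-pel units, and the names are illustrative, not codec syntax.

def encode_mvd(mv, ref_mv):
    """Encoder side: code only the difference to the reference MV."""
    return (mv[0] - ref_mv[0], mv[1] - ref_mv[1])

def decode_mv(mvd, ref_mv):
    """Decoder side: reproduce the MV from the coded difference."""
    return (mvd[0] + ref_mv[0], mvd[1] + ref_mv[1])

ref_mv = (12, -3)                 # a previously used motion vector
mv = (14, -3)                     # motion vector of the current block
mvd = encode_mvd(mv, ref_mv)      # (2, 0): cheaper to code than (14, -3)
assert decode_mv(mvd, ref_mv) == mv
```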
  • Embodiments of the encoder 100, decoder 200, and encoding system 300 are described below based on FIGS. 1 to 3.
  • FIG. 1 is a conceptual or schematic block diagram illustrating an exemplary encoding system 300, for example, a video encoding system 300 that can utilize the technology of the present application (this disclosure).
  • the encoder 100 eg, video encoder 100
  • the decoder 200 eg, video decoder 200
  • the encoding system 300 includes a source device 310 for providing encoded data 330, for example, an encoded picture 330, to a destination device 320 that decodes the encoded data 330, for example.
• the source device 310 includes the encoder 100 and, optionally, may also include a picture source 312, a preprocessing unit 314 (for example, a picture preprocessing unit 314), and a communication interface or communication unit 318.
• the image source 312 may include or be any type of image capture device, for example for capturing a real-world image, and/or any type of image generating device (for screen content encoding, text on the screen is also considered part of the image to be encoded), for example a computer graphics processor for generating computer-animated pictures, or any type of device for acquiring and/or providing real-world pictures or computer-animated pictures (for example, screen content or virtual reality (VR) pictures), and/or any combination thereof (for example, augmented reality (AR) pictures).
  • the (digital) picture is or can be regarded as a two-dimensional array or matrix of sampling points with luminance values.
  • the sampling points in the array may also be called pixels (short for picture element) or pixels (pel).
  • the number of sampling points in the horizontal or vertical direction (or axis) of the array or picture defines the size and / or resolution of the picture.
• three color components are usually used, that is, a picture can be represented as, or contain, three sampling arrays.
  • the picture includes corresponding red, green and blue sampling arrays.
• each pixel is usually expressed in a luma/chroma format or color space, for example YCbCr, which includes the luminance component indicated by Y (sometimes also indicated by L) and the two chrominance components indicated by Cb and Cr.
• the luminance (luma) component Y represents the brightness or gray-level intensity (for example, the two are the same in a gray-scale picture), while the two chrominance (chroma) components Cb and Cr represent the chrominance or color information components.
• a YCbCr-format picture includes a luminance sampling array of luminance sample values (Y) and two chrominance sampling arrays of chrominance values (Cb and Cr). RGB-format pictures can be converted or transformed into YCbCr format and vice versa; this process is also called color transformation or conversion. If the picture is black and white, the picture may include only the luminance sampling array.
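• as a minimal sketch of such a color conversion (the BT.601 full-range coefficients used here, as in JPEG, are an illustrative choice; the text above does not fix a particular matrix):

```python
import numpy as np

# Hedged sketch of an RGB -> YCbCr conversion. The BT.601 full-range
# coefficients below are an illustrative choice; the description above
# does not fix a particular conversion matrix.

def rgb_to_ycbcr(rgb):
    """rgb: float array of shape (H, W, 3) with values in [0, 255]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (luma) component
    cb = 0.564 * (b - y) + 128.0             # blue-difference chroma
    cr = 0.713 * (r - y) + 128.0             # red-difference chroma
    return np.stack([y, cb, cr], axis=-1)
```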
• the picture source 312 may be, for example, a camera for capturing pictures, a memory such as a picture memory that includes or stores previously captured or generated pictures, and/or any type of (internal or external) interface.
  • the camera may be, for example, an integrated camera local or integrated in the source device, and the memory may be an integrated memory local or for example integrated in the source device.
  • the interface may be, for example, an external interface that receives pictures from an external video source.
  • the external video source is, for example, an external picture capture device, such as a camera, external memory, or external picture generation device.
• the external picture generation device is, for example, an external computer graphics processor, a computer, or a server.
  • the interface may be any type of interface according to any proprietary or standardized interface protocol, such as a wired or wireless interface, an optical interface.
  • the interface for acquiring the picture data 313 may be the same interface as the communication interface 318 or a part of the communication interface 318.
  • the picture or picture data 313 (for example, the video data 312) may also be referred to as the original picture or the original picture data 313.
  • the pre-processing unit 314 is used to receive (original) picture data 313 and perform pre-processing on the picture data 313 to obtain the pre-processed picture 315 or the pre-processed picture data 315.
  • the preprocessing performed by the preprocessing unit 314 may include trimming, color format conversion (for example, conversion from RGB to YCbCr), color adjustment, or denoising. It can be understood that the pre-processing unit 314 may be an optional component.
  • An encoder 100 (eg, video encoder 100) is used to receive pre-processed picture data 315 and provide encoded picture data 171 (details will be described further below, for example, based on FIG. 2, FIG. 7, or FIG. 8).
  • the encoder 100 may be used to perform a video image encoding method, and in another embodiment, the encoder 100 may also be used for training of interpolation filters.
• the communication interface 318 of the source device 310 may be used to receive the encoded picture data 171 and transmit it to another device, for example the destination device 320 or any other device, for storage or direct reconstruction, or to process the encoded picture data 171 before correspondingly storing the encoded data 330 and/or before transmitting the encoded data 330 to another device, such as the destination device 320 or any other device, for decoding or storage.
  • the destination device 320 includes a decoder 200 (for example, a video decoder 200), and optionally, may also include a communication interface or communication unit 322, a post-processing unit 326, and a display device 328.
• the communication interface 322 of the destination device 320 is used, for example, to receive the encoded picture data 171 or the encoded data 330 directly from the source device 310 or from any other source, for example a storage device such as an encoded picture data storage device.
• the communication interface 318 and the communication interface 322 may be used to transmit or receive the encoded picture data 171 or the encoded data 330 through a direct communication link between the source device 310 and the destination device 320, or through any type of network.
  • the link is, for example, a direct wired or wireless connection, and any kind of network is, for example, a wired or wireless network or any combination thereof, or any kind of private and public networks, or any combination thereof.
  • the communication interface 318 may be used, for example, to encapsulate the encoded picture data 171 into a suitable format, such as a packet, for transmission on a communication link or communication network.
  • the communication interface 322 forming a corresponding part of the communication interface 318 may be used, for example, to depacketize the encoded data 330 to obtain the encoded picture data 171.
• both the communication interface 318 and the communication interface 322 may be configured as one-way communication interfaces, as indicated by the arrow for the encoded picture data 330 from the source device 310 to the destination device 320 in FIG. 1, or as two-way communication interfaces, and may be used, for example, to send and receive messages to establish a connection, and to confirm and exchange any other information related to the communication link and/or the data transmission, such as the transmission of encoded picture data.
  • the decoder 200 is used to receive the encoded picture data 171 and provide the decoded picture data 231 or the decoded picture 231 (details will be described further below, for example, based on FIG. 3, FIG. 9, FIG. 10, or FIG. 11). In one example, the decoder 200 may be used to perform a video image decoding method described below.
• the post-processor 326 of the destination device 320 is used to post-process the decoded picture data 231 (also referred to as reconstructed picture data), for example the decoded picture 131, to obtain post-processed picture data 327, for example a post-processed picture 327.
• the post-processing performed by the post-processing unit 326 may include, for example, color format conversion (e.g., conversion from YCbCr to RGB), color adjustment, trimming, or resampling, or any other processing for, for example, preparing the decoded picture data 231 for display by the display device 328.
  • the display device 328 of the destination device 320 is used to receive post-processed picture data 327 to display pictures to, for example, a user or a viewer.
  • the display device 328 may be or may include any type of display for presenting reconstructed pictures, for example, an integrated or external display or monitor.
• the display may include a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a plasma display, a projector, a micro-LED display, liquid crystal on silicon (LCoS), a digital light processor (DLP), or any other type of display.
  • FIG. 1 illustrates the source device 310 and the destination device 320 as separate devices
• device embodiments may also include both the source device 310 and the destination device 320, or the functionality of both, i.e., the source device 310 or corresponding functionality and the destination device 320 or corresponding functionality.
• in such embodiments, the same hardware and/or software, or separate hardware and/or software, or any combination thereof, may be used to implement the source device 310 or corresponding functionality and the destination device 320 or corresponding functionality.
• both the encoder 100 and the decoder 200 may be implemented as any of various suitable circuits, for example, one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, hardware, or any combination thereof.
• the device may store the instructions of the software in a suitable non-transitory computer-readable storage medium, and may use one or more processors to execute the instructions in hardware to perform the techniques of the present disclosure. Any one of the foregoing (including hardware, software, a combination of hardware and software, etc.) may be regarded as one or more processors.
• each of the video encoder 100 and the video decoder 200 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (codec) in the corresponding device.
• the source device 310 may be referred to as a video encoding device or video encoding apparatus.
• the destination device 320 may be referred to as a video decoding device or video decoding apparatus.
• the source device 310 and the destination device 320 may be examples of video coding devices or video coding apparatuses.
• the source device 310 and the destination device 320 may include any of a variety of devices, including any type of handheld or stationary device, for example, notebook or laptop computers, mobile phones, smartphones, tablets or tablet computers, cameras, desktop computers, set-top boxes, televisions, display devices, digital media players, video game consoles, video streaming devices (such as content service servers or content distribution servers), broadcast receiver devices, broadcast transmitter devices, and the like, and may use no operating system or any type of operating system.
  • source device 310 and destination device 320 may be equipped for wireless communication. Therefore, the source device 310 and the destination device 320 may be wireless communication devices.
• the video encoding system 300 shown in FIG. 1 is only an example, and the technology of the present application may be applied to video coding settings (e.g., video encoding or video decoding) that do not necessarily include any data communication between the encoding and decoding devices.
  • data can be retrieved from local storage, streamed on the network, and so on.
  • the video encoding device may encode the data and store the data to the memory, and / or the video decoding device may retrieve the data from the memory and decode the data.
  • encoding and decoding are performed by devices that do not communicate with each other but only encode data to and / or retrieve data from memory and decode the data.
  • video decoder 200 may be used to perform the reverse process.
  • the video decoder 200 may be used to receive and parse such syntax elements, and decode relevant video data accordingly.
  • the video encoder 100 may entropy encode one or more syntax elements, such as indication information defining the target filter and parameter information of the interpolation filter, into an encoded video bitstream (also referred to as a codestream).
  • video decoder 200 may parse such syntax elements and decode relevant video data accordingly.
• the video encoder 100 includes a residual calculation unit 104, a transform processing unit 106, a quantization unit 108, an inverse quantization unit 110, an inverse transform processing unit 112, a reconstruction unit 114, a buffer 116, a loop filter unit 120, a decoded picture buffer (DPB) 130, a prediction processing unit 160, and an entropy encoding unit 170.
  • the prediction processing unit 160 may include an inter prediction unit 144, an intra prediction unit 154, and a mode selection unit 162.
  • the inter prediction unit 144 may include a motion estimation unit and a motion compensation unit (not shown).
  • the video encoder 100 shown in FIG. 2 may also be referred to as a hybrid video encoder or a video encoder based on a hybrid video codec.
• the residual calculation unit 104, the transform processing unit 106, the quantization unit 108, the prediction processing unit 160, and the entropy encoding unit 170 form the forward signal path of the encoder 100, while, for example, the inverse quantization unit 110, the inverse transform processing unit 112, the reconstruction unit 114, the buffer 116, the loop filter 120, the decoded picture buffer (DPB) 130, and the prediction processing unit 160 form the backward signal path of the encoder, which corresponds to the signal path of the decoder (see decoder 200 in FIG. 3).
• the encoder 100 receives a picture 101, or a block 103 of the picture 101, through, for example, an input 102; the picture 101 is, for example, a picture in a sequence of pictures forming a video or video sequence.
  • the picture block 103 may also be called a current picture block or a picture block to be coded
• the picture 101 may be called a current picture or a picture to be coded (especially when distinguishing the current picture from other pictures in video coding, such as previously encoded and/or decoded pictures of the same video sequence, i.e., the video sequence that also includes the current picture).
  • An embodiment of the encoder 100 may include a division unit (not shown in FIG. 2) for dividing the picture 101 into a plurality of blocks such as the block 103, usually into a plurality of non-overlapping blocks.
• the segmentation unit may be used to apply the same block size, and a corresponding grid defining the block size, to all pictures in the video sequence, or to change the block size between pictures, subsets, or groups of pictures, and to divide each picture into corresponding blocks.
  • the prediction processing unit 160 of the video encoder 100 may be used to perform any combination of the above-mentioned segmentation techniques.
  • block 103 is also or can be regarded as a two-dimensional array or matrix of sampling points with luminance values (sample values), although its size is smaller than that of picture 101.
• the block 103 may include, for example, one sampling array (for example, the luminance array in the case of a black-and-white picture 101), three sampling arrays (for example, one luminance array and two chrominance arrays in the case of a color picture), or any other number and/or type of arrays depending on the color format applied.
  • the number of sampling points in the horizontal and vertical direction (or axis) of the block 103 defines the size of the block 103.
  • the encoder 100 shown in FIG. 2 is used to encode the picture 101 block by block, for example, to perform encoding and prediction on each block 103.
  • the residual calculation unit 104 is used to calculate the residual block 105 based on the picture block 103 and the prediction block 165 (further details of the prediction block 165 are provided below), for example, by subtracting the prediction of the sample value of the picture block 103 by sample (pixel by pixel) The sample values of block 165 to obtain the residual block 105 in the sample domain.
  • the transform processing unit 106 is used to apply a transform such as discrete cosine transform (DCT) or discrete sine transform (DST) on the sample values of the residual block 105 to obtain transform coefficients 107 in the transform domain .
  • the transform coefficient 107 may also be called a transform residual coefficient, and represents a residual block 105 in the transform domain.
  • the transform processing unit 106 may be used to apply integer approximations of DCT / DST, such as the transform specified by HEVC / H.265. Compared with the orthogonal DCT transform, this integer approximation is usually scaled by a factor. In order to maintain the norm of the residual block processed by the forward and inverse transform, an additional scaling factor is applied as part of the transform process.
  • the scaling factor is usually selected based on certain constraints, for example, the scaling factor is a power of two used for the shift operation, the bit depth of the transform coefficient, the accuracy, and the trade-off between implementation cost and so on.
• for example, a specific scaling factor is specified for the inverse transform by the inverse transform processing unit 212 on the decoder 200 side (and for the corresponding inverse transform by the inverse transform processing unit 112 on the encoder 100 side), and, accordingly, a corresponding scaling factor is specified for the forward transform by the transform processing unit 106 on the encoder 100 side.
  • the quantization unit 108 is used to quantize the transform coefficient 107, for example, by applying scalar quantization or vector quantization, to obtain the quantized transform coefficient 109.
  • the quantized transform coefficient 109 may also be referred to as a quantized residual coefficient 109.
  • the quantization process can reduce the bit depth associated with some or all of the transform coefficients 107. For example, n-bit transform coefficients can be rounded down to m-bit transform coefficients during quantization, where n is greater than m.
  • the degree of quantization can be modified by adjusting the quantization parameter (QP). For example, for scalar quantization, different scales can be applied to achieve thinner or coarser quantization.
  • QP quantization parameter
  • a smaller quantization step size corresponds to a finer quantization
  • a larger quantization step size corresponds to a coarser quantization
  • a suitable quantization step size can be indicated by a quantization parameter (QP).
  • the quantization parameter may be an index of a predefined set of suitable quantization steps.
  • smaller quantization parameters may correspond to fine quantization (smaller quantization step size)
  • larger quantization parameters may correspond to coarse quantization (larger quantization step size)
• quantization may include division by the quantization step size, while the corresponding inverse quantization performed by, for example, the inverse quantization unit 110 may include multiplication by the quantization step size.
  • Embodiments according to some standards such as HEVC may use quantization parameters to determine the quantization step size.
• the quantization step size can be calculated from the quantization parameter using a fixed-point approximation of an equation that includes a division. Additional scaling factors can be introduced for quantization and inverse quantization to restore the norm of the residual block, which may be modified because of the scale used in the fixed-point approximation of the equation relating the quantization step size to the quantization parameter.
  • the scale of inverse transform and inverse quantization may be combined.
  • a custom quantization table can be used and signaled from the encoder to the decoder in the bitstream, for example.
  • Quantization is a lossy operation, where the larger the quantization step, the greater the loss.
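• as a minimal sketch of this QP-driven scalar quantization (the mapping Qstep = 2^((QP - 4) / 6) is the commonly cited HEVC relation; real codecs use fixed-point approximations rather than this floating-point model):

```python
import numpy as np

# Hedged sketch of QP-driven scalar quantization. The step mapping
# Qstep = 2^((QP - 4) / 6) is the commonly cited HEVC relation.

def q_step(qp):
    return 2.0 ** ((qp - 4) / 6.0)      # step size doubles every 6 QP values

def quantize(coeffs, qp):
    return np.round(coeffs / q_step(qp)).astype(np.int32)

def dequantize(levels, qp):
    return levels * q_step(qp)          # the loss grows with the step size

coeffs = np.array([100.0, -37.0, 5.0, 0.6])
for qp in (22, 37):                     # larger QP -> coarser quantization
    rec = dequantize(quantize(coeffs, qp), qp)
    print(qp, np.abs(coeffs - rec).max())
```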
  • the inverse quantization unit 110 is used to apply the inverse quantization of the quantization unit 108 on the quantized coefficient to obtain the inverse quantization coefficient 111, for example, based on or using the same quantization step size as the quantization unit 108, apply the quantization scheme applied by the quantization unit 108 Inverse quantization scheme.
• the inverse quantized coefficient 111 may also be referred to as the inverse quantized residual coefficient 111; it corresponds to the transform coefficient 107, although, due to the loss caused by quantization, it is usually not identical to the transform coefficient.
• the inverse transform processing unit 112 is used to apply the inverse of the transform applied by the transform processing unit 106, for example an inverse discrete cosine transform (DCT) or an inverse discrete sine transform (DST), to obtain the inverse transform block 113 in the sample domain.
  • the inverse transform block 113 may also be referred to as an inverse transform dequantized block 113 or an inverse transform residual block 113.
  • the reconstruction unit 114 (eg, the summer 114) is used to add the inverse transform block 113 (ie, the reconstructed residual block 113) to the prediction block 165 to obtain the reconstructed block 115 in the sample domain, for example, The sample value of the reconstructed residual block 113 and the sample value of the prediction block 165 are added.
  • a buffer unit 116 such as a line buffer 116 is used to buffer or store the reconstructed block 115 and corresponding sample values for, for example, intra prediction.
  • the encoder may be used to use the unfiltered reconstructed blocks and / or corresponding sample values stored in the buffer unit 116 for any type of estimation and / or prediction, such as intra prediction.
  • an embodiment of the encoder 100 may be configured such that the buffer unit 116 is used not only for storing the reconstructed block 115 for intra prediction 154, but also for the loop filter unit 120 (not shown in FIG. 2) Out), and / or, for example, causing the buffer unit 116 and the decoded picture buffer unit 130 to form a buffer.
  • Other embodiments may be used to use the filtered block 121 and / or blocks or samples from the decoded picture buffer 130 (neither shown in FIG. 2) as an input or basis for intra prediction 154.
• the loop filter unit 120 (or simply "loop filter" 120) is used to filter the reconstructed block 115 to obtain the filtered block 121, so as to smooth pixel transitions or otherwise improve video quality.
• the loop filter unit 120 is intended to represent one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or other filters such as a bilateral filter, an adaptive loop filter (ALF), a sharpening or smoothing filter, or a collaborative filter.
  • the loop filter unit 120 is shown as an in-loop filter in FIG. 2, in other configurations, the loop filter unit 120 may be implemented as a post-loop filter.
  • the filtered block 121 may also be referred to as the filtered reconstructed block 121.
  • the decoded picture buffer 130 may store the reconstructed coding block after the loop filter unit 120 performs a filtering operation on the reconstructed coding block.
• embodiments of the encoder 100 may be used to output loop filter parameters (e.g., sample adaptive offset information), for example directly, or after entropy encoding by the entropy encoding unit 170 or any other entropy coding unit, so that, for example, the decoder 200 can receive and apply the same loop filter parameters for decoding.
  • the decoded picture buffer (DPB) 130 may be a reference picture memory for storing reference picture data for the video encoder 100 to encode video data.
• the DPB 130 may be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), and resistive RAM (RRAM), or other types of memory devices.
  • DRAM dynamic random access memory
  • the DPB 130 and the buffer 116 may be provided by the same memory device or separate memory devices.
  • a decoded picture buffer (DPB) 130 is used to store the filtered block 121.
• the decoded picture buffer 130 may be further used to store other previously filtered blocks of the same current picture or of different pictures, such as previously reconstructed pictures, for example the previously reconstructed and filtered block 121, and may provide complete previously reconstructed, i.e., decoded, pictures (and corresponding reference blocks and samples) and/or a partially reconstructed current picture (and corresponding reference blocks and samples), for example for inter prediction.
  • a decoded picture buffer (DPB) 130 is used to store the reconstructed block 115 if the reconstructed block 115 is reconstructed without in-loop filtering.
• the prediction processing unit 160, also known as the block prediction processing unit 160, is used to receive or acquire the block 103 (the current encoded image block 103 of the current picture 101) and reconstructed picture data, such as reference samples of the same (current) picture from the buffer 116 and/or reference picture data 231 of one or more previously decoded pictures from the decoded picture buffer 130, and to process such data for prediction, that is, to provide a prediction block 165, which may be an inter prediction block 145 or an intra prediction block 155.
• the inter prediction unit 145 may include a candidate interpolation filter set 151 and a filter selection unit 152; the candidate interpolation filter set 151 may include multiple kinds of interpolation filters, for example a discrete cosine transform-based interpolation filter (DCT-based interpolation filter, DCTIF) and an invertibility-based interpolation filter (also referred to herein as InvIF).
  • DCT-based interpolation filter DCTIF
  • invertibility-based interpolation filter also referred to as InvIF in this article.
  • InvIF refers to the interpolation filter obtained through the interpolation filter training method described in FIG. 6A or FIG. 6C of the present application.
• the filter selection unit 152 is used, by itself or in combination with other units (such as the transform processing unit 106, the quantization unit 108, the inverse transform processing unit 112, the reconstruction unit 114, the loop filter unit 120, etc.), to select an interpolation filter (for example, DCTIF or InvIF) from the candidate interpolation filter set 151, and/or to provide the indication information of the selected interpolation filter (also referred to herein as the target interpolation filter) to the entropy coding unit for entropy encoding.
• the candidate interpolation filter set 151 may include multiple types of interpolation filters, which may all be interpolation filters provided in the prior art, or may include the interpolation filter obtained by the interpolation filter training method shown in FIG. 6A or FIG. 6C provided by the embodiments of the present application; the set may also include a single interpolation filter obtained by that training method.
  • the candidate interpolation filter set 151 may be used in the process of motion estimation. In another embodiment of the present application, the candidate interpolation filter set 151 may also be used in other scenarios that require interpolation operations.
  • the encoder 100 may further include a training unit (not shown in FIG. 2) for training the interpolation filter.
• the training unit may be provided inside or outside the inter prediction module 145. It can be understood that the training unit may be provided in the inter prediction unit 145, or may be provided at another position in the encoder 100 and coupled with one or more interpolation filters in the inter prediction unit, so as to implement the training of the interpolation filter, the update of the filter parameters of the interpolation filter, and so on. It can also be understood that the training unit may be located outside the encoder, or in another device (a device that does not include the encoder 100); in that case the encoder may configure the interpolation filter by receiving the filter parameters.
• the mode selection unit 162 may be used to select a prediction mode (e.g., an intra or inter prediction mode) and/or the corresponding prediction block 145 or 155 to be used as the prediction block 165 for calculating the residual block 105 and for reconstructing the reconstructed block 115.
  • An embodiment of the mode selection unit 162 may be used to select a prediction mode (for example, from those prediction modes supported by the prediction processing unit 160), which provides the best match or the minimum residual (the minimum residual means Better compression in transmission or storage), or provide minimum signaling overhead (minimum signaling overhead means better compression in transmission or storage), or consider or balance both at the same time.
• the mode selection unit 162 may be used to determine the prediction mode based on rate-distortion optimization (RDO), that is, to select the prediction mode that provides the minimum rate-distortion cost, or to select a prediction mode whose related rate distortion at least meets a prediction mode selection criterion.
  • RDO rate distortion optimization
• the prediction process (performed, for example, by the prediction processing unit 160) and the mode selection (performed, for example, by the mode selection unit 162) of an example of the encoder 100 are explained below: the encoder 100 is used to determine or select the best or optimal prediction mode from a (predetermined) set of prediction modes.
  • the prediction mode set may include, for example, intra prediction modes and / or inter prediction modes.
• the set of intra prediction modes may include 35 different intra prediction modes, for example non-directional modes such as the DC (or mean) mode and the planar mode, or directional modes as defined in H.265; or it may include 67 different intra prediction modes, for example non-directional modes such as the DC (or mean) mode and the planar mode, or directional modes as defined in the developing H.266.
• the set of inter prediction modes depends on the available reference pictures (i.e., for example, the aforementioned at least partially decoded pictures stored in the DPB 130) and on other inter prediction parameters, for example on whether the entire reference picture or only a part of it (for example, a search window area surrounding the area of the current block) is used to search for the best matching reference block, and/or on whether pixel interpolation, such as half-pixel and/or quarter-pixel interpolation, is applied.
  • skip mode and / or direct mode can also be applied.
• the prediction processing unit 160 may be further used to split the block 103 into smaller block partitions or sub-blocks, for example by iteratively using quad-tree (QT) partitioning, binary-tree (BT) partitioning, or triple-tree (TT) partitioning, or any combination thereof, and to perform prediction for each of the block partitions or sub-blocks, where the mode selection includes selecting the tree structure of the partitioned block 103 and the prediction mode applied to each of the block partitions or sub-blocks.
  • QT quad-tree
  • BT binary-tree
  • TT Triple-tree
  • the inter prediction unit 144 may include a motion estimation (ME) unit (not shown in FIG. 2) and a motion compensation (MC) unit (not shown in FIG. 2).
• the motion estimation unit is used to receive or acquire the picture block 103 (the current picture block 103 of the current picture 101) and the decoded picture 131, or at least one or more previously reconstructed blocks, for example reconstructed blocks of one or more other/different previously decoded pictures 231, to perform motion estimation.
  • the video sequence may include the current picture and the previously decoded picture 231, or in other words, the current picture and the previously decoded picture 231 may be part of the picture sequence forming the video sequence, or form the picture sequence.
• the encoder 100 may be used to select a reference block from multiple reference blocks of the same picture or of different pictures among multiple other pictures, and to provide the reference picture (or reference picture index) and/or the offset (spatial offset) between the position (X, Y coordinates) of the reference block and the position of the current block to the motion estimation unit (not shown in FIG. 2) as inter prediction parameters.
  • This offset is also known as motion vector (MV).
  • the motion estimation unit may include a candidate interpolation filter set, and the motion estimation unit is further used to select a target interpolation filter for the current encoded image block from the candidate interpolation filter set according to a rate-distortion cost criterion.
• the motion estimation unit is further used to perform sub-pixel interpolation, through each interpolation filter in the candidate interpolation filter set, on the integer-pixel reference image block that optimally matches the current encoded image block, to obtain N sub-pixel reference image blocks; the prediction block that best matches the current encoded image block is then determined from among the integer-pixel reference image block and the N sub-pixel reference image blocks, and the interpolation filter from the candidate interpolation filter set that produced this prediction block is the target interpolation filter, as sketched below.
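• a minimal sketch of this selection under a rate-distortion cost J = D + λ·R (the candidate filters, the SAD distortion measure, and the flat signalling-cost model are illustrative assumptions, not the patent's exact criterion):

```python
import numpy as np

# Hedged sketch of target-filter selection by a rate-distortion cost
# J = D + lambda * R over the candidate interpolation filter set.

def sad(a, b):
    """Sum of absolute differences, used here as the distortion D."""
    return np.abs(a.astype(np.int64) - b.astype(np.int64)).sum()

def select_target_filter(cur_block, int_ref_block, candidates, lam, bits=2):
    """candidates: dict mapping a filter name to a callable that returns the
    sub-pixel reference block interpolated from int_ref_block."""
    # the integer-pixel reference block also competes as a prediction
    best = (None, int_ref_block, sad(cur_block, int_ref_block))
    for name, interp in candidates.items():
        pred = interp(int_ref_block)               # one sub-pixel reference block
        cost = sad(cur_block, pred) + lam * bits   # J = D + lambda * R
        if cost < best[2]:
            best = (name, pred, cost)
    return best   # (target filter name, prediction block, RD cost)
```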
  • the motion compensation unit is used to acquire, for example, receive inter prediction parameters, and perform inter prediction based on or using inter prediction parameters to obtain inter prediction blocks 145.
  • the motion compensation performed by the motion compensation unit may include extracting or generating a prediction block based on a motion / block vector determined by motion estimation (possibly performing interpolation of sub-pixel accuracy). Interpolation filtering can generate additional pixel samples from known pixel samples, potentially increasing the number of candidate prediction blocks that can be used to encode picture blocks.
  • the motion compensation unit 146 may locate the prediction block pointed to by the motion vector in a reference picture list. Motion compensation unit 146 may also generate syntax elements associated with blocks and video slices for use by video decoder 200 when decoding picture blocks of video slices.
  • the intra prediction unit 154 is used to obtain, for example, a picture block 103 (current picture block) of the same picture and one or more previously reconstructed blocks, such as reconstructed neighboring blocks, for intra estimation.
  • the encoder 100 may be used to select an intra prediction mode from multiple (predetermined) intra prediction modes.
  • Embodiments of the encoder 100 may be used to select an intra prediction mode based on optimization criteria, for example, based on a minimum residual (eg, an intra prediction mode that provides the prediction block 155 most similar to the current picture block 103) or minimum rate distortion.
• the intra prediction unit 154 is further used to determine the intra prediction block 155 based on the intra prediction parameters of the selected intra prediction mode. In any case, after selecting the intra prediction mode for a block, the intra prediction unit 154 is also used to provide the intra prediction parameters to the entropy encoding unit 170, that is, to provide information indicating the selected intra prediction mode for the block. In one example, the intra prediction unit 154 may be used to perform any combination of the intra prediction techniques described below.
• the entropy coding unit 170 is used to apply an entropy coding algorithm or scheme (for example, a variable length coding (VLC) scheme, a context-adaptive VLC (CAVLC) scheme, an arithmetic coding scheme, context-adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partitioning entropy (PIPE) coding, or another entropy coding method or technique) to some or all of the quantized residual coefficients 109, inter prediction parameters, intra prediction parameters, and/or loop filter parameters (or to apply none of them) to obtain encoded picture data 171 that can be output through the output 172, for example in the form of an encoded bitstream 171.
  • VLC variable length coding
• CABAC context-adaptive binary arithmetic coding
  • SBAC syntax-based context-adaptive binary arithmetic coding
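• as a minimal sketch of one such VLC scheme (order-0 exponential-Golomb coding, used for syntax elements in H.264/HEVC-style bitstreams; shown for illustration, not as this application's entropy coder):

```python
# Hedged sketch of order-0 exponential-Golomb coding, a simple VLC scheme.

def exp_golomb_encode(v):
    """Unsigned exp-Golomb: non-negative integer -> bit string."""
    code = bin(v + 1)[2:]                  # binary of v + 1
    return '0' * (len(code) - 1) + code    # leading-zero prefix + code

def exp_golomb_decode(bits):
    zeros = 0
    while bits[zeros] == '0':
        zeros += 1
    value = int(bits[zeros:2 * zeros + 1], 2) - 1
    return value, bits[2 * zeros + 1:]     # decoded value and remaining bits

assert exp_golomb_encode(3) == '00100'
assert exp_golomb_decode('00100')[0] == 3
```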
  • the encoded bitstream may be transmitted to the video decoder 200 or archived for later transmission or retrieval by the video decoder 200.
  • the entropy encoding unit 170 may also be used to entropy encode other syntax elements of the current video slice being encoded.
  • the entropy encoding unit 170 is further used to entropy encode the indication information of the target interpolation filter and / or the filter parameters of the interpolation filter.
• the training unit is configured to train, based on sample images, the machine-learning-based interpolation filter included in the inter prediction unit 144, so as to determine or optimize the filter parameters of the interpolation filter.
  • video encoder 100 may be used to encode video streams.
  • the non-transform based encoder 100 can directly quantize the residual signal without the transform processing unit 106 for certain blocks or frames.
  • the encoder 100 may have a quantization unit 108 and an inverse quantization unit 110 combined into a single unit.
  • FIG. 3 shows an exemplary video decoder 200 for implementing the technology of the present application, that is, a video image decoding method.
  • the video decoder 200 is used to receive encoded picture data (eg, encoded bitstream) 171, for example, encoded by the encoder 100, to obtain the decoded picture 131.
  • video decoder 200 receives video data from video encoder 100, such as an encoded video bitstream (also referred to as a codestream) and associated syntax elements that represent picture blocks of the encoded video slice.
  • the decoder 200 includes an entropy decoding unit 204, an inverse quantization unit 210, an inverse transform processing unit 212, a reconstruction unit 214 (such as a summer 214), a buffer 216, a loop filter 220, a Decode picture buffer 230 and prediction processing unit 260.
  • the prediction processing unit 260 may include an inter prediction unit 244, an intra prediction unit 254, and a mode selection unit 262.
  • video decoder 200 may perform a decoding pass that is generally reciprocal to the encoding pass described with reference to video encoder 100 of FIG. 2.
• the entropy decoding unit 204 is used to perform entropy decoding on the encoded picture data (for example, the code stream or the current decoded image block) 171 to obtain, for example, quantized coefficients 209 and/or decoded encoding parameters (also called encoding information, not shown in FIG. 3), for example any one or all of inter prediction parameters, intra prediction parameters, loop filter parameters, indication information of the target filter, filter parameters, and/or information indicating the inter prediction mode.
  • the entropy decoding unit 204 is further used to forward syntax elements such as inter prediction parameters, intra prediction parameters, indication information of the target filter, filter parameters, and / or information indicating inter prediction modes to the prediction processing unit 260.
  • the video decoder 200 may receive syntax elements at the video slice level and / or the video block level.
  • the inverse quantization unit 210 may be functionally the same as the inverse quantization unit 110, the inverse transform processing unit 212 may be functionally the same as the inverse transform processing unit 112, the reconstruction unit 214 may be functionally the same as the reconstruction unit 114, and the buffer 216 may be functionally Like the buffer 116, the loop filter 220 may be functionally the same as the loop filter 120, and the decoded picture buffer 230 may be functionally the same as the decoded picture buffer 130.
  • the prediction processing unit 260 may include an inter prediction unit 244 and an intra prediction unit 254, where the inter prediction unit 244 may be functionally similar to the inter prediction unit 144, and the intra prediction unit 254 may be similar to the intra prediction unit 154 .
• the prediction processing unit 260 is generally used to perform block prediction and/or to obtain the prediction block 265 from the encoded data 171, and to receive or obtain (explicitly or implicitly) prediction-related parameters and/or information about the selected prediction mode, for example from the entropy decoding unit 204.
• the intra prediction unit 254 of the prediction processing unit 260 is used to generate a prediction block 265 for a picture block of the current video slice based on the signaled intra prediction mode and data from previously decoded blocks of the current frame or picture.
• the inter prediction unit 244 (e.g., motion compensation unit) is used to generate the prediction block 265 for the video block of the current video slice based on the motion vectors and other syntax elements received from the entropy decoding unit 204.
  • a prediction block may be generated from a reference picture in a reference picture list.
  • the video decoder 200 may construct the reference frame lists: list 0 and list 1 based on the reference pictures stored in the DPB 230 using default construction techniques.
• the prediction processing unit 260 is used to determine, by parsing, syntax elements such as the indication information of the target interpolation filter used to obtain the prediction block, filter parameters, or information indicating the inter prediction mode, and to perform sub-pixel interpolation according to the parsed motion vector.
• the prediction processing unit 260 uses some of the received syntax elements to determine the prediction mode (e.g., intra or inter prediction) used to encode the video blocks of the video slice, the inter prediction slice type (e.g., B slice, P slice, or GPB slice), construction information for one or more of the reference picture lists of the slice, the motion vector for each inter-coded video block of the slice, the inter prediction state of each inter-coded video block of the slice, the indication information of the target filter used to perform sub-pixel interpolation to obtain the prediction block, and other information, in order to decode the video blocks of the current video slice.
  • the prediction processing unit 260 may include a candidate interpolation filter set 251 and a filter selection unit 252.
  • the candidate interpolation filter set 251 includes one or more interpolation filters, for example, DCTIF and InvIF.
• the filter selection unit 252 is used to determine, from the candidate interpolation filter set 251, the target interpolation filter indicated by the parsed indication information if the motion information points to a fractional pixel position, and to perform sub-pixel interpolation with the indicated target interpolation filter to obtain the prediction block.
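• a minimal sketch of this decoder-side behaviour (the names are illustrative; `candidate_filters` stands in for the set 251 containing, e.g., DCTIF and InvIF):

```python
# Hedged sketch of decoder-side motion compensation with filter selection.

def motion_compensate(ref_block, mv_frac, filter_idx, candidate_filters):
    """mv_frac: (dx, dy) fractional MV parts in quarter-pel units."""
    if mv_frac == (0, 0):
        return ref_block                             # integer position: no filtering
    target_filter = candidate_filters[filter_idx]    # parsed indication information
    return target_filter(ref_block, mv_frac)         # sub-pixel interpolation
```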
  • the inverse quantization unit 210 may be used to inverse quantize (ie, inverse quantize) the quantized transform coefficients provided in the bitstream and decoded by the entropy decoding unit 204.
  • the inverse quantization process may include using the quantization parameters calculated by the video encoder 100 for each video block in the video slice to determine the degree of quantization that should be applied and also determine the degree of inverse quantization that should be applied.
  • the inverse transform processing unit 212 is used to apply an inverse transform (for example, inverse DCT, inverse integer transform, or conceptually similar inverse transform process) to the transform coefficients so as to generate a residual block in the pixel domain.
  • the reconstruction unit 214 (eg, summer 214) is used to add the inverse transform block 213 (ie, the reconstructed residual block 213) to the prediction block 265 to obtain the reconstructed block 215 in the sample domain, for example by The sample values of the reconstructed residual block 213 and the sample values of the prediction block 265 are added.
• the loop filter unit 220 is used (during the encoding loop or after the encoding loop) to filter the reconstructed block 215 to obtain the filtered block 221, so as to smooth pixel transitions or otherwise improve video quality.
  • the loop filter unit 220 may be used to perform any combination of filtering techniques described below.
• the loop filter unit 220 is intended to represent one or more loop filters, such as a deblocking filter, a sample-adaptive offset (SAO) filter, or other filters such as a bilateral filter, an adaptive loop filter (ALF), a sharpening or smoothing filter, or a collaborative filter.
  • the loop filter unit 220 is shown as an in-loop filter in FIG. 3, in other configurations, the loop filter unit 220 may be implemented as a post-loop filter.
  • the decoded video block 221 in a given frame or picture is then stored in a decoded picture buffer 230 that stores reference pictures for subsequent motion compensation.
  • the decoder 200 is used, for example, to output the decoded picture 231 through the output 232 for presentation to the user or for the user to view.
  • video decoder 200 may be used to decode the compressed bitstream.
  • the decoder 200 may generate an output video stream without the loop filter unit 220.
  • the non-transform based decoder 200 may directly inversely quantize the residual signal without the inverse transform processing unit 212 for certain blocks or frames.
  • the video decoder 200 may have an inverse quantization unit 210 and an inverse transform processing unit 212 combined into a single unit.
  • FIGS. 2 and 3 show specific encoders 100 and decoders 200
• the encoder 100 and the decoder 200 may also include various other functional units, modules, or components that are not depicted; furthermore, they are not limited to the specific components shown in FIGS. 2 and 3, or to the manner in which those components are arranged.
  • the various units of the system described herein may be implemented in software, firmware, and / or hardware, and / or any combination thereof.
• each digital image can be regarded as a two-dimensional array of m rows and n columns, containing m * n samples; the position of each sample is called the sample position, and the value of each sample is called the sample value.
  • m * n is called the resolution of the image, that is, the number of samples contained in the image.
  • 2K image resolution is 1920 * 1080
  • 4K video image resolution is 3840 * 2160.
  • a sample is also called a pixel
  • the sample value is also called a pixel value, so each pixel also contains two pieces of information, the pixel position and the pixel value.
  • Digital video coding aims to remove redundant information in digital video, making digital video more conducive to storage and transmission in the network.
  • the redundancy of digital video includes spatial redundancy, temporal redundancy, statistical redundancy, and visual redundancy.
• the current block-based hybrid coding framework introduces inter-frame prediction technology, which predicts the current frame to be encoded from already encoded frames, thereby greatly saving encoding bit rate.
• the current picture to be encoded is first divided into several non-overlapping coding units (CUs). Each CU has its own coding mode. Each CU can be further divided into several prediction units (PUs), and each PU has its own prediction mode, such as a prediction direction and a motion vector (MV). At the encoding end, each PU searches for a matching block in the reference frame, and the location of the matching block is identified by the MV. Because the sample values at some positions in the image (fractional pixel positions) are not captured during digital sampling, the current block may not find a perfectly matching block in the reference frame; in this case, sub-pixel interpolation techniques are used to interpolate the pixel values at fractional pixel positions.
  • Figure 4 shows a schematic diagram of the position of whole pixels and sub-pixels.
• the positions indicated by capital letters represent integer pixel positions, and the remaining lowercase letters represent different fractional pixel positions.
  • the pixel values of different fractional pixel positions are generated by interpolation using different pixels through different interpolation filters, and then used as a reference block.
  • the motion vector accuracy is one-quarter accuracy.
• the integer-pixel image block that best matches the current image block to be encoded is searched first; a 1/2-precision interpolation filter is then used to generate four 1/2-pixel sub-pixel image blocks.
• the current image block to be encoded is matched against the four 1/2-pixel sub-pixel image blocks and the integer-pixel image block to obtain the optimally matched 1/2-precision motion vector.
• the best-matching pixel block pointed to by the above optimally matched 1/2-precision motion vector is then interpolated using a 1/4-precision interpolation filter to obtain eight 1/4-precision sub-pixel blocks.
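• a minimal sketch of this two-stage refinement (the `interp(ref, pos)` helper, returning the block predicted at a fractional offset given in quarter-pel units, is an assumed placeholder):

```python
import numpy as np

# Hedged sketch: four half-pel candidates around the best integer match,
# then eight quarter-pel candidates around the half-pel winner. All MV
# offsets are expressed in quarter-pel units.

def refine_subpel(cur, ref, best_int_mv, interp):
    def cost(pos):                       # SAD matching cost
        return np.abs(cur - interp(ref, pos)).sum()

    int_pos = best_int_mv
    best_pos, best_cost = int_pos, cost(int_pos)
    for dx, dy in [(-2, 0), (2, 0), (0, -2), (0, 2)]:        # 4 half-pel blocks
        pos = (int_pos[0] + dx, int_pos[1] + dy)
        c = cost(pos)
        if c < best_cost:
            best_pos, best_cost = pos, c
    center = best_pos
    for dx, dy in [(-1, -1), (-1, 0), (-1, 1), (0, -1),      # 8 quarter-pel blocks
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        pos = (center[0] + dx, center[1] + dy)
        c = cost(pos)
        if c < best_cost:
            best_pos, best_cost = pos, c
    return best_pos, best_cost           # MV in quarter-pel units and its cost
```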
  • the reason for the existence of whole pixels and sub-pixels is due to the discrete nature of digital sampling.
• the dots represent the sampled samples
• the dotted line indicates the original analog signal s(t)
• the solid line indicates the signal recovered by interpolation (referred to as the interpolated signal)
  • Interpolation is the inverse process of digital sampling. The purpose of interpolation is to use discrete sample values to recover the original continuous signal as completely as possible, and to obtain sample values at specific positions (fractional pixel positions).
• the target position α is a fractional pixel position
• the interpolation process is described as follows:
• s_i represents the sample value at the integer pixel position i, where i, an integer, is the index of the integer pixel position
• f^α is the interpolation filter corresponding to the fractional pixel position α
• M and N are positive integers, and -M ≤ i ≤ N
• u_0 = f^α(s'_(-α-M), s'_(-α-M+1), ..., s'_(-α), s'_(-α+1), ..., s'_(-α+N))   (2)
• u_0 = f^α(s_(α+M), s_(α+M-1), ..., s_α, s_(α-1), ..., s_(α-N))   (3)
  • the interpolation filter in part (b) of Figure 5 can also recover integer pixel positions from fractional pixel positions, namely:
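As a concrete illustration of equation (1), the following Python sketch applies a fixed-coefficient FIR interpolation filter along one dimension. The 8-tap coefficients shown are the well-known HEVC luma half-sample filter; the helper function itself and the edge padding are our own assumptions.

```python
import numpy as np

# HEVC luma half-sample filter taps (normalized by 64).
HALF_PEL_TAPS = np.array([-1, 4, -11, 40, 40, -11, 4, -1]) / 64.0

def interpolate_half_pel(samples: np.ndarray) -> np.ndarray:
    """Evaluate eq. (1) at every half-pixel position i + 0.5 of a 1-D signal,
    i.e. u = f_alpha(s_{i-3}, ..., s_{i+4}) with M = 3 and N = 4."""
    padded = np.pad(samples.astype(np.float64), (3, 4), mode="edge")
    return np.array([np.dot(HALF_PEL_TAPS, padded[i:i + 8])
                     for i in range(len(samples))])

row = np.array([10, 12, 15, 20, 18, 16], dtype=np.float64)
print(interpolate_half_pel(row))  # sample values at the half-pixel positions
```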
  • the embodiments of the present invention provide two training methods for interpolation filters.
  • the interpolation filter training method can be run in an encoder or a computing device.
  • the computing device may include, but is not limited to, a computer, a cloud computing device, a server, a terminal device, and so on.
• in video coding standards, interpolation filters with fixed coefficients are generally used, for example, bilinear interpolation filters, bicubic interpolation filters, and the like.
• in H.264/AVC, a 6-tap finite impulse response filter is used to generate half-pixel samples, and simple bilinear interpolation is used to generate quarter-pixel samples.
  • the interpolation filter in HEVC has made many improvements compared to H.264 / AVC.
• an 8-tap filter is used to generate half-pixel samples, while quarter-pixel samples are generated by a 7-tap interpolation filter.
  • the fixed coefficient interpolation filter is easy to implement and has low complexity, so it is widely used. However, due to the diversity and non-stationarity of the video signal, the performance of the fixed coefficient filter is very limited.
  • a typical adaptive interpolation filter estimates the filter coefficients at the encoding end according to the error of motion compensation prediction, and then encodes the filter coefficients into the code stream.
  • a separable adaptive interpolation filter is proposed, which can achieve a significant reduction in complexity while basically maintaining the coding performance.
• some adaptive interpolation filters are designed under the assumption that the image is isotropic.
• although the adaptive interpolation filter is content-adaptive, it is still based on linear interpolation filtering.
  • encoding the filter coefficients still requires some bits.
  • the embodiment of the present application is based on the above technical problem, and proposes a training method of the interpolation filter. Please refer to the schematic flowchart of an interpolation filter training method provided in the embodiment of the present application shown in FIG. 6A, and the schematic explanatory diagram of the training process shown in FIG. 6B.
  • the method includes but is not limited to some or all of the following steps:
  • S612 Interpolate the pixels of the sample image at integer pixel positions through the first interpolation filter to obtain the first sub-pixel image of the sample image at the first sub-pixel position.
  • S614 Input the sample image into the second interpolation filter to obtain a second sub-pixel image.
• S616 Determine the filter parameter of the second interpolation filter by minimizing a first function representing the difference between the first sub-pixel image and the second sub-pixel image.
• steps S614 and S616 are an iterative process in the training process.
  • the sample image may be the original image X or the image X ′ after the original image X is encoded and compressed by the encoder.
  • the sample image input to the first interpolation filter is the original image
  • the image input to the second interpolation filter may be an image after the sample image is encoded and compressed by the encoder.
  • the first interpolation filter is any interpolation filter in the prior art that can interpolate and generate pixel values at the first fractional pixel position.
• the first interpolation filter may be an interpolation filter with fixed coefficients, an adaptive interpolation filter, or another type of interpolation filter, which is not limited in the embodiments of the present application.
• the first fractional pixel position may be any fractional pixel position. It can be seen that in the embodiment of the present application, the first sub-pixel image is used as the label data to train the second interpolation filter, so that the obtained second interpolation filter can be directly used for interpolation to obtain the pixel value at the first fractional pixel position.
• the second interpolation filter may be a support vector machine (SVM), a neural network (NN), a convolutional neural network (CNN), or another form of model, which is not limited in the embodiments of the present application.
  • the first function may be a function for representing the difference between the first fractional pixel image and the second fractional pixel image.
  • the first function may be a loss function, an objective function, a cost function, etc., which is not limited in the embodiments of the present application.
• the first function is a regularization loss function, and the first function can be expressed as:
• $L_{reg}^{\alpha} = \left\| \mathrm{TIF}_{\alpha}(X) - F_{\alpha}(X') \right\|$
• where $\alpha$ is the index of the fractional pixel position; $L_{reg}^{\alpha}$ is the first function corresponding to the fractional pixel position $\alpha$; $X$ is the sample image; $X'$ is the image after the sample image is compressed by the encoder; $\mathrm{TIF}_{\alpha}$ is the first interpolation filter corresponding to the fractional pixel position $\alpha$; $F_{\alpha}$ is the second interpolation filter; $\mathrm{TIF}_{\alpha}(X)$ is the first sub-pixel image corresponding to the fractional pixel position $\alpha$; and $X_{f,\alpha} = F_{\alpha}(X')$ is the second sub-pixel image at the fractional pixel position $\alpha$.
• the norm symbol $\|x\|$ denotes a sum over pixels, $\|x\| = \sum_{i} |x_i|$, where $i$ is the index of the pixel in $x$.
  • the first function may also be specifically expressed in other ways, for example, a log loss function, a square loss function, an exponential loss function, or another form of loss function, which is not limited in this embodiment of the present application.
  • sample image may be one image or multiple images.
  • a sample image may be a frame image, a coding unit (CU), or a prediction unit (PU), which is not limited in the present invention.
• the filter parameters of the second interpolation filter can be obtained by minimizing the loss function, and the training process can be expressed as:
• $\theta^{*} = \arg\min_{\theta} \sum_{k=1}^{n} L_{reg}^{(k)}(\theta)$
• where $n$ is the total number of sample images and is a positive integer; $k$ is the index of a sample image, a positive integer with $k \le n$; $\theta^{*}$ is the optimal filter parameter; $\theta$ is the filter parameter; and $L_{reg}^{(k)}$ is the first function corresponding to the sample image $k$.
  • n may be equal to 1 or other positive integer.
• the filter parameters of the second interpolation filter may be solved by the least squares method, linear regression, gradient descent, or other methods.
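As a hedged sketch of this training step when the second interpolation filter is a small CNN, the following PyTorch loop minimizes an L1 version of the first function against labels produced by the first (traditional) interpolation filter. The network shape, learning rate, and random placeholder tensors are all illustrative assumptions, not the patent's configuration.

```python
import torch
import torch.nn as nn

# Toy second interpolation filter F_alpha: maps the compressed image X'
# to the sub-pixel image at one fractional pixel position alpha.
f_alpha = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(f_alpha.parameters(), lr=1e-4)
l1 = nn.L1Loss()  # stands in for the first function L_reg

for step in range(100):
    x_rec = torch.rand(8, 1, 32, 32)  # placeholder for compressed images X'
    label = torch.rand(8, 1, 32, 32)  # placeholder for TIF_alpha(X) label data
    loss = l1(f_alpha(x_rec), label)  # difference of the two sub-pixel images
    opt.zero_grad()
    loss.backward()
    opt.step()
```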
• a difficulty in obtaining an interpolation filter by machine learning is that no true label data exists: the sample values at fractional pixel positions are never observed.
• the label data used in the prior art is generated by a blur-then-sample method: the sample image is blurred with a low-pass filter so that the correlation between adjacent pixels increases, and the blurred image is then sampled into several sub-images according to different phases; phase 0 is regarded as the integer pixel, and the other phases as sub-pixels at different positions. However, label data obtained by this method is designed manually, so it is not optimal.
• the interpolation filter training method performs sub-pixel interpolation on the sample image through a traditional interpolation filter to obtain a first sub-pixel image, which is used as label data to supervise the training of the second interpolation filter; the interpolation filter obtained in this way can improve the performance of encoding and decoding.
  • interpolation filters corresponding to multiple fractional pixel positions can be jointly trained.
  • the specific implementation method includes but is not limited to the following steps:
• S1 Interpolate the pixels of the sample image at integer pixel positions through the first interpolation filter corresponding to the fractional pixel position α to obtain the first sub-pixel image of the sample image at the fractional pixel position α, where the fractional pixel position α is any one of the Q fractional pixel positions, and Q is a positive integer.
  • Q may be the total number of fractional pixel positions, or other numerical values.
• please refer to the schematic flowchart of another interpolation filter training method provided in the embodiment of the present application shown in FIG. 6C, and the schematic explanatory diagram of the training process shown in FIG. 6D.
  • the method includes but is not limited to some or all of the following steps:
  • S622 Perform sub-pixel interpolation on the sample image through the first interpolation filter to obtain the first sub-pixel image of the sample image at the first sub-pixel position.
  • S624 Input the sample image into the second interpolation filter to obtain a second sub-pixel image.
• S626 Flip the second sub-pixel image and input it into the third interpolation filter to obtain a first image, and apply the inverse of the flip operation to the first image to obtain a second image, where the second interpolation filter and the third interpolation filter share filter parameters.
  • S628 Determine the filter parameter according to a first function representing the difference between the first sub-pixel image and the second sub-pixel image and a second function representing the difference between the sample image and the second image.
• steps S626 and S628 are an iterative process in the training process.
• the sub-pixel image $X_f$ generated by the sub-pixel interpolation undergoes a flip operation $T$ and is then passed through the third interpolation filter to perform sub-pixel interpolation, yielding the first image; the first image is then subjected to the inverse operation $T^{-1}$ of the flip operation $T$, yielding the reconstructed image of the sample image, that is, the second image.
• the first image and the second image are both integer-pixel images.
  • the flip operation includes horizontal flip, vertical flip and diagonal flip.
• the type of flip operation is selected according to the sub-pixel displacements $y_f$ and $x_f$ of the flipped image relative to the second sub-pixel image in the vertical and horizontal directions, respectively.
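A minimal numpy sketch of the flip operation T and its inverse follows (each flip is its own inverse, matching the identity T·T⁻¹ = E stated below). The selection rule, which picks the flip axis from whichever of the displacements x_f, y_f is non-zero, is our reading of the omitted formula and should be treated as an assumption.

```python
import numpy as np

def flip(img: np.ndarray, kind: str) -> np.ndarray:
    """Flip operation T: horizontal, vertical, or diagonal (both axes)."""
    if kind == "horizontal":
        return img[:, ::-1]
    if kind == "vertical":
        return img[::-1, :]
    if kind == "diagonal":
        return img[::-1, ::-1]
    raise ValueError(kind)

def select_flip(x_f: float, y_f: float) -> str:
    """Pick the flip type from the fractional displacements (assumed rule)."""
    if x_f and y_f:
        return "diagonal"
    return "horizontal" if x_f else "vertical"

img = np.arange(12).reshape(3, 4)
for t in ("horizontal", "vertical", "diagonal"):
    assert np.array_equal(flip(flip(img, t), t), img)  # T^{-1}(T(x)) = x
```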
  • the second function may be a function for representing the difference between the sample image and the second image.
  • the second function may be a loss function, an objective function, a cost function, etc., which is not limited in the embodiments of the present application.
• the second function can be expressed as:
• $L_{rec}^{\alpha} = \left\| T^{-1}\left( F\left( T(X_{f,\alpha}) \right) \right) - X \right\|$
• where $L_{rec}^{\alpha}$ is the second function; $X$ is the sample image; $\alpha$ indicates the first fractional pixel position; $\mathrm{TIF}_{\alpha}$ is the first interpolation filter; $F$ is the second interpolation filter; $\mathrm{TIF}_{\alpha}(X)$ is the first sub-pixel image; and $X_{f,\alpha}$ is the second sub-pixel image.
• $T T^{-1} = E$, where $E$ is the identity matrix.
• the second function may also be expressed in other specific ways, for example, a log loss function, a square loss function, an exponential loss function, or another form of loss function, which is not limited in this embodiment of the present application.
  • step S628 may be:
• the filter parameters are determined by minimizing a third function, where the third function is a weighted sum of the first function, representing the difference between the first sub-pixel image and the second sub-pixel image, and the second function, representing the difference between the sample image and the second image.
• the joint loss function (also called the third function) is defined as follows:
• $L^{\alpha} = L_{reg}^{\alpha} + \lambda \, L_{rec}^{\alpha}$, where $\lambda$ is the weight of the second function.
• the filter parameters of the second interpolation filter can be obtained by minimizing the joint loss function, and the training process can be expressed as described below.
• alternatively, step S628 may be: determining the filter parameter by alternately minimizing the first function representing the difference between the first sub-pixel image and the second sub-pixel image and the second function representing the difference between the sample image and the second image.
  • the sample image may be one image or multiple images.
  • a sample image may be a frame image, a coding unit (CU), or a prediction unit (PU), which is not limited in the present invention.
• the filter parameters of the second interpolation filter can be obtained by minimizing the loss function.
• the training process can be expressed as:
• $\theta^{*} = \arg\min_{\theta} \sum_{k=1}^{n} \left( L_{reg}^{(k)}(\theta) + \lambda \, L_{rec}^{(k)}(\theta) \right)$
• where $n$ is the total number of sample images and is a positive integer; $k$ is the index of a sample image, a positive integer with $k \le n$; $\theta^{*}$ is the optimal filter parameter; $\theta$ is the filter parameter; $L_{reg}^{(k)}$ is the first function corresponding to the sample image $k$; and $L_{rec}^{(k)}$ is the second function corresponding to the sample image $k$.
  • n may be equal to 1 or other positive integer.
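A hedged PyTorch sketch of this joint training objective follows; the weight λ, the toy one-layer filter, and the random tensors are illustrative assumptions. Because the second and third interpolation filters share parameters, the same module f is applied twice, the second time on the flipped sub-pixel image.

```python
import torch
import torch.nn as nn

f = nn.Conv2d(1, 1, 3, padding=1)  # shared second/third interpolation filter
l1 = nn.L1Loss()
lam = 1.0                          # assumed weight of the second function

def joint_loss(x, x_rec, label, dims=(-1,)):
    x_f = f(x_rec)                                      # second sub-pixel image
    l_reg = l1(x_f, label)                              # vs. first sub-pixel image
    x_hat = torch.flip(f(torch.flip(x_f, dims)), dims)  # T^-1(F(T(x_f)))
    l_rec = l1(x_hat, x)                                # vs. the sample image
    return l_reg + lam * l_rec                          # third (joint) function

x, x_rec = torch.rand(2, 1, 16, 16), torch.rand(2, 1, 16, 16)
label = torch.rand(2, 1, 16, 16)
joint_loss(x, x_rec, label).backward()
```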
• the interpolation filter training method performs sub-pixel interpolation on the sample image through a traditional interpolation filter to obtain a first sub-pixel image, which is used as label data.
• in addition, the second interpolation filter is constrained by supervision with the sample image itself, which improves the accuracy of the sub-pixel interpolation performed by the second interpolation filter.
  • This method can be performed by the video encoder 100.
• the method is described as a series of steps or operations. It should be understood that the steps may be performed in various orders and/or simultaneously, and are not limited to the execution order shown in FIG. 7 or FIG. 8.
• for a video data stream with multiple video frames, a video encoder performs a process including the following steps to predict the motion information of the currently encoded image block of the current video frame, and encodes the currently encoded image block based on the inter prediction mode and the motion information of the currently encoded image block.
  • S72 Perform inter-frame prediction on the current coded image block to obtain motion information of the current coded image block, where the motion information of the current coded image block points to a fractional pixel position.
• the inter-frame prediction process includes: determining, from the set of candidate interpolation filters, the target interpolation filter for the currently encoded image block.
• S74 Encode the currently encoded image block based on the inter prediction mode of the currently encoded image block and the motion information of the currently encoded image block to obtain the encoded information, and encode the encoded information into the code stream, where the encoded information includes the indication information of the target interpolation filter; the indication information of the target interpolation filter is used to indicate that the reference block at the fractional pixel position corresponding to the currently encoded image block is obtained through sub-pixel interpolation by the target interpolation filter.
• the video encoder needs to encode the indication information of the target filter into the code stream, so that the decoding end knows the type of target filter used to obtain the prediction block.
  • S82 Perform inter prediction on the current encoded image block to obtain the motion information of the current encoded image block, where the motion information of the current encoded image block points to the position of the fractional pixel.
• the inter prediction process includes: determining the target interpolation filter for the currently encoded image block from the candidate interpolation filter set.
• S84 Encode the currently encoded image block based on the inter prediction mode of the currently encoded image block and the motion information of the currently encoded image block, obtain the encoded information, and encode the encoded information into the code stream, where, if the inter prediction mode of the currently encoded image block is the target inter prediction mode, the coding information does not include the indication information of the target interpolation filter; if the inter prediction mode of the currently encoded image block is a non-target inter prediction mode, the coding information includes the indication information of the target interpolation filter, which is used to instruct the currently encoded image block to use the target interpolation filter to perform sub-pixel interpolation.
  • the video encoder includes a set of candidate interpolation filters.
• the set of candidate interpolation filters may include multiple types of interpolation filters, and each type may include one or more interpolation filters.
• when the video encoder performs inter prediction on the currently encoded image block, it may select one of the interpolation filters to perform sub-pixel interpolation to obtain the prediction block of the currently encoded image block.
• the target interpolation filter is the interpolation filter that performs the sub-pixel interpolation to obtain the prediction block, or the type of interpolation filter that obtains the prediction block. That is to say, the indication information of the target interpolation filter may indicate the specific interpolation filter that obtains the prediction block, or the type of interpolation filter that obtains the prediction block.
• for example, the candidate interpolation filter set includes two types of interpolation filters, a first-type interpolation filter and a second-type interpolation filter. If the target interpolation filter is the first-type interpolation filter, the indication information may be "0"; if the target interpolation filter is the second-type interpolation filter, the indication information may be "1".
• the first-type interpolation filter or the second-type interpolation filter may include second interpolation filters, obtained by the above interpolation filter training methods, respectively corresponding to one or more fractional pixel positions.
  • the determination of the target interpolation filter may include but is not limited to the following two implementation manners.
  • the target interpolation filter for the current encoded image block is determined from the set of candidate interpolation filters according to the rate-distortion cost criterion.
• the specific implementation is as follows: calculate the rate-distortion cost of the sub-pixel image block generated by each type of interpolation filter, and determine the interpolation filter with the lowest rate-distortion cost as the target interpolation filter finally used for sub-pixel interpolation to obtain the prediction block corresponding to the currently encoded image block.
• specifically, the video encoder can search for an integer-pixel reference block that best matches the currently encoded image block; then, through the first-type interpolation filter (any type of interpolation filter in the candidate interpolation filter set), sub-pixel interpolation is performed on the integer-pixel reference image block to obtain P sub-pixel reference image blocks; the prediction block is determined, the motion information of the prediction block is obtained, the residual is calculated, and the encoding information such as the residual and the motion information is encoded into the code stream, after which image block reconstruction is performed according to the code stream.
• the mean squared error between the reconstructed image block and the current image block is used as the distortion, and the size of the obtained code stream is used as the code rate; the rate-distortion cost of the first-type interpolation filter is obtained from the distortion and the code rate.
• the calculation of the rate-distortion cost is known in the prior art and will not be repeated here. It should be understood that in the present application, although the currently encoded image block is completely encoded and reconstructed during the inter-frame prediction process, this process is a test process, and the encoding information obtained by it is not necessarily written into the code stream. Optionally, only the coding information obtained by the coding process involving the type of interpolation filter with the lowest rate-distortion cost is written into the code stream.
  • P is a positive integer, which is determined by the accuracy of the sub-pixel interpolation performed by the first-type interpolation filter.
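The selection rule above is the standard Lagrangian rate-distortion criterion, J = D + λ·R. A minimal sketch follows; the encode_with callback and all names are hypothetical stand-ins for the test encoding pass described above.

```python
def select_target_filter(filters, encode_with, lam):
    """Return the candidate interpolation filter with minimal RD cost.

    `encode_with(f)` is assumed to run a test encoding pass with filter f
    and return (distortion, rate): e.g. the mean squared error of the
    reconstructed block and the size in bits of the produced stream.
    """
    best, best_cost = None, float("inf")
    for f in filters:
        distortion, rate = encode_with(f)
        cost = distortion + lam * rate  # Lagrangian rate-distortion cost
        if cost < best_cost:
            best, best_cost = f, cost
    return best
```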
• in the second implementation manner, the video encoder can search for an integer-pixel reference block that best matches the currently encoded image block; sub-pixel interpolation is performed on the integer-pixel reference image block by each interpolation filter in the candidate interpolation filter set to obtain N sub-pixel reference image blocks, where N is a positive integer; the prediction block that best matches the currently encoded image block is determined among the integer-pixel reference image block and the N sub-pixel reference image blocks; and the motion information is determined based on the prediction block.
• the target filter is the interpolation filter that interpolates to obtain the prediction block, or the type of interpolation filter that obtains the prediction block.
• if the prediction block is the integer-pixel reference image block, the motion information points to integer pixels.
  • the candidate interpolation filter set may include a second interpolation filter obtained by any one of the above interpolation filter training methods.
• the filter parameter of the target interpolation filter is a preset filter parameter; or, the filter parameter of the target interpolation filter is a filter parameter obtained by online training according to any of the above-mentioned interpolation filter training methods.
• the coding information further includes the filter parameters of the trained target interpolation filter; or, the coding information further includes a filter parameter difference, where the filter parameter difference is the difference between the filter parameters of the trained target interpolation filter for the current image unit and the filter parameters of the trained target interpolation filter for the previously encoded image unit.
• the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), a prediction unit (PU), and so on. That is to say, the video encoder can train once for every image frame, slice, video sequence subgroup, coding tree unit (CTU), coding unit (CU), or prediction unit (PU) that it encodes.
  • the video encoder may obtain the current encoded image block as a sample image every time, and train the second interpolation filter in the candidate interpolation filter set.
  • the following describes two specific implementation processes of the video image decoding method provided by the embodiments of the present invention based on FIGS. 9 and 10.
  • the method may be performed by the video decoder 200.
• the method is described as a series of steps or operations. It should be understood that the steps may be performed in various orders and/or simultaneously, and are not limited to the execution order shown in FIG. 9 or FIG. 10.
• the implementation process may include some or all of the following steps:
• S92 Parse the indication information of the target interpolation filter from the code stream.
• S94 Obtain the motion information of the currently decoded image block, where the motion information points to a fractional pixel position.
• S96 Perform a prediction process on the currently decoded image block based on the motion information of the currently decoded image block, where the prediction process includes: performing sub-pixel interpolation according to the target interpolation filter indicated by the indication information to obtain the prediction block of the currently decoded image block.
  • S98 Reconstruct the reconstructed block of the currently decoded image block based on the predicted block of the currently decoded image block.
  • step S92 may be executed before or after step S94, or may be executed simultaneously with step S94, which is not limited in this embodiment of the present invention.
  • the encoded information parsed by the code stream includes the indication information of the target interpolation filter.
• whether the encoded information contains motion information is related to the inter prediction mode.
• when the inter prediction mode is the merge mode, the video decoder can inherit the motion information of the previously decoded image block merged in the merge mode; when the inter prediction mode is a non-merge mode, the video decoder can parse the index of the motion information of the currently decoded image block from the code stream, or parse the index of the motion information and the motion vector difference from the code stream, to obtain the motion information.
• an implementation of step S94 may include: the video decoder parses the index of the motion information of the currently decoded image block from the code stream, and determines the motion information of the currently decoded image block based on that index and the candidate motion information list of the currently decoded image block.
  • step S94 may include: the video decoder may parse out the index and motion vector difference of the motion information of the decoded image block from the code stream; in turn, based on the motion information of the decoded image block The index and the candidate motion information list of the currently decoded image block determine the motion vector prediction value of the current decoded image block; based on the motion vector prediction value and the motion vector difference value, the motion vector of the current decoded image block is obtained.
• alternatively, step S94 may include: the video decoder may inherit the motion information of the previously decoded image block merged in the merge mode. It can be understood that the motion information of the previously decoded image block is consistent with the motion information of the currently decoded image block.
• the video decoder may first perform S94, and only when the acquired motion information of the currently decoded image block points to a fractional pixel position perform step S92 to parse the indication information of the target interpolation filter from the code stream. It can be understood that when the motion information points to an integer pixel position, the coding information corresponding to the current image block parsed from the code stream does not include the indication information of the target interpolation filter; when the acquired motion information of the currently decoded image block points to an integer pixel position, the video decoder does not need to perform S92, and can perform prediction based on the acquired motion information.
  • the implementation process may include some or all of the following steps:
• S102 Parse, from the code stream, the information of the currently decoded image block indicating the inter prediction mode of the currently decoded image block;
• S104 Obtain the motion information of the currently decoded image block;
• S106 If the inter prediction mode of the current image block is a non-target inter prediction mode, perform an inter prediction process on the currently decoded image block based on the motion information of the currently decoded image block, where the prediction process includes: performing sub-pixel interpolation according to the target interpolation filter indicated by the indication information of the target interpolation filter parsed from the code stream, to obtain the prediction block of the currently decoded image block;
• S108 Reconstruct the currently decoded image block based on the prediction block of the currently decoded image block.
• in this embodiment, different inter prediction modes are treated differently.
• the indication information of the target interpolation filter is encoded into the code stream only when the inter prediction mode is a non-target prediction mode (such as a non-merge mode) and the motion information points to a fractional pixel position; when the inter prediction mode is the target inter prediction mode (such as the merge mode), there is no need to encode the motion information, the index of the motion information, the motion vector difference, or the indication information of the target interpolation filter into the code stream.
• accordingly, in the non-target prediction mode the indication information of the target interpolation filter needs to be parsed; however, when the inter prediction mode is the target prediction mode (e.g., merge mode) and the motion information of the currently decoded image block points to a fractional pixel position, the motion information and the indication information of the target interpolation filter of the previously decoded image block merged in the merge mode can be inherited.
• step S104 may include: the video decoder may parse out the index of the motion information of the currently decoded image block from the code stream; further, the motion information of the currently decoded image block is determined based on the index of the motion information and the candidate motion information list of the currently decoded image block.
  • step S104 may include: the video decoder may parse out the index and motion vector difference value of the motion information of the decoded image block from the code stream; in turn, based on the motion information of the decoded image block The index and the candidate motion information list of the currently decoded image block determine the motion vector prediction value of the current decoded image block; based on the motion vector prediction value and the motion vector difference value, the motion vector of the current decoded image block is obtained.
• in this case, the video decoder needs to parse the indication information of the target interpolation filter from the code stream; during inter prediction, the video decoder performs sub-pixel interpolation according to the target interpolation filter indicated by the indication information to obtain the prediction block of the currently decoded image block. If the motion information of the currently decoded image points to an integer pixel position, the video decoder directly obtains the prediction block pointed to by the motion information according to the motion information.
• alternatively, step S104 may include: the video decoder can inherit the motion information of the previously decoded image block merged in the merge mode.
• correspondingly, the video decoder also needs to inherit the indication information of the interpolation filter used in the decoding process of the previously decoded image block merged in the merge mode, and then determines that the target interpolation filter specified by the indication information is required during the inter-frame prediction process; sub-pixel interpolation is performed according to the target interpolation filter indicated by the indication information to obtain the prediction block of the currently decoded image block. If the motion information of the currently decoded image points to an integer pixel position, the video decoder directly obtains the prediction block pointed to by the motion information according to the motion information.
• the indication information of the target interpolation filter may also be encoded in other ways.
• the decoding end can parse the indication information of the target interpolation filter from the code stream, determine the target filter indicated by the indication information, and interpolate through the target filter to obtain the prediction block of the currently decoded image block during inter prediction.
• the indication information may indicate the target interpolation filter that obtains the prediction block of the currently decoded image block through sub-pixel interpolation, or may indicate the type of the target interpolation filter. If the indication information indicates the type of the target interpolation filter, an implementation of performing sub-pixel interpolation according to the target interpolation filter indicated by the indication information is: the video decoder determines, within the type of interpolation filter indicated by the indication information and according to the motion information, the target interpolation filter used to obtain, by sub-pixel interpolation, the prediction block indicated by the motion information.
• the filter parameter of the target interpolation filter at the video decoding end may be a preset filter parameter that is consistent with the filter parameter of the target interpolation filter at the video encoding end; or, the filter parameter of the target interpolation filter may be a filter parameter obtained according to the training method of the interpolation filter described above.
  • the encoding information further includes filter parameters of the target interpolation filter for the currently encoded image unit.
• correspondingly, the video decoder at the decoding end can also parse the filter parameters from the code stream; the filter parameters can be the filter parameters of the target interpolation filter for the currently decoded image unit obtained at the video encoding end by the above filter training method. Before performing sub-pixel interpolation according to the target interpolation filter indicated by the indication information to obtain the prediction block of the currently decoded image block, the video decoder can configure the target interpolation filter through the filter parameters of the target interpolation filter for the currently decoded image unit.
  • the encoding information further includes the filter parameter difference.
• correspondingly, the video decoder at the decoding end can also parse the filter parameter difference from the code stream; the filter parameter difference is the difference between the trained filter parameters of the target interpolation filter for the currently decoded image unit and those for the previously decoded image unit. Before performing sub-pixel interpolation according to the target interpolation filter indicated by the indication information to obtain the prediction block of the currently decoded image block, the video decoder can obtain the filter parameters of the target interpolation filter for the currently decoded image unit from the filter parameters of the target interpolation filter for the previously decoded image unit and the filter parameter difference; further, the target interpolation filter is configured through the filter parameters of the target interpolation filter for the currently decoded image unit.
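A tiny sketch of this delta update follows (the names are hypothetical): the decoder reconstructs the current image unit's filter parameters by adding the parsed difference to the previous unit's parameters before configuring the target interpolation filter.

```python
def reconstruct_filter_params(prev_params, param_diff):
    """current = previous + parsed difference, coefficient by coefficient."""
    return [p + d for p, d in zip(prev_params, param_diff)]

# e.g. the previous unit's parameters plus the parsed difference:
print(reconstruct_filter_params([0.25, 0.5, 0.25], [0.01, -0.02, 0.01]))
```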
• in addition, the filter parameters of the target interpolation filter at the encoding end and the decoding end may both be fixed as preset filter parameters.
  • the coding information may not include the filter parameter or the filter parameter difference of the target interpolation filter, and the decoding end does not need to parse the filter parameter or the filter parameter difference of the target interpolation filter.
• the image unit is an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), a prediction unit (PU), or the like. That is, the video decoder updates the filter parameters once per decoded image unit.
• the following provides a schematic flowchart of yet another video image decoding method provided by the present application, as shown in FIG. 11.
  • the method may include, but is not limited to, some or all of the following steps:
• S1101 Parse, from the code stream, the information of the currently decoded image block indicating the inter prediction mode of the currently decoded image block.
  • S1102 Determine whether the inter prediction mode specified by the information indicating the inter prediction mode of the currently decoded image block is a merge mode.
• if so, step S1103 is executed; otherwise, the inter prediction mode specified by the information indicating the inter prediction mode of the currently decoded image block is the non-merge mode, and step S1105 is executed.
  • S1103 Acquire the motion information of the previously decoded image block merged in the merge mode and the indication information of the target interpolation filter.
  • the motion information of the previously decoded image block merged in the merge mode is the motion information of the currently decoded image block.
  • S1104 Determine whether the motion information of the currently decoded image block points to an integer pixel position.
• after step S1103, step S1104 may be executed. If the judgment result of S1104 is yes, the image block at the integer pixel position pointed to by the motion information is the prediction block of the currently decoded image block, and the video decoder may execute step S1109; otherwise, step S1108 is executed.
  • S1105 Parse the motion information of the currently decoded image block from the code stream.
  • S1106 Determine whether the motion information of the currently decoded image block points to an integer pixel position.
• after step S1105, step S1106 may be executed. If the judgment result of S1106 is yes, the image block at the integer pixel position pointed to by the motion information is the prediction block of the currently decoded image block, and the video decoder may execute step S1109; otherwise, the prediction block of the currently decoded image block is a sub-pixel image, and the video decoder executes step S1107.
• S1107 Parse the indication information of the target interpolation filter used for the currently decoded image block.
• S1108 Perform sub-pixel interpolation according to the target interpolation filter indicated by the indication information of the target interpolation filter to obtain the prediction block of the currently decoded image block.
  • S1109 Reconstruct the reconstructed block of the currently decoded image block based on the predicted block of the currently decoded image block.
  • the video decoder judges whether the decoded image block in the above process is the last image block, and if so, the decoding process ends, otherwise, the above decoding process can be performed on the next image block to be decoded.
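The branching of S1101 to S1109 can be summarized by the following hedged Python sketch; every helper (stream parsing, inheritance, interpolation, reconstruction) is a hypothetical stand-in for the corresponding step above, not an API defined by the patent.

```python
def decode_block(stream, block):
    mode = stream.parse_inter_mode()                 # S1101 / S1102
    if mode == "merge":
        mv, filt = block.inherit_from_merged()       # S1103: MV + filter info
    else:
        mv, filt = stream.parse_motion_info(), None  # S1105
    if mv.is_integer():                              # S1104 / S1106
        pred = block.reference_at(mv)                # integer-position block
    else:
        if filt is None:
            filt = stream.parse_filter_indication()  # S1107
        pred = filt.interpolate(block, mv)           # S1108: sub-pel interp.
    return block.reconstruct(pred)                   # S1109
```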
  • FIG. 12 is a schematic block diagram of an interpolation filter training device provided by an embodiment of the present invention.
• the interpolation filter training device 1200 may be connected to the inter prediction unit in the encoder 100, or may be set in other units in the encoder 100.
• the training of the interpolation filter may also be implemented by a computing device, which may be a computer, a server, or another device with data processing capability.
• the interpolation filter training device 1200 may include, but is not limited to, a label data acquisition module 1201, an interpolation module 1202, and a parameter determination module 1203. Among them:
• the label data acquisition module 1201 is configured to interpolate the pixels of the sample image at integer pixel positions through the first interpolation filter to obtain the first sub-pixel image of the sample image at the first sub-pixel position;
  • the interpolation module 1202 is configured to input the sample image into a second interpolation filter to obtain a second sub-pixel image
  • the parameter determining module 1203 is configured to determine the filter parameter of the second interpolation filter by minimizing a first function representing the difference between the first sub-pixel image and the second sub-pixel image.
  • FIG. 13 is a schematic block diagram of another interpolation filter training apparatus provided by an embodiment of the present invention.
• the interpolation filter training device 1300 may include, but is not limited to, a label data acquisition module 1301, an interpolation module 1302, an inverse interpolation module 1303, and a parameter determination module 1304. Among them:
• the label data acquisition module 1301 is configured to interpolate the pixels of the sample image at integer pixel positions through the first interpolation filter to obtain the first sub-pixel image of the sample image at the first sub-pixel position;
  • the interpolation module 1302 is configured to input the sample image into a second interpolation filter to obtain a second sub-pixel image
  • the inverse interpolation module 1303 is configured to input the second sub-pixel image into a third interpolation filter through a flip operation to obtain a first image, and obtain the second image through the inverse operation of the flip operation An image, wherein the second interpolation filter and the third interpolation filter share filter parameters;
• the parameter determination module 1304 is configured to determine the filter parameter according to a first function representing the difference between the first sub-pixel image and the second sub-pixel image and a second function representing the difference between the sample image and the second image.
• optionally, the parameter determination module 1304 is specifically configured to determine the filter parameter by minimizing a third function, where the third function is a weighted sum of a first function representing the difference between the first sub-pixel image and the second sub-pixel image and a second function representing the difference between the sample image and the second image.
• optionally, the parameter determination module 1304 is specifically configured to determine the filter parameter by alternately minimizing the first function representing the difference between the first sub-pixel image and the second sub-pixel image and the second function representing the difference between the sample image and the second image.
  • FIG. 14 is a schematic block diagram of yet another interpolation filter training apparatus provided by an embodiment of the present application.
  • the apparatus 1400 may include a processor 1410 and a memory 1420.
  • the memory 1420 is connected to the processor 1410 through a bus 1430.
  • the memory 1420 is used to store program codes for implementing any of the above interpolation filter training methods.
• the processor 1410 is used to call the program codes stored in the memory to execute the various interpolation filter training methods described in this application. For details, refer to the related descriptions in the embodiments of the interpolation filter training methods described in FIGS. 6A-6D, which are not repeated here.
  • the apparatus 1400 may take the form of a computing system containing multiple computing devices, or take the form of a single computing device such as a mobile phone, tablet computer, laptop computer, notebook computer, desktop computer, and so on.
  • the processor 1410 in the apparatus 1400 may be a central processor.
  • the processor 1410 may be any other type of device or multiple devices that can manipulate or process information currently or will be developed in the future.
• although a single processor such as the processor 1410 may be used to practice the disclosed embodiments, using more than one processor may achieve advantages in speed and efficiency.
• the memory 1420 in the apparatus 1400 may be a read-only memory (ROM) device or a random access memory (RAM) device. Any other suitable type of storage device may be used as the memory 1420.
  • the memory 1420 may include code and data 1401 (eg, sample images) accessed by the processor 1410 using the bus 1430.
  • the memory 1420 may further include an operating system 1402 and an application program 1403, and the application program 1403 includes at least one program that permits the processor 1410 to perform the method described herein.
  • the application 1403 may include applications 1 to N, and applications 1 to N further include video coding applications that perform the methods described herein.
• Apparatus 1400 may also include additional memory in the form of a secondary memory, which may be, for example, a memory card used with a mobile computing device. Because a video communication session may contain a large amount of information, the information may be stored in whole or in part in the secondary memory and loaded into the memory 1420 for processing as needed.
  • the device 1400 may also include, but is not limited to, a communication interface or module, an input / output device, etc.
  • the communication interface or module is used to implement data exchange between the device 1400 and other devices (for example, encoding devices or decoding devices).
  • the input device is used to realize the input of information (text, images, sound, etc.) or commands, and may include but not limited to a touch screen, a keyboard, a camera, a recorder, and the like.
• the output device is used to realize the output of information (text, images, sound, etc.) or commands, and may include, but is not limited to, a display, a speaker, and the like, which is not limited in this application.
  • FIG. 15 is a schematic block diagram of an encoder for implementing the video image encoding method described in FIG. 7 or FIG. 8 according to an embodiment of the present application.
  • each unit in the encoder 1500 is as follows:
• the inter prediction unit 1501 is configured to perform inter prediction on the currently coded image block to obtain the motion information of the currently coded image block, where the motion information points to a fractional pixel position; the inter-frame prediction unit includes a filter selection unit 1502, and the filter selection unit 1502 is specifically configured to determine a target interpolation filter for the currently coded image block from the candidate interpolation filter set;
  • the entropy encoding unit 1503 encodes the current coded image block based on the inter prediction mode of the current coded image block and the motion information of the current coded image block, obtains the coded information, and codes the coded information into the code stream
  • the coding information includes indication information of the target interpolation filter; the indication information of the target interpolation filter is used to instruct the sub-pixel interpolation through the target interpolation filter to obtain the fractional pixels corresponding to the current encoded image block Reference block for location.
  • each unit in the encoder 1500 is as follows:
  • the inter prediction unit 1501 is configured to perform inter prediction on the current coded image block to obtain motion information of the current coded image block, where the motion information of the current coded image block points to a fractional pixel position, where
  • the inter prediction unit includes a filter selection unit 1502, and the filter selection unit 1502 is configured to: determine a target interpolation filter for the current encoded image block from a set of candidate interpolation filters;
• the entropy coding unit 1503 is configured to encode the currently coded image block based on the inter prediction mode of the currently coded image block and the motion information of the currently coded image block, obtain the coding information, and encode the coding information into the code stream, where, if the inter prediction mode of the currently coded image block is the target inter prediction mode, the coding information does not include the indication information of the target interpolation filter; if the inter prediction mode of the currently coded image block is a non-target inter prediction mode, the coding information includes the indication information of the target interpolation filter, and the indication information is used to instruct the currently coded image block to use the target interpolation filter to perform sub-pixel interpolation.
  • the filter selection unit 1502 is specifically configured to determine a target interpolation filter for the current encoded image block from the candidate interpolation filter set according to a rate-distortion cost criterion.
• the inter prediction unit 1501 is specifically configured to: search for an integer-pixel reference block that best matches the currently coded image block; perform sub-pixel interpolation on the integer-pixel reference image block by each interpolation filter in the candidate interpolation filter set to obtain N sub-pixel reference image blocks; determine, among the integer-pixel reference image block and the N sub-pixel reference image blocks, the prediction block that best matches the currently coded image block; and determine the motion information based on the prediction block, where the interpolation filter used to interpolate the prediction block is the target interpolation filter.
  • the candidate interpolation filter set includes a second interpolation filter obtained by any of the interpolation filter training methods described in FIGS. 6A-6D.
• the filter parameter of the target interpolation filter is a preset filter parameter; or, the filter parameter of the target interpolation filter is a filter parameter obtained according to any of the interpolation filter training methods described above in FIGS. 6A-6D.
• the coding information further includes the filter parameters of the target interpolation filter obtained by training; or, the coding information further includes a filter parameter difference value, where the filter parameter difference value is the difference between the filter parameters of the target interpolation filter trained for the current image unit and the filter parameters of the target interpolation filter trained for the previously encoded image unit.
  • the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
  • FIG. 16 is a schematic block diagram of a decoder for implementing the video image decoding method described in FIGS. 8-10 according to an embodiment of the present application.
  • each unit in the decoder 1600 is as follows:
  • the entropy decoding unit 1601 is used to parse out the instruction information of the target interpolation filter from the code stream; and, obtain the motion information of the currently decoded image block, where the motion information points to the fractional pixel position;
  • the inter prediction unit 1602 is configured to perform a prediction process on the current decoded image block based on the motion information of the current decoded image block, where the prediction process includes: performing according to the target interpolation filter indicated by the indication information Sub-pixel interpolation to obtain the prediction block of the current decoded image block.
  • the reconstruction unit 1603 is configured to reconstruct the reconstruction block of the current decoded image block based on the prediction block of the current decoded image block.
• Corresponding to the video image decoding method shown in FIG. 10 or FIG. 11, in another embodiment of the present application, the specific functions of each unit in the decoder 1600 are as follows:
  • the entropy decoding unit 1601 is configured to parse out the information of the currently decoded image block indicating the inter prediction mode of the currently decoded image block from the code stream;
• the inter prediction unit 1602 is used to obtain the motion information of the currently decoded image block, where the motion information points to a fractional pixel position; if the inter prediction mode of the current image block is a non-target inter prediction mode, a prediction process is performed on the currently decoded image block based on the motion information of the currently decoded image block, where the prediction process includes: performing sub-pixel interpolation by the target interpolation filter indicated by the indication information of the target interpolation filter parsed from the code stream, to obtain the prediction block of the currently decoded image block;
  • the reconstruction unit 1603 is configured to reconstruct the current decoded image block based on the prediction block of the current decoded image block.
  • the inter prediction unit 1602 is further configured to: if the inter prediction mode of the current image block is the target inter prediction mode, perform prediction on the current decoded image block based on the motion information of the current decoded image block Process, wherein the prediction process includes: determining a target interpolation filter for the current decoded image block; performing sub-pixel interpolation through the target interpolation filter to obtain the prediction block of the current decoded image block.
• the inter prediction unit 1602 determines the target interpolation filter for the currently decoded image block specifically by: taking the interpolation filter used in the decoding process of the previously decoded image block as the target interpolation filter for the currently decoded image block; or, determining the target interpolation filter for the currently decoded image block as the target interpolation filter indicated by the indication information of the target interpolation filter parsed from the code stream.
  • the decoder 1600 acquiring the motion information of the currently decoded image block may include but is not limited to the following three implementation manners:
  • the entropy decoding unit 1601 is specifically configured to parse out the index of the motion information of the decoded image block from the code stream;
  • the inter prediction unit 1602 is further configured to determine the motion information of the currently decoded image block based on the index of the motion information of the decoded image block and the candidate motion information list of the current decoded image block.
  • the entropy decoding unit 1601 is specifically configured to: parse out the motion information index and motion vector difference of the current decoded image block from the code stream;
  • the inter prediction unit 1602 is further configured to: determine the motion vector prediction value of the current decoded image block based on the index of the motion information of the current decoded image block and the candidate motion information list of the current decoded image block; and, based on the The motion vector prediction value and the motion vector difference value to obtain the motion vector of the current decoded image block.
• in a third implementation manner, the inter prediction unit 1602 is further configured to inherit the motion information of the previously decoded image block merged in the merge mode as the motion information of the currently decoded image block.
  • the target filter is a second interpolation filter obtained by any of the interpolation filter training methods described in FIGS. 6A-6D
  • the filter parameter of the target interpolation filter is a preset filter parameter; or, the filter parameter of the target interpolation filter is a filter obtained by any of the interpolation filter training methods described above in FIGS. 6A-6D Parameter.
  • the entropy decoding unit 1601 is further configured to: parse out the filter parameters of the target interpolation filter for the currently decoded image unit from the code stream;
  • the decoder 1600 further includes a configuration unit 1604 for configuring the target interpolation filter through the filter parameter of the target interpolation filter of the currently decoded image unit.
• the entropy decoding unit 1601 is further configured to parse out the filter parameter difference value from the code stream, where the filter parameter difference value is the difference between the filter parameters of the target interpolation filter for the currently decoded image unit and the filter parameters of the target interpolation filter for the previously decoded image unit;
  • the decoder further includes a configuration unit 1604, configured to obtain the filter parameters of the target interpolation filter of the currently decoded image unit according to the filter parameters of the target interpolation filter of the previously decoded image unit and the filter parameter difference, and to configure the target interpolation filter with the filter parameters of the target interpolation filter of the currently decoded image unit.
  • the image unit includes an image frame, a slice, a video sequence subgroup, a coding tree unit (CTU), a coding unit (CU), or a prediction unit (PU).
  • the device 1700 may include a processor 1702 and a memory 1704; the memory 1704 is connected to the processor 1702 through a bus 1712 and is used to store program codes for implementing any of the above video image encoding methods,
  • the processor 1702 is configured to call the program code stored in the memory 1704 to perform the various video image encoding/decoding methods described in this application. For details, please refer to the relevant descriptions in the method embodiments described in FIGS. 7-11, which are not repeated here.
  • the device 1700 may take the form of a computing system containing multiple computing devices, or in the form of a single computing device such as a mobile phone, tablet computer, laptop computer, notebook computer, desktop computer, or the like.
  • the processor 1702 in the device 1700 may be a central processor.
  • the processor 1702 may be any other type of device or multiple devices that can manipulate or process information currently or will be developed in the future.
  • although a single processor such as the processor 1702 can be used to practice the disclosed embodiments, advantages in speed and efficiency can be achieved by using more than one processor.
  • the memory 1704 in the device 1700 may be a read-only memory (Read Only Memory, ROM) device or a random access memory (random access memory, RAM) device. Any other suitable type of storage device may be used as the memory 1704.
  • the memory 1704 may include code and data 1706 accessed by the processor 1702 using the bus 1712.
  • the memory 1704 may further include an operating system 1708 and an application program 1710.
  • the application program 1710 includes at least one program that permits the processor 1702 to perform the method described herein.
  • the application 1710 may include applications 1 to N, and applications 1 to N further include video coding applications that perform the methods described herein.
  • the device 1700 may also include additional memory in the form of a secondary memory 1714, which may be, for example, a memory card used with a mobile computing device. Because the video communication session may contain a large amount of information, the information may be stored in whole or in part in the secondary memory 1714 and loaded into the memory 1704 as needed for processing.
  • Device 1700 may also include one or more output devices, such as display 1718.
  • the display 1718 may be a touch-sensitive display that combines a display and a touch-sensitive element operable to sense touch input.
  • the display 1718 may be coupled to the processor 1702 through the bus 1712.
  • other output devices that allow the user to program the device 1700 or otherwise use it may be provided in addition to, or as an alternative to, the display 1718.
  • the display can be implemented in different ways, including a liquid crystal display (LCD), a cathode-ray tube (CRT) display, a plasma display, or a light emitting diode (LED) display such as an organic LED (OLED) display.
  • the device 1700 may also include or be in communication with an image sensing device 1720, such as a camera or any other existing or future-developed image sensing device that can sense an image, for example an image of the user operating the device 1700.
  • the image sensing device 1720 may be placed directly facing the user who runs the device 1700.
  • the position and optical axis of the image sensing device 1720 can be configured so that its field of view includes an area immediately adjacent to the display 1718 and the display 1718 is visible from the area.
  • the device 1700 may also include or be in communication with a sound sensing device 1722, such as a microphone or any other sound sensing device that can sense sound in the vicinity of the device 1700 or is currently or will be developed in the future.
  • the sound sensing device 1722 may be placed to directly face the user who runs the apparatus 1700, and may be used to receive sounds made by the user when the apparatus 1700 is operated, such as voice or other utterances.
  • although the processor 1702 and the memory 1704 of the device 1700 are illustrated in FIG. 17 as being integrated in a single unit, other configurations may also be used.
  • the operation of the processor 1702 may be distributed among multiple directly-coupled machines (each machine has one or more processors), or distributed in a local area or other network.
  • the memory 1704 may be distributed among multiple machines, such as network-based memory or memory among multiple machines running the device 1700. Although only a single bus is shown here, the bus 1712 of the device 1700 may be formed by multiple buses.
  • the secondary memory 1714 may be directly coupled to the other components of the device 1700 or may be accessed through a network, and may include a single integrated unit, such as one memory card, or multiple units, such as multiple memory cards. Therefore, the device 1700 can be implemented in various configurations.
  • the processor may be a central processing unit (CPU), or may be other general-purpose processors, digital signal processors (DSP), application specific integrated circuits (ASIC), field-programmable gate arrays (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and so on.
  • the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory may include a read only memory (ROM) device or a random access memory (RAM) device. Any other suitable type of storage device can also be used as the memory.
  • the memory may include code and data accessed by the processor using the bus.
  • the memory may further include an operating system and an application program including at least one program that allows the processor to execute the video encoding or decoding method described in the present application (in particular, the inter prediction method or the motion information prediction method described in the present application).
  • the application program may include applications 1 to N, which further include a video encoding or decoding application (referred to simply as a video decoding application) that performs the video image encoding or decoding method described in this application.
  • the bus system may also include a power bus, a control bus, and a status signal bus.
  • various buses are marked as bus systems in the figure.
  • the decoding device may also include one or more output devices, such as a display.
  • the display may be a touch-sensitive display that combines a display with a touch-sensitive unit operable to sense touch input.
  • the display can be connected to the processor via a bus.
  • Computer-readable media may include computer-readable storage media, which corresponds to tangible media, such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another (eg, according to a communication protocol).
  • computer-readable media may generally correspond to (1) non-transitory tangible computer-readable storage media, or (2) communication media, such as signals or carrier waves.
  • Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and / or data structures for implementation of the techniques described in this application.
  • the computer program product may include a computer-readable medium.
  • such computer-readable storage media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage devices, magnetic disk storage devices or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • any connection is properly called a computer-readable medium.
  • for example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
  • the computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other temporary media, but are actually directed to non-transitory tangible storage media.
  • magnetic disks and optical discs include compact discs (CDs), laser discs, optical discs, digital versatile discs (DVDs), and Blu-ray discs, where magnetic disks usually reproduce data magnetically, while optical discs reproduce data optically with lasers. Combinations of the above should also be included in the scope of computer-readable media.
  • the term "processor" as used herein may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein.
  • the functions described in the various illustrative logical blocks, modules, and steps described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated into a combined codec.
  • the techniques can be fully implemented in one or more circuits or logic elements.
  • the technology of the present application may be implemented in a variety of devices or equipment, including wireless handsets, integrated circuits (ICs), or a set of ICs (eg, chipsets).
  • Various components, modules or units are described in this application to emphasize the functional aspects of the device for performing the disclosed technology, but do not necessarily need to be implemented by different hardware units.
  • various units can be combined in a codec hardware unit in combination with suitable software and/or firmware, or provided by interoperating hardware units (including one or more processors as described above).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Embodiments of this application disclose an interpolation filter training method and apparatus, a video image encoding and decoding method, and an encoder/decoder. The training method trains a second interpolation filter by using, as label data, a first sub-pixel image obtained through interpolation by a conventional interpolation filter, so that the trained second interpolation filter can be directly used to interpolate the pixel values at the first fractional pixel position; the label data is more accurate, which improves the encoding and decoding performance for video images. In the inter prediction process, the encoding method determines a target interpolation filter for the current encoding image block from a candidate interpolation filter set, so that the encoder selects a suitable interpolation filter for the interpolation operation according to the content of the current encoding image block, yielding a prediction block with higher prediction accuracy, reducing the code stream, and improving the compression rate of the video image.

Description

插值滤波器的训练方法、装置及视频图像编解码方法、编解码器
本申请要求于2018年10月06日提交中国国家知识产权局、申请号为201811166872.X、申请名称为“插值滤波器的训练方法、装置及视频图像编解码方法、编解码器”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及视频编解码技术领域,尤其涉及一种插值滤波器的训练方法、装置及视频图像编解码方法、编解码器。
背景技术
数字视频能力可并入到多种多样的装置中,包含数字电视、数字直播系统、无线广播系统、个人数字助理(PDA)、膝上型或桌上型计算机、平板计算机、电子图书阅读器、数码相机、数字记录装置、数字媒体播放器、视频游戏装置、视频游戏控制台、蜂窝式或卫星无线电电话(所谓的“智能电话”)、视频电话会议装置、视频流式传输装置及其类似者。数字视频装置实施视频压缩技术,例如,在由MPEG-2、MPEG-4、ITU-T H.263、ITU-T H.264/MPEG-4第10部分高级视频编码(AVC)定义的标准、视频编码标准H.265/高效视频编码(HEVC)标准以及此类标准的扩展中所描述的视频压缩技术。视频装置可通过实施此类视频压缩技术来更有效率地发射、接收、编码、解码和/或存储数字视频信息。
视频压缩技术执行空间(图像内)预测和/或时间(图像间)预测以减少或去除视频序列中固有的冗余。对于基于块的视频编码,视频条带(即,视频帧或视频帧的一部分)可分割成若干图像块,所述图像块也可被称作树块、编码单元(CU)和/或编码节点。使用关于同一图像中的相邻块中的参考样本的空间预测来编码图像的待帧内编码(I)条带中的图像块。图像的待帧间编码(P或B)条带中的图像块可使用相对于同一图像中的相邻块中的参考样本的空间预测或相对于其它参考图像中的参考样本的时间预测。图像可被称作帧,且参考图像可被称作参考帧。
其中,包含高效视频编码(HEVC)标准在内的各种视频编码标准提出了用于图像块的预测性编码模式,即基于已经编码的视频数据块来预测当前待编码的块。在帧内预测模式中,基于与当前块在相同的图像中的一或多个先前经解码相邻块来预测当前块;在帧间预测模式中,基于不同图像中的已经解码块来预测当前块。
然而,在帧间预测模式中,运动矢量指向分像素时,需要对最优匹配的参考块进行分像素插值,现有技术中通常使用固定系数的插值滤波器进行分像素插值,对于目前多样性和非平稳性的视频信号,预测的准确性差,导致视频图像的编解码性能差。
发明内容
本申请实施例提供一种插值滤波器的训练方法、装置及视频图像编解码方法、编解码器,可提高图像块的运动信息的预测准确性,从而提高编解码性能。
第一方面,本申请实施例提供了一种插值滤波器的训练方法,包括:计算设备通过第一插值滤波器对样本图像在整数像素位置的像素进行插值,得到所述样本图像在第一分数像素位置的第一分像素图像;将所述样本图像输入到第二插值滤波器中,得到第二分像素图像;
进而,通过最小化用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数确定所述第二插值滤波器的滤波器参数。
可见，本申请实施例，以传统的插值滤波器插值得到的第一分像素图像为标签数据，来训练第二插值滤波器，使得训练得到的第二插值滤波器可直接用于插值得到第一分数像素位置的像素值，标签数据更加准确，提升视频图像的编解码性能。而且，基于神经网络的第二插值滤波器为非线性滤波器，相比传统线性滤波器更能适应复杂视频信号的预测，可进一步提升视频图像的编解码性能。
第二方面,本申请实施例还提供了一种插值滤波器的训练方法,包括:计算设备通过第一插值滤波器对样本图像在整数像素位置的像素进行插值,得到所述样本图像在第一分数像素位置的第一分像素图像;将所述样本图像输入到第二插值滤波器中,得到第二分像素图像;将所述第二分像素图像经过翻转运算输入到第三插值滤波器中,得到第一图像,并将所述第一图像通过所述翻转运算的逆运算得到第二图像,其中,所述第二插值滤波器和所述第三插值滤波器共享滤波器参数;进而,根据用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数和用于表示所述样本图像与所述第二图像的差值的第二函数确定所述滤波器参数。
可见,本发明实施例通过传统插值滤波器对样本图像进行分像素插值,得到第一分像素图像,并作为标签数据,利用分像素的可逆性原理,通过同时最小化用于表示第一分像素图像与第二分像素图像的差值的第一函数和用于表示样本图像与第二图像的差值的第二函数来确定所述滤波器参数,实现了通过监督样本图像来约束第二插值滤波器,提高第二插值滤波器进行分像素插值的准确性,进而提升视频图像的编解码性能。
可选地,计算设备根据用于表示所述第一分像素图像与所述第二分像素图像的差值的第一损失函数和用于表示所述样本图像与所述第二图像的差值的第二函数确定所述滤波器参数的可以包括但不限于以下两种实现方式:
第一种实现方式:计算设备通过最小化第三函数确定所述滤波器参数,其中,所述第三函数为用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数和用于表示所述样本图像与所述第二图像的差值的第二函数的加权求和。
第二种实现方式:通过交替最小化用于表示所述第一分像素图像与所述第二分像素图像的差值的第一损失函数和用于表示所述样本图像与所述第二图像的差值的第二函数确定所述滤波器参数。
应理解,第一方面和第二方面所述的计算设备可以是编码设备或压缩设备,上述设备可以是计算机、服务器或终端(例如,手机、平板电脑等)等具有数据处理功能的设备。
第三方面,本申请实施例还提供了一种视频图像编码方法,包括:
编码器对所述当前编码图像块进行帧间预测,得到所述当前编码图像块的运动信息,其中,所述当前编码图像块的运动信息指向分数像素位置,所述帧间预测过程包括:从候 选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器;
基于所述当前编码图像块的帧间预测模式和所述当前编码图像块的运动信息对所述当前编码图像块进行编码,得到编码信息,将所述编码信息编入码流,其中,所述编码信息包括目标插值滤波器的指示信息;所述目标插值滤波器的指示信息用于指示通过所述目标插值滤波器进行分像素插值得到所述当前编码图像块对应的分数像素位置的参考块。
第四方面,本申请实施例还提供了一种视频图像编码方法,包括:
编码器对所述当前编码图像块进行帧间预测,得到所述当前编码图像块的运动信息,其中,所述当前编码图像块的运动信息指向分数像素位置,所述帧间预测过程包括:从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器;
基于所述当前编码图像块的帧间预测模式和所述当前编码图像块的运动信息对所述当前编码图像块进行编码,得到编码信息,将所述编码信息编入到码流,其中,若所述当前编码图像块的帧间预测模式是目标帧间预测模式,所述编码信息不包括所述目标插值滤波器的指示信息;若所述当前编码图像块的帧间预测模式为非目标帧间预测模式,所述编码信息包括所述目标插值滤波器的指示信息,所述目标插值滤波器的指示信息用于指示所述当前编码图像块采用所述目标插值滤波器进行分像素插值。
可见，本申请实施例中，编码器在进行帧间预测过程中可以根据当前编码图像块的内容选择插值滤波器进行插值运算，使得得到的预测块预测准确性更高，减少码流，提高视频图像的压缩率。
应理解,第三方面或第四方面所述的编码器还可以是包括该编码器的编码设备,该编码设备可以是计算机、服务器或终端(例如,手机、平板电脑等)等具有数据处理功能的设备。
结合第三方面或第四方面,在本申请实施例的一种可能的实现中,编码器从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器的一种实现方式可以是:编码器根据率失真代价准则从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器。
可见,编码器在进行帧间预测过程中可以根据当前编码图像块的内容选择率失真代价小的插值滤波器进行插值运算,提高预测准确性,减少码流,提高视频图像的压缩率。
结合第三方面或第四方面,在本申请实施例的一种可能的实现中,编码器对所述当前编码图像块进行帧间预测,得到所述当前编码图像块的运动信息的一种实现方式可以是:
编码器确定与所述当前编码图像块最优匹配的整像素参考图像块;
通过候选插值滤波器集合中每一个插值滤波器对所述整像素参考图像块进行分像素插值,得到N个分像素参考图像块,N为正整数;
在所述整像素参考图像块和所述N个分像素参考图像块中确定与所述当前编码图像块最优匹配的预测块;
基于所述预测块确定所述运动信息,其中,插值得到所述预测块的插值滤波器即为目标插值滤波器。
可见,编码器在进行帧间预测过程中可以选择失真最小的参考块对应的插值滤波器进行插值,以减少码流,提高视频图像的压缩率。
结合第三方面或第四方面,在本申请实施例的一种可能的实现中,所述候选插值滤波器集合包括通过第一方面或第二方面所述的任一种插值滤波器的训练方法得到的第二插值滤波器。
可选地,若所述目标滤波器为通过第一方面或第二方面所述的任一种插值滤波器的训练方法得到的第二插值滤波器,则:所述目标插值滤波器的滤波器参数为预设的滤波器参数;或者,所述目标插值滤波器的滤波器参数为根据权利要求1-4所述的插值滤波器的训练方法得到的滤波器参数。
进一步地,所述编码信息还包括训练得到的所述目标插值滤波器的滤波器参数;或者,所述编码信息还包括滤波器参数差值,所述滤波器参数差值为训练得到的用于当前图像单元的目标插值滤波器的滤波器参数相对于训练得到的用于在先编码的图像单元的目标插值滤波器的滤波器参数。
可见,编码器可以对候选插值滤波器集合中的第二插值滤波器进行在线训练,以使插值滤波器可以根据当前编码的图像单元的内容进行实时调整,提高预测准确性。
可选地,所述图像单元包括图像帧、条带(slice)、视频序列子组、编码树单元(CTU)、编码单元(CU)或预测单元(PU)。
第五方面,本申请实施例还提供了一种视频图像解码方法,包括:
解码器从码流中解析出目标插值滤波器的指示信息;
获取当前解码图像块的运动信息,其中,所述运动信息指向分数像素位置;
基于所述当前解码图像块的运动信息对所述当前解码图像块执行预测过程,其中,所述预测过程包括:根据所述指示信息所指示的目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块;
基于所述当前解码图像块的预测块,重建所述当前解码图像块的重建块。
第六方面,本申请实施例还提供了一种视频图像解码方法,包括:
解码器从码流中解析出当前解码图像块的用于指示所述当前解码图像块的帧间预测模式的信息;
获取所述当前解码图像块的运动信息,其中,所述运动信息指向分数像素位置;
若所述当前图像块的帧间预测模式为非目标帧间预测模式,基于所述当前解码图像块的运动信息对所述当前解码图像块执行预测过程,其中,所述预测过程包括:根据从所述码流中解析出的目标插值滤波器的指示信息所指示的目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块;
基于所述当前解码图像块的预测块,对所述当前解码图像块进行重建。
可选地,若所述当前图像块的帧间预测模式是目标帧间预测模式,基于所述当前解码图像块的运动信息对所述当前解码图像块执行预测过程,其中,所述预测过程包括:确定用于所述当前解码图像块的目标插值滤波器;通过所述目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块。
应理解，若所述当前图像块的帧间预测模式是目标帧间预测模式，所述确定用于所述当前解码图像块的目标插值滤波器，具体包括：确定在先解码的图像块在解码过程中使用的插值滤波器为用于所述当前解码图像块的目标插值滤波器；或，确定所述用于所述当前解码图像块的目标插值滤波器为从所述码流中解析出的目标插值滤波器的指示信息所指示的目标插值滤波器。
可见，本申请实施例中，解码器在进行帧间预测过程中选择目标帧间模式的指示信息指示的插值滤波器进行分像素插值，以得到当前解码图像块的预测块，实现了解码器根据当前编码图像块的内容选择插值滤波器进行插值运算，使得得到的预测块预测准确性更高，减少码流，提高视频图像的压缩率。
结合第五方面或第六方面,在本申请实施例的一种可能的实现中,解码器获取当前解码图像块的运动信息可以包括但不限于以下三种实施方式:
第一实施方式：在非目标帧间预测模式（比如非合并模式）下，解码器可以从码流中解析出所述当前解码图像块的运动信息的索引；进而，基于所述当前解码图像块的运动信息的索引和所述当前解码图像块的候选运动信息列表确定所述当前解码图像块的运动信息。
第二实施方式：在非目标帧间预测模式（比如非合并模式）下，解码器可以从码流中解析出所述当前解码图像块的运动信息的索引和运动矢量差值；基于所述当前解码图像块的运动信息的索引和所述当前解码图像块的候选运动信息列表确定当前解码图像块的运动矢量预测值；进而，基于所述运动矢量预测值和所述运动矢量差值，得到所述当前解码图像块的运动矢量。
第三实施方式：在目标帧间预测模式（比如合并模式）下，若所述当前解码图像块的帧间预测模式为合并模式（merge mode），解码器获取在所述合并模式下合并到的在先解码的图像块的运动信息，即为当前解码图像块的运动信息。
结合第五方面或第六方面,在本申请实施例的一种可能的实现中,若所述目标滤波器为通过第一方面或第二方面所述的插值滤波器训练方法得到的第二插值滤波器,则:
所述目标插值滤波器的滤波器参数为预设的滤波器参数;或者,所述目标插值滤波器的滤波器参数为根据第一方面或第二方面所述的插值滤波器训练方法得到的滤波器参数。
可选地,若所述目标滤波器为通过第一方面或第二方面所述的插值滤波器训练方法得到第二插值滤波器,该方法还可以包括:
从码流中解析出用于当前解码的图像单元的目标插值滤波器的滤波器参数;
在根据目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块之前,该方法还可以包括:通过所述当前解码的图像单元的目标插值滤波器的滤波器参数配置所述目标插值滤波器。
可选地,若所述目标滤波器为通过第一方面或第二方面所述的插值滤波器训练方法得到第二插值滤波器,该方法还可以包括:
从码流中解析出滤波器参数差值，所述滤波器参数差值为用于当前解码的图像单元的目标插值滤波器的滤波器参数相对于用于在先解码的图像单元的目标插值滤波器的滤波器参数的差值；
根据所述在先解码的图像单元的目标插值滤波器的滤波器参数和所述滤波器参数差值得到所述当前解码的图像单元的目标插值滤波器的滤波器参数;
通过所述当前解码的图像单元的目标插值滤波器的滤波器参数配置所述目标插值滤波器。
可选地,所述图像单元包括图像帧、条带(slice)、视频序列子组、编码树单元(CTU)、编码单元(CU)或预测单元(PU)。
第七方面,本申请实施例还提供了一种插值滤波器的训练装置,包括用于实施第一方面的任意一种方法的若干个功能单元。举例来说,插值滤波器的训练装置可以包括:
标签数据获取模块,用于通过第一插值滤波器对样本图像在整数像素位置的像素进行插值,得到所述样本图像在第一分数像素位置的第一分像素图像;
插值模块,用于将所述样本图像输入到第二插值滤波器中,得到第二分像素图像;
参数确定模块,用于通过最小化用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数确定所述第二插值滤波器的滤波器参数。
第八方面,本申请实施例还提供了一种插值滤波器的训练装置,包括用于实施第二方面的任意一种方法的若干个功能单元。举例来说,插值滤波器的训练装置可以包括:
标签数据获取模块,用于通过第一插值滤波器对样本图像在整数像素位置的像素进行插值,得到所述样本图像在第一分数像素位置的第一分像素图像;
插值模块,用于将所述样本图像输入到第二插值滤波器中,得到第二分像素图像;
逆插值模块,用于将所述第二分像素图像经过翻转运算输入到第三插值滤波器中,得到第一图像,并将所述第一图像通过所述翻转运算的逆运算得到第二图像,其中,所述第二插值滤波器和所述第三插值滤波器共享滤波器参数;
参数确定模块,用于根据用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数和用于表示所述样本图像与所述第二图像的差值的第二函数确定所述滤波器参数。
第九方面,本申请实施例还提供了一种编码器,包括用于实施第三方面的任意一种方法的若干个功能单元。举例来说,编码器可以包括:
帧间预测单元,用于对所述当前编码图像块进行帧间预测,得到所述当前编码图像块的运动信息,其中,所述当前编码图像块的运动信息指向分数像素位置,所述帧间预测单元包括滤波器选择单元,用于从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器;
熵编码单元,基于所述当前编码图像块的帧间预测模式和所述当前编码图像块的运动信息对所述当前编码图像块进行编码,得到编码信息,将所述编码信息编入码流,其中,所述编码信息包括目标插值滤波器的指示信息;所述目标插值滤波器的指示信息用于指示通过所述目标插值滤波器进行分像素插值得到所述当前编码图像块对应的分数像素位置的参考块。
第十方面,本申请实施例还提供了一种编码器,包括用于实施第四方面的任意一种方法的若干个功能单元。举例来说,编码器可以包括:
帧间预测单元,用于对所述当前编码图像块进行帧间预测,得到所述当前编码图像块的运动信息,其中,所述当前编码图像块的运动信息指向分数像素位置,所述帧间预测单元包括滤波器选择单元,所述滤波器选择单元用于:从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器;
熵编码单元,用于基于所述当前编码图像块的帧间预测模式和所述当前编码图像块的 运动信息对所述当前编码图像块进行编码,得到编码信息,将所述编码信息编入到码流,其中,若所述当前编码图像块的帧间预测模式是目标帧间预测模式,所述编码信息不包括所述目标插值滤波器的指示信息;若所述当前编码图像块的帧间预测模式为非目标帧间预测模式,所述编码信息包括所述目标插值滤波器的指示信息,所述目标插值滤波器的指示信息用于指示所述当前编码图像块采用所述目标插值滤波器进行分像素插值。
第十一方面，本申请实施例还提供了一种解码器，包括用于实施第五方面的任意一种方法的若干个功能单元。举例来说，解码器可以包括：
熵解码单元,用于从码流中解析出目标插值滤波器的指示信息;以及,获取当前解码图像块的运动信息,其中,所述运动信息指向分数像素位置;
帧间预测单元,用于基于所述当前解码图像块的运动信息对所述当前解码图像块执行预测过程,其中,所述预测过程包括:根据所述指示信息所指示的目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块。
重构单元,用于基于所述当前解码图像块的预测块,重建所述当前解码图像块的重建块。
第十二方面，本申请实施例还提供了一种解码器，包括用于实施第六方面的任意一种方法的若干个功能单元。举例来说，解码器可以包括：
熵解码单元,用于从码流中解析出当前解码图像块的用于指示所述当前解码图像块的帧间预测模式的信息;
帧间预测单元,用于获取所述当前解码图像块的运动信息,其中,所述运动信息指向分数像素位置;若所述当前图像块的帧间预测模式为非目标帧间预测模式,基于所述当前解码图像块的运动信息对所述当前解码图像块执行预测过程,其中,所述预测过程包括:根据从所述码流中解析出的目标插值滤波器的指示信息所指示的目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块;
重构单元,用于基于所述当前解码图像块的预测块,对所述当前解码图像块进行重建。
第十三方面,本申请实施例还提供了一种插值滤波器的训练装置,包括存储器和处理器;所述存储器用于存储程序代码,所述处理器用于调用所述程序代码,执行如第一方面或第二方面所述的任意一种插值滤波器训练方法的部分或全部步骤。
例如,执行:通过第一插值滤波器对样本图像在整数像素位置的像素进行插值,得到所述样本图像在第一分数像素位置的第一分像素图像;将所述样本图像输入到第二插值滤波器中,得到第二分像素图像;
进而,通过最小化用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数确定所述第二插值滤波器的滤波器参数。
又例如,执行:通过第一插值滤波器对样本图像在整数像素位置的像素进行插值,得到所述样本图像在第一分数像素位置的第一分像素图像;将所述样本图像输入到第二插值滤波器中,得到第二分像素图像;将所述第二分像素图像经过翻转运算输入到第三插值滤波器中,得到第一图像,并将所述第一图像通过所述翻转运算的逆运算得到第二图像,其中,所述第二插值滤波器和所述第三插值滤波器共享滤波器参数;进而,根据用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数和用于表示所述样本图像与所 述第二图像的差值的第二函数确定所述滤波器参数。
可选地,处理器执行所述根据用于表示所述第一分像素图像与所述第二分像素图像的差值的第一损失函数和用于表示所述样本图像与所述第二图像的差值的第二函数确定所述滤波器参数的可以包括但不限于以下两种实现方式:
第一种实现方式:通过最小化第三函数确定所述滤波器参数,其中,所述第三函数为用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数和用于表示所述样本图像与所述第二图像的差值的第二函数的加权求和。
第二种实现方式:通过交替最小化用于表示所述第一分像素图像与所述第二分像素图像的差值的第一损失函数和用于表示所述样本图像与所述第二图像的差值的第二函数确定所述滤波器参数。
应理解,第一方面和第二方面所述的插值滤波器的训练装置可以是编码设备或压缩设备,上述设备可以是计算机、服务器或终端(例如,手机、平板电脑等)等具有数据处理功能的设备。
第十四方面,本申请实施例还提供了一种编码装置,包括存储器和处理器;所述存储器用于存储程序代码;所述处理器用于调用所述程序代码,以执行如第三方面或第四方面所述的任意一种视频图像编码方法的部分或全部步骤。
例如,执行:对所述当前编码图像块进行帧间预测,得到所述当前编码图像块的运动信息,其中,所述当前编码图像块的运动信息指向分数像素位置,所述帧间预测过程包括:从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器;
基于所述当前编码图像块的帧间预测模式和所述当前编码图像块的运动信息对所述当前编码图像块进行编码,得到编码信息,将所述编码信息编入码流,其中,所述编码信息包括目标插值滤波器的指示信息;所述目标插值滤波器的指示信息用于指示通过所述目标插值滤波器进行分像素插值得到所述当前编码图像块对应的分数像素位置的参考块。
又例如,执行:对所述当前编码图像块进行帧间预测,得到所述当前编码图像块的运动信息,其中,所述当前编码图像块的运动信息指向分数像素位置,所述帧间预测过程包括:从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器;
基于所述当前编码图像块的帧间预测模式和所述当前编码图像块的运动信息对所述当前编码图像块进行编码,得到编码信息,将所述编码信息编入到码流,其中,若所述当前编码图像块的帧间预测模式是目标帧间预测模式,所述编码信息不包括所述目标插值滤波器的指示信息;若所述当前编码图像块的帧间预测模式为非目标帧间预测模式,所述编码信息包括所述目标插值滤波器的指示信息,所述目标插值滤波器的指示信息用于指示所述当前编码图像块采用所述目标插值滤波器进行分像素插值。
应理解,第十四方面所述的编码器还可以是包括该编码器的编码设备,该编码设备可以是计算机、服务器或终端(例如,手机、平板电脑等)等具有数据处理功能的设备。
结合第十四方面,在本申请实施例的一种可能的实现中,处理器从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器的一种实现方式可以是:编码器根据率失真代价准则从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器。
结合第十四方面,在本申请实施例的一种可能的实现中,处理器对所述当前编码图像块进行帧间预测,得到所述当前编码图像块的运动信息的一种实现方式可以是:
确定与所述当前编码图像块最优匹配的整像素参考图像块;
通过候选插值滤波器集合中每一个插值滤波器对所述整像素参考图像块进行分像素插值,得到N个分像素参考图像块,N为正整数;
在所述整像素参考图像块和所述N个分像素参考图像块中确定与所述当前编码图像块最优匹配的预测块;
基于所述预测块确定所述运动信息,其中,插值得到所述预测块的插值滤波器即为目标插值滤波器。
结合第十四方面,在本申请实施例的一种可能的实现中,所述候选插值滤波器集合包括通过第一方面或第二方面所述的任一种插值滤波器的训练方法得到的第二插值滤波器。
可选地,若所述目标滤波器为通过第一方面或第二方面所述的任一种插值滤波器的训练方法得到的第二插值滤波器,则:所述目标插值滤波器的滤波器参数为预设的滤波器参数;或者,所述目标插值滤波器的滤波器参数为根据权利要求1-4所述的插值滤波器的训练方法得到的滤波器参数。
进一步地,所述编码信息还包括训练得到的所述目标插值滤波器的滤波器参数;或者,所述编码信息还包括滤波器参数差值,所述滤波器参数差值为训练得到的用于当前图像单元的目标插值滤波器的滤波器参数相对于训练得到的用于在先编码的图像单元的目标插值滤波器的滤波器参数。
可见,处理器可以对候选插值滤波器集合中的第二插值滤波器进行在线训练,以使插值滤波器可以根据当前编码的图像单元的内容进行实时调整,提高预测准确性。
可选地,所述图像单元包括图像帧、条带(slice)、视频序列子组、编码树单元(CTU)、编码单元(CU)或预测单元(PU)。
第十五方面,本申请实施例还提供了一种解码装置,包括存储器和处理器;所述存储器用于存储程序代码;所述处理器用于调用所述程序代码,以执行如第五方面或第六方面所述的任意一种视频图像解码方法的部分或全部步骤。
例如,执行:
从码流中解析出目标插值滤波器的指示信息;
获取当前解码图像块的运动信息,其中,所述运动信息指向分数像素位置;
基于所述当前解码图像块的运动信息对所述当前解码图像块执行预测过程,其中,所述预测过程包括:根据所述指示信息所指示的目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块。
基于所述当前解码图像块的预测块,重建所述当前解码图像块的重建块。
又例如,执行:
从码流中解析出当前解码图像块的用于指示所述当前解码图像块的帧间预测模式的信息;
获取所述当前解码图像块的运动信息,其中,所述运动信息指向分数像素位置;
若所述当前图像块的帧间预测模式为非目标帧间预测模式,基于所述当前解码图像块 的运动信息对所述当前解码图像块执行预测过程,其中,所述预测过程包括:根据从所述码流中解析出的目标插值滤波器的指示信息所指示的目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块;
基于所述当前解码图像块的预测块,对所述当前解码图像块进行重建。
可选地,若所述当前图像块的帧间预测模式是目标帧间预测模式,基于所述当前解码图像块的运动信息对所述当前解码图像块执行预测过程,其中,所述预测过程包括:确定用于所述当前解码图像块的目标插值滤波器;通过所述目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块。
应理解,若所述当前图像块的帧间预测模式是目标帧间预测模式,处理器确定用于所述当前解码图像块的目标插值滤波器,具体包括:确定在所述先解码的图像块在解码过程中使用的插值滤波器为所述用于所述当前解码图像块的目标插值滤波器;或,确定所述用于所述当前解码图像块的目标插值滤波器为从所述码流中解析出的目标插值滤波器的指示信息所指示的目标插值滤波器。
结合第十五方面,在本申请实施例的一种可能的实现中,处理器获取当前解码图像块的运动信息可以包括但不限于以下三种实施方式:
第一实施方式：在非目标帧间预测模式（比如非合并模式）下，处理器可以从码流中解析出所述当前解码图像块的运动信息的索引；进而，基于所述当前解码图像块的运动信息的索引和所述当前解码图像块的候选运动信息列表确定所述当前解码图像块的运动信息。
第二实施方式：在非目标帧间预测模式（比如非合并模式）下，处理器可以从码流中解析出所述当前解码图像块的运动信息的索引和运动矢量差值；基于所述当前解码图像块的运动信息的索引和所述当前解码图像块的候选运动信息列表确定当前解码图像块的运动矢量预测值；进而，基于所述运动矢量预测值和所述运动矢量差值，得到所述当前解码图像块的运动矢量。
第三实施方式：在目标帧间预测模式（比如合并模式）下，若所述当前解码图像块的帧间预测模式为合并模式（merge mode），处理器获取在所述合并模式下合并到的在先解码的图像块的运动信息，即为当前解码图像块的运动信息。
结合第十五方面,在本申请实施例的一种可能的实现中,若所述目标滤波器为通过第一方面或第二方面所述的插值滤波器训练方法得到的第二插值滤波器,则:
所述目标插值滤波器的滤波器参数为预设的滤波器参数;或者,所述目标插值滤波器的滤波器参数为根据第一方面或第二方面所述的插值滤波器训练方法得到的滤波器参数。
可选地,若所述目标滤波器为通过第一方面或第二方面所述的插值滤波器训练方法得到第二插值滤波器,该处理器还可以执行:
从码流中解析出用于当前解码的图像单元的目标插值滤波器的滤波器参数;
在根据目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块之前,该方法还可以包括:通过所述当前解码的图像单元的目标插值滤波器的滤波器参数配置所述目标插值滤波器。
可选地,若所述目标滤波器为通过第一方面或第二方面所述的插值滤波器训练方法得到第二插值滤波器,该处理器还可以执行:
从码流中解析出滤波器参数差值，所述滤波器参数差值为用于当前解码的图像单元的目标插值滤波器的滤波器参数相对于用于在先解码的图像单元的目标插值滤波器的滤波器参数的差值；
根据所述在先解码的图像单元的目标插值滤波器的滤波器参数和所述滤波器参数差值得到所述当前解码的图像单元的目标插值滤波器的滤波器参数;
通过所述当前解码的图像单元的目标插值滤波器的滤波器参数配置所述目标插值滤波器。
可选地,所述图像单元包括图像帧、条带(slice)、视频序列子组、编码树单元(CTU)、编码单元(CU)或预测单元(PU)。
第十六方面,本申请实施例还提供了一种计算机可读存储介质,包括程序代码,所述程序代码在计算机上运行时,使得所述计算机执行如第一方面或第二方面所述的任意一种插值滤波器训练方法的部分或全部步骤。
第十七方面,本申请实施例提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行第一方面或第二方面所述的任意一种插值滤波器训练方法的部分或全部步骤。
第十八方面,本申请实施例还提供了一种计算机可读存储介质,包括程序代码,所述程序代码在计算机上运行时,使得所述计算机执行如第三方面或第四方面所述的任意一种视频图像编码方法的部分或全部步骤。
第十九方面,本申请实施例提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如第三方面或第四方面所述的任意一种视频图像编码方法的部分或全部步骤。
第二十方面,本申请实施例还提供了一种计算机可读存储介质,包括程序代码,所述程序代码在计算机上运行时,使得所述计算机执行如第五方面或第六方面所述的任意一种视频图像解码方法的部分或全部步骤。
第二十一方面,本申请实施例提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如第五方面或第六方面所述的任意一种视频图像解码方法的部分或全部步骤。
附图说明
为了更清楚地说明本申请实施例或背景技术中的技术方案,下面将对本申请实施例或背景技术中所需要使用的附图进行说明。
图1为本申请实施例中一种视频编码及解码系统的示意性框图;
图2为本申请实施例中一种编码器的示意性框图;
图3为本申请实施例中一种解码器的示意性框图;
图4为本申请实施例中一种整像素和分像素的位置示意性说明图;
图5为本申请实施例中分像素插值可逆性原理的示意性说明图;
图6A为本申请实施例中一种插值滤波器的训练方法的示意性流程图;
图6B为本申请实施例中一种插值滤波器的训练训练流程的示意性说明图;
图6C为本申请实施例中另一种插值滤波器的训练方法的示意性流程图;
图6D为本申请实施例中另一种插值滤波器的训练训练流程的示意性说明图;
图7为本申请实施例中一种视频图像编码方法的示意性流程图;
图8为本申请实施例中另一种视频图像编码方法的示意性流程图;
图9为本申请实施例中一种视频图像解码方法的示意性流程图;
图10为本申请实施例中另一种视频图像解码方法的示意性流程图;
图11为本申请实施例中又一种视频图像解码方法的示意性流程图;
图12是本发明实施例提供的一种插值滤波器训练装置的示意性框图;
图13是本发明实施例提供的一种插值滤波器训练装置的示意性框图;
图14是本发明实施例提供的另一种插值滤波器训练装置的示意性框图;
图15为本申请实施例中另一种编码器的示意性框图;
图16为本申请实施例中另一种解码器的示意性框图;
图17为本申请实施例中一种编码设备或解码设备的示意性框图。
具体实施方式
以下描述中,参考形成本公开一部分并以说明之方式示出本发明实施例的具体方面或可使用本发明实施例的具体方面的附图。应理解,本发明实施例可在其它方面中使用,并可包括附图中未描绘的结构或逻辑变化。因此,以下详细描述不应以限制性的意义来理解,且本发明的范围由所附权利要求书界定。
例如,应理解,结合所描述方法的揭示内容可以同样适用于用于执行所述方法的对应设备或系统,且反之亦然。例如,如果描述一个或多个具体方法步骤,则对应的设备可以包含如功能单元等一个或多个单元,来执行所描述的一个或多个方法步骤(例如,一个单元执行一个或多个步骤,或多个单元,其中每个都执行多个步骤中的一个或多个),即使附图中未明确描述或说明这种一个或多个单元。另一方面,例如,如果基于如功能单元等一个或多个单元描述具体装置,则对应的方法可以包含一个步骤来执行一个或多个单元的功能性(例如,一个步骤执行一个或多个单元的功能性,或多个步骤,其中每个执行多个单元中一个或多个单元的功能性),即使附图中未明确描述或说明这种一个或多个步骤。进一步,应理解的是,除非另外明确提出,本文中所描述的各示例性实施例和/或方面的特征可以相互组合。
视频编码通常是指处理形成视频或视频序列的图片序列。在视频编码领域,术语“图片(picture)”、“帧(frame)”或“图像(image)”可以用作同义词。本申请(或本公开)中使用的视频编码表示视频编码或视频解码。视频编码在源侧执行,通常包括处理(例如,通过压缩)原始视频图片以减少表示该视频图片所需的数据量(从而更高效地存储和/或传输)。视频解码在目的地侧执行,通常包括相对于编码器作逆处理,以重构视频图片。实施例涉及的视频图片(或总称为图片,下文将进行解释)“编码”应理解为涉及视频序列的“编码”或“解码”。编码部分和解码部分的组合也称为编解码(编码和解码)。
无损视频编码情况下,可以重构原始视频图片,即经重构视频图片具有与原始视频图片相同的质量(假设存储或传输期间没有传输损耗或其它数据丢失)。在有损视频编码情况下,通过例如量化执行进一步压缩,来减少表示视频图片所需的数据量,而解码器侧无法 完全重构视频图片,即经重构视频图片的质量相比原始视频图片的质量较低或较差。
H.261的几个视频编码标准属于“有损混合型视频编解码”(即,将样本域中的空间和时间预测与变换域中用于应用量化的2D变换编码结合)。视频序列的每个图片通常分割成不重叠的块集合,通常在块层级上进行编码。换句话说,编码器侧通常在块(视频块)层级处理亦即编码视频,例如,通过空间(图片内)预测和时间(图片间)预测来产生预测块,从当前块(当前处理或待处理的块)减去预测块以获取残差块,在变换域变换残差块并量化残差块,以减少待传输(压缩)的数据量,而解码器侧将相对于编码器的逆处理部分应用于经编码或经压缩块,以重构用于表示的当前块。另外,编码器复制解码器处理循环,使得编码器和解码器生成相同的预测(例如帧内预测和帧间预测)和/或重构,用于处理亦即编码后续块。
如本文中所用,术语“块”、“图像块”“图片块”可以用作同义词,可以为图片或帧的一部分。为便于描述,参考由ITU-T视频编码专家组(Video Coding Experts Group,VCEG)和ISO/IEC运动图像专家组(Motion Picture Experts Group,MPEG)的视频编码联合工作组(Joint Collaboration Team on Video Coding,JCT-VC)开发的高效视频编码(High-Efficiency Video Coding,HEVC)或者ITU-T H.266/下一代视频编码(Versatile video coding,VVC)的参考软件描述本发明实施例。本领域普通技术人员理解本发明实施例不限于HEVC或VVC。可以指CU、PU和TU。在HEVC中,通过使用表示为编码树的四叉树结构将CTU拆分为多个CU。在CU层级处作出是否使用图片间(时间)或图片内(空间)预测对图片区域进行编码的决策。每个CU可以根据PU拆分类型进一步拆分为一个、两个或四个PU。一个PU内应用相同的预测过程,并在PU基础上将相关信息传输到解码器。在通过基于PU拆分类型应用预测过程获取残差块之后,可以根据类似于用于CU的编码树的其它四叉树结构将CU分割成变换单元(transform unit,TU)。在视频压缩技术最新的发展中,使用四叉树和二叉树(Quad-tree and binary tree,QTBT)分割帧来分割编码块。在QTBT块结构中,CU可以为正方形或矩形形状。作为一种示例,编码树单元(coding tree unit,CTU)首先由四叉树结构分割。四叉树叶节点进一步由二进制树结构分割。二进制树叶节点称为编码单元(coding unit,CU),所述分段用于预测和变换处理,无需其它任何分割。这表示CU、PU和TU在QTBT编码块结构中的块大小相同。同时,还提出与QTBT块结构一起使用多重分割,例如三叉树分割。
如本文中所用,术语“编码图像块”是应用在编码端的图像块,同理,“解码图像块”是应用在解码端的图像块。“当前编码图像块”也可以表示为“当前待编码图像块”或“当前编码块”等,“当前解码图像块”也可以表示为“当前待解码图像块”或“当前解码块”等。“参考块”也可以表示为“参考图像块”;“预测块”可以表示为“预测图像块”,在一些场景中也可以表示为“最优匹配块”或“匹配块”等。
如本文中所用,“第一插值滤波器”为现有技术中提供的插值滤波器,可以是固定系数的插值滤波器,例如,双线性插值滤波器、双三次插值滤波器等;也可以是内容自适应插值滤波器或其他种类的插值滤波器。在H.264/AVC中,采用6抽头的有限相应滤波器来产生半像素样值,采用简单的双线性插值来产生四分之一像素。HEVC中的插值滤波器相比于H.264/AVC做了很多改进,使用8抽头滤波器来产生半像素,而四分之一像素则采用7 抽头的插值滤波器。为了应对自然视频的非平稳特性,研究人员提出了内容自适应插值滤波器。典型的自适应插值滤波器在编码端根据运动补偿预测的误差估计滤波器系数,然后将滤波器系数编码写入码流。为了减少插值滤波器的复杂度,可分离的自适应插值滤波器被提出,可以实现在基本保持编码性能的情况下明显降低复杂度。
如本文中所用,“第二插值滤波器”、“第三插值滤波器”为基于本申请实施例提供的插值滤波器训练方法得到的插值滤波器,具体可参见插值滤波器训练方法实施例中的相关描述。可以理解,该第二插值滤波器和/或第三插值滤波器可以是支持向量机(support vector machine,SVM)、神经网络(neural network,NN)、卷积神经网络(convolutional neural network,CNN)或其他形式,对此,本申请实施例不作限定。
如本文中所用,“目标插值滤波器”为候选滤波器集合中被选定的插值滤波器。本文中,“候选插值滤波器集合”可以包括一个或多个插值滤波器,该多个插值滤波器的种类不同,可以包括但不限于本文中的第二插值滤波器。在本申请的另一种实现中,候选滤波器集合包括的多个插值滤波器可以不包括第二插值滤波器。
运动信息可以包括运动矢量,运动矢量是帧间预测过程中的一个重要参数,其表示先前已编码图像块相对于该当前编码图像块的空间位移。可以使用运动估算的方法,诸如运动搜索来获取运动矢量。初期的帧间预测技术,将表示运动矢量的位包括在编码的位流中,以允许解码器再现预测块,进而得到重建块。为了进一步的改善编码效率,后来又提出使用参考运动矢量差分地编码运动矢量,即取代编码运动矢量整体,而仅仅编码运动矢量和参考运动矢量之间的差值。在有些情况下,参考运动矢量可以是从在视频流中先前使用的运动矢量中选择出来的,选择先前使用的运动矢量编码当前的运动矢量可以进一步减少包括在编码的视频位流中的位数。
以下基于图1到3描述编码器100、解码器200和编码系统300的实施例。
图1为绘示示例性编码系统300的概念性或示意性框图,例如,可以利用本申请(本公开)技术的视频编码系统300。视频编码系统300的编码器100(例如,视频编码器100)和解码器200(例如,视频解码器200)表示可用于根据本申请中描述的各种实例执行用于视频图像编码或解码的技术的设备实例。如图1中所示,编码系统300包括源设备310,用于向例如解码经编码数据330的目的地设备320提供经编码数据330,例如,经编码图片330。
源设备310包括编码器100,另外亦即可选地,可以包括图片源312,例如图片预处理单元314的预处理单元314,以及通信接口或通信单元318。
图片源312可以包括或可以为任何类别的图片捕获设备,用于例如捕获现实世界图片,和/或任何类别的图片或评论(对于屏幕内容编码,屏幕上的一些文字也认为是待编码的图片或图像的一部分)生成设备,例如,用于生成计算机动画图片的计算机图形处理器,或用于获取和/或提供现实世界图片、计算机动画图片(例如,屏幕内容、虚拟现实(virtual reality,VR)图片)的任何类别设备,和/或其任何组合(例如,实景(augmented reality,AR)图片)。
(数字)图片为或者可以视为具有亮度值的采样点的二维阵列或矩阵，包含m*n个采样(sample)(每个采样的位置称为采样位置，每个采样的数值称为采样值)。阵列中的采样点也可以称为像素(pixel)(像素(picture element)的简称)或像素(pel)。阵列或图片在水平和垂直方向(或轴线)上的采样点数目定义图片的尺寸和/或分辨率。为了表示颜色，通常采用三个颜色分量，即图片可以表示为或包含三个采样阵列。RGB格式或颜色空间中，图片包括对应的红色、绿色及蓝色采样阵列。但是，在视频编码中，每个像素通常以亮度/色度格式或颜色空间表示，例如，YCbCr，包括Y指示的亮度分量(有时也可以用L指示)以及Cb和Cr指示的两个色度分量。亮度(简写为luma)分量Y表示亮度或灰度水平强度(例如，在灰度等级图片中两者相同)，而两个色度(简写为chroma)分量Cb和Cr表示色度或颜色信息分量。相应地，YCbCr格式的图片包括亮度采样值(Y)的亮度采样阵列，和色度值(Cb和Cr)的两个色度采样阵列。RGB格式的图片可以转换或变换为YCbCr格式，反之亦然，该过程也称为色彩变换或转换。如果图片是黑白的，该图片可以只包括亮度采样阵列。
图片源312(例如,视频源312)可以为,例如用于捕获图片的相机,例如图片存储器的存储器,包括或存储先前捕获或产生的图片,和/或获取或接收图片的任何类别的(内部或外部)接口。相机可以为,例如,本地的或集成在源设备中的集成相机,存储器可为本地的或例如集成在源设备中的集成存储器。接口可以为,例如,从外部视频源接收图片的外部接口,外部视频源例如为外部图片捕获设备,比如相机、外部存储器或外部图片生成设备,外部图片生成设备例如为外部计算机图形处理器、计算机或服务器。接口可以为根据任何专有或标准化接口协议的任何类别的接口,例如有线或无线接口、光接口。获取图片数据313的接口可以是与通信接口318相同的接口或是通信接口318的一部分。
区别于预处理单元314和预处理单元314执行的处理,图片或图片数据313(例如,视频数据312)也可以称为原始图片或原始图片数据313。
预处理单元314用于接收(原始)图片数据313并对图片数据313执行预处理,以获取经预处理的图片315或经预处理的图片数据315。例如,预处理单元314执行的预处理可以包括整修、色彩格式转换(例如,从RGB转换为YCbCr)、调色或去噪。可以理解,预处理单元314可以是可选组件。
编码器100(例如,视频编码器100)用于接收经预处理的图片数据315并提供经编码图片数据171(下文将进一步描述细节,例如,基于图2、图7或图8)。在一个实例中,编码器100可以用于执行视频图像编码方法,在另一实施例中,编码器100还可以用于插值滤波器的训练。
源设备310的通信接口318可以用于接收经编码图片数据171并传输至其它设备,例如,目的地设备320或任何其它设备,以用于存储或直接重构,或用于在对应地存储经编码数据330和/或传输经编码数据330至其它设备之前处理经编码图片数据171,其它设备例如为目的地设备320或任何其它用于解码或存储的设备。
目的地设备320包括解码器200(例如,视频解码器200),另外亦即可选地,可以包括通信接口或通信单元322、后处理单元326和显示设备328。
目的地设备320的通信接口322用于例如,直接从源设备310或任何其它源接收经编码图片数据171或经编码数据330,任何其它源例如为存储设备,存储设备例如为经编码图片数据存储设备。
通信接口318和通信接口322可以用于藉由源设备310和目的地设备320之间的直接通信链路或藉由任何类别的网络传输或接收经编码图片数据171或经编码数据330,直接通信链路例如为直接有线或无线连接,任何类别的网络例如为有线或无线网络或其任何组合,或任何类别的私网和公网,或其任何组合。
通信接口318可以例如用于将经编码图片数据171封装成合适的格式,例如包,以在通信链路或通信网络上传输。
形成通信接口318的对应部分的通信接口322可以例如用于解封装经编码数据330,以获取经编码图片数据171。
通信接口318和通信接口322都可以配置为单向通信接口,如图1中用于经编码图片数据330的从源设备310指向目的地设备320的箭头所指示,或配置为双向通信接口,以及可以用于例如发送和接收消息来建立连接、确认和交换任何其它与通信链路和/或例如经编码图片数据传输的数据传输有关的信息。
解码器200用于接收经编码图片数据171并提供经解码图片数据231或经解码图片231(下文将进一步描述细节,例如,基于图3、图9、图10或图11)。在一个实例中,解码器200可以用于执行执行下文描述的视频图像解码方法。
目的地设备320的后处理器326用于后处理经解码图片数据231(也称为经重构图片数据),例如,经解码图片131,以获取经后处理图片数据327,例如,经后处理图片327。后处理单元326执行的后处理可以包括,例如,色彩格式转换(例如,从YCbCr转换为RGB)、调色、整修或重采样,或任何其它处理,用于例如准备经解码图片数据231以由显示设备328显示。
目的地设备320的显示设备328用于接收经后处理图片数据327以向例如用户或观看者显示图片。显示设备328可以为或可以包括任何类别的用于呈现经重构图片的显示器,例如,集成的或外部的显示器或监视器。例如,显示器可以包括液晶显示器(liquid crystal display,LCD)、有机发光二极管(organic light emitting diode,OLED)显示器、等离子显示器、投影仪、微LED显示器、硅基液晶(liquid crystal on silicon,LCoS)、数字光处理器(digital light processor,DLP)或任何类别的其它显示器。
虽然图1将源设备310和目的地设备320绘示为单独的设备,但设备实施例也可以同时包括源设备310和目的地设备320或同时包括两者的功能性,即源设备310或对应的功能性以及目的地设备320或对应的功能性。在此类实施例中,可以使用相同硬件和/或软件,或使用单独的硬件和/或软件,或其任何组合来实施源设备310或对应的功能性以及目的地设备320或对应的功能性。
本领域技术人员基于描述明显可知,不同单元的功能性或图1所示的源设备310和/或目的地设备320的功能性的存在和(准确)划分可能根据实际设备和应用有所不同。
编码器100(例如,视频编码器100)和解码器200(例如,视频解码器200)都可以实施为各种合适电路中的任一个,例如,一个或多个微处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application-specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)、离散逻辑、硬件或其任何组合。如果部分地以软件实施所述技术,则设备可将软件的指令存储于合适的非暂时性计算机可读存储 介质中,且可使用一或多个处理器以硬件执行指令从而执行本公开的技术。前述内容(包含硬件、软件、硬件与软件的组合等)中的任一者可视为一或多个处理器。视频编码器100和视频解码器200中的每一个可以包含在一或多个编码器或解码器中,所述编码器或解码器中的任一个可以集成为对应设备中的组合编码器/解码器(编解码器)的一部分。
源设备310可称为视频编码设备或视频编码装置。目的地设备320可称为视频解码设备或视频解码装置。源设备310以及目的地设备320可以是视频编码设备或视频编码装置的实例。
源设备310和目的地设备320可以包括各种设备中的任一个,包含任何类别的手持或静止设备,例如,笔记本或膝上型计算机、移动电话、智能电话、平板或平板计算机、摄像机、台式计算机、机顶盒、电视、显示设备、数字媒体播放器、视频游戏控制台、视频流式传输设备(例如内容服务服务器或内容分发服务器)、广播接收器设备、广播发射器设备等,并可以不使用或使用任何类别的操作系统。
在一些情况下,源设备310和目的地设备320可以经装备以用于无线通信。因此,源设备310和目的地设备320可以为无线通信设备。
在一些情况下,图1中所示视频编码系统300仅为示例,本申请的技术可以适用于不必包含编码和解码设备之间的任何数据通信的视频编码设置(例如,视频编码或视频解码)。在其它实例中,数据可从本地存储器检索、在网络上流式传输等。视频编码设备可以对数据进行编码并且将数据存储到存储器,和/或视频解码设备可以从存储器检索数据并且对数据进行解码。在一些实例中,由并不彼此通信而是仅编码数据到存储器和/或从存储器检索数据且解码数据的设备执行编码和解码。
应理解,对于以上参考视频编码器100所描述的实例中的每一个,视频解码器200可以用于执行相反过程。关于信令语法元素,视频解码器200可以用于接收并解析这种语法元素,相应地解码相关视频数据。在一些例子中,视频编码器100可以将一个或多个定义目标滤波器的指示信息、插值滤波器的参数信息等语法元素熵编码成经编码视频比特流(也称码流)。在此类实例中,视频解码器200可以解析这种语法元素,并相应地解码相关视频数据。
编码器&编码方法
图2示出用于实现本申请(公开)技术的视频编码器100的实例的示意性/概念性框图。在图2的实例中,视频编码器100包括残差计算单元104、变换处理单元106、量化单元108、逆量化单元110、逆变换处理单元112、重构单元114、缓冲器116、环路滤波器单元120、经解码图片缓冲器(decoded picture buffer,DPB)130、预测处理单元160和熵编码单元170。预测处理单元160可以包含帧间预测单元144、帧内预测单元154和模式选择单元162。帧间预测单元144可以包含运动估计单元和运动补偿单元(未图示)。图2所示的视频编码器100也可以称为混合型视频编码器或根据混合型视频编解码器的视频编码器。
例如,残差计算单元104、变换处理单元106、量化单元108、预测处理单元160和熵编码单元170形成编码器100的前向信号路径,而例如逆量化单元110、逆变换处理单元112、重构单元114、缓冲器116、环路滤波器120、经解码图片缓冲器(decoded picture buffer, DPB)130、预测处理单元160形成编码器的后向信号路径,其中编码器的后向信号路径对应于解码器的信号路径(参见图3中的解码器200)。
编码器100通过例如输入102,接收图片101或图片101的块103,例如,形成视频或视频序列的图片序列中的图片。图片块103也可以称为当前图片块或待编码图片块,图片101可以称为当前图片或待编码图片(尤其是在视频编码中将当前图片与其它图片区分开时,其它图片例如同一视频序列亦即也包括当前图片的视频序列中的先前经编码和/或经解码图片)。
编码器100的实施例可以包括分割单元(图2中未绘示),用于将图片101分割成多个例如块103的块,通常分割成多个不重叠的块。分割单元可以用于对视频序列中所有图片使用相同的块大小以及定义块大小的对应栅格,或用于在图片或子集或图片群组之间更改块大小,并将每个图片分割成对应的块。
在一个实例中,视频编码器100的预测处理单元160可以用于执行上述分割技术的任何组合。
如图片101,块103也是或可以视为具有亮度值(采样值)的采样点的二维阵列或矩阵,虽然其尺寸比图片101小。换句话说,块103可以包括,例如,一个采样阵列(例如黑白图片101情况下的亮度阵列)或三个采样阵列(例如,彩色图片情况下的一个亮度阵列和两个色度阵列)或依据所应用的色彩格式的任何其它数目和/或类别的阵列。块103的水平和垂直方向(或轴线)上采样点的数目定义块103的尺寸。
如图2所示的编码器100用于逐块编码图片101,例如,对每个块103执行编码和预测。
残差计算单元104用于基于图片块103和预测块165(下文提供预测块165的其它细节)计算残差块105,例如,通过逐样本(逐像素)将图片块103的样本值减去预测块165的样本值,以在样本域中获取残差块105。
变换处理单元106用于在残差块105的样本值上应用例如离散余弦变换(discrete cosine transform,DCT)或离散正弦变换(discrete sine transform,DST)的变换,以在变换域中获取变换系数107。变换系数107也可以称为变换残差系数,并在变换域中表示残差块105。
变换处理单元106可以用于应用DCT/DST的整数近似值,例如为HEVC/H.265指定的变换。与正交DCT变换相比,这种整数近似值通常由某一因子按比例缩放。为了维持经正变换和逆变换处理的残差块的范数,应用额外比例缩放因子作为变换过程的一部分。比例缩放因子通常是基于某些约束条件选择的,例如,比例缩放因子是用于移位运算的2的幂、变换系数的位深度、准确性和实施成本之间的权衡等。例如,在解码器200侧通过例如逆变换处理单元212为逆变换(以及在编码器100侧通过例如逆变换处理单元112为对应逆变换)指定具体比例缩放因子,以及相应地,可以在编码器100侧通过变换处理单元106为正变换指定对应比例缩放因子。
量化单元108用于例如通过应用标量量化或向量量化来量化变换系数107,以获取经量化变换系数109。经量化变换系数109也可以称为经量化残差系数109。量化过程可以减少与部分或全部变换系数107有关的位深度。例如,可在量化期间将n位变换系数向下舍入到m位变换系数,其中n大于m。可通过调整量化参数(quantization parameter,QP) 修改量化程度。例如,对于标量量化,可以应用不同的标度来实现较细或较粗的量化。较小量化步长对应较细量化,而较大量化步长对应较粗量化。可以通过量化参数(quantization parameter,QP)指示合适的量化步长。例如,量化参数可以为合适的量化步长的预定义集合的索引。例如,较小的量化参数可以对应精细量化(较小量化步长),较大量化参数可以对应粗糙量化(较大量化步长),反之亦然。量化可以包含除以量化步长以及例如通过逆量化110执行的对应的量化或逆量化,或者可以包含乘以量化步长。根据例如HEVC的一些标准的实施例可以使用量化参数来确定量化步长。一般而言,可以基于量化参数使用包含除法的等式的定点近似来计算量化步长。可以引入额外比例缩放因子来进行量化和反量化,以恢复可能由于在用于量化步长和量化参数的等式的定点近似中使用的标度而修改的残差块的范数。在一个实例实施方式中,可以合并逆变换和反量化的标度。或者,可以使用自定义量化表并在例如比特流中将其从编码器通过信号发送到解码器。量化是有损操作,其中量化步长越大,损耗越大。
逆量化单元110用于在经量化系数上应用量化单元108的逆量化,以获取经反量化系数111,例如,基于或使用与量化单元108相同的量化步长,应用量化单元108应用的量化方案的逆量化方案。经反量化系数111也可以称为经反量化残差系数111,对应于变换系数107,虽然由于量化造成的损耗通常与变换系数不相同。
逆变换处理单元112用于应用变换处理单元106应用的变换的逆变换,例如,逆离散余弦变换(discrete cosine transform,DCT)或逆离散正弦变换(discrete sine transform,DST),以在样本域中获取逆变换块113。逆变换块113也可以称为逆变换经反量化块113或逆变换残差块113。
重构单元114(例如,求和器114)用于将逆变换块113(即经重构残差块113)添加至预测块165,以在样本域中获取经重构块115,例如,将经重构残差块113的样本值与预测块165的样本值相加。
可选地,例如线缓冲器116的缓冲器单元116(或简称“缓冲器”116)用于缓冲或存储经重构块115和对应的样本值,用于例如帧内预测。在其它的实施例中,编码器可以用于使用存储在缓冲器单元116中的未经滤波的经重构块和/或对应的样本值来进行任何类别的估计和/或预测,例如帧内预测。
例如,编码器100的实施例可以经配置以使得缓冲器单元116不只用于存储用于帧内预测154的经重构块115,也用于环路滤波器单元120(在图2中未示出),和/或,例如使得缓冲器单元116和经解码图片缓冲器单元130形成一个缓冲器。其它实施例可以用于将经滤波块121和/或来自经解码图片缓冲器130的块或样本(图2中均未示出)用作帧内预测154的输入或基础。
环路滤波器单元120(或简称“环路滤波器”120)用于对经重构块115进行滤波以获取经滤波块121,从而顺利进行像素转变或提高视频质量。环路滤波器单元120旨在表示一个或多个环路滤波器,例如去块滤波器、样本自适应偏移(sample-adaptive offset,SAO)滤波器或其它滤波器,例如双边滤波器、自适应环路滤波器(adaptive loop filter,ALF),或锐化或平滑滤波器,或协同滤波器。尽管环路滤波器单元120在图2中示出为环内滤波器,但在其它配置中,环路滤波器单元120可实施为环后滤波器。经滤波块121也可以称 为经滤波的经重构块121。经解码图片缓冲器130可以在环路滤波器单元120对经重构编码块执行滤波操作之后存储经重构编码块。
编码器100(对应地,环路滤波器单元120)的实施例可以用于输出环路滤波器参数(例如,样本自适应偏移信息),例如,直接输出或由熵编码单元170或任何其它熵编码单元熵编码后输出,例如使得解码器200可以接收并应用相同的环路滤波器参数用于解码。
经解码图片缓冲器(decoded picture buffer,DPB)130可以为存储参考图片数据供视频编码器100编码视频数据之用的参考图片存储器。DPB 130可由多种存储器设备中的任一个形成,例如动态随机存储器(dynamic random access memory,DRAM)(包含同步DRAM(synchronous DRAM,SDRAM)、磁阻式RAM(magnetoresistive RAM,MRAM)、电阻式RAM(resistive RAM,RRAM))或其它类型的存储器设备。可以由同一存储器设备或单独的存储器设备提供DPB 130和缓冲器116。在某一实例中,经解码图片缓冲器(decoded picture buffer,DPB)130用于存储经滤波块121。经解码图片缓冲器130可以进一步用于存储同一当前图片或例如先前经重构图片的不同图片的其它先前的经滤波块,例如先前经重构和经滤波块121,以及可以提供完整的先前经重构亦即经解码图片(和对应参考块和样本)和/或部分经重构当前图片(和对应参考块和样本),例如用于帧间预测。在某一实例中,如果经重构块115无需环内滤波而得以重构,则经解码图片缓冲器(decoded picture buffer,DPB)130用于存储经重构块115。
预测处理单元160,也称为块预测处理单元160,用于接收或获取块103(当前图片101的当前编码图像块103)和经重构图片数据,例如来自缓冲器116的同一(当前)图片的参考样本和/或来自经解码图片缓冲器130的一个或多个先前经解码图片的参考图片数据231,以及用于处理这类数据进行预测,即提供可以为经帧间预测块145或经帧内预测块155的预测块165。
在本申请实施例中，帧间预测单元144可以包括候选插值滤波器集合151和滤波器选择单元152，该候选插值滤波器集合151可以包括多个种类的插值滤波器，例如，包括基于离散余弦变换的插值滤波器(DCT-based interpolation filter，DCTIF)和基于可逆性的插值滤波器(invertibility based interpolation filter，本文中也称InvIF)。其中，InvIF是指经过本申请图6A或图6C描述的插值滤波器训练方法得到的插值滤波器。滤波器选择单元152用于实现或与其他单元(如变换处理单元106、量化单元108、逆变换处理单元112、重构单元114、环路滤波器单元120等)的组合用于实现从候选插值滤波器集合151中选择插值滤波器(例如DCTIF或InvIF)和/或用于熵编码单元对选择的插值滤波器(本文中也称目标插值滤波器)的指示信息进行熵编码。在本申请另一实施例中，候选插值滤波器集合151可以包括的多个种类插值滤波器可以都为现有技术中提供的插值滤波器，或可以包括通过图6A或图6C所示的插值滤波器训练方法得到的插值滤波器。在本申请的又一实施例中，帧间预测单元144可以包括单个本申请实施例提供的通过图6A或图6C所示的插值滤波器训练方法得到的插值滤波器。应理解，候选插值滤波器集合151可以应用于运动估计的过程中，在本申请的另一实施例中候选插值滤波器集合151还可以用于其他需要进行插值运算的场景中。
编码器100还可以包括训练单元(图2中未示出)用于实现插值滤波器的训练，训练单元可以设置于帧间预测单元144内部或外部。可以理解，训练单元可以设置于帧间预测单元144内，也可以设置于编码器100的其他位置，通过与帧间预测单元中的一个或多个插值滤波器耦合，来实现插值滤波器的训练，插值滤波器的滤波器参数的更新等。可以理解训练单元也可以位于编码器之外或其他设备(不包含编码器100的设备)中，编码器可以通过接收滤波器参数，实现对插值滤波器的配置。
模式选择单元162可以用于选择预测模式(例如帧内或帧间预测模式)和/或对应的用作预测块165的预测块145或155,以计算残差块105和重构经重构块115。
模式选择单元162的实施例可以用于选择预测模式(例如,从预测处理单元160所支持的那些预测模式中选择),所述预测模式提供最佳匹配或者说最小残差(最小残差意味着传输或存储中更好的压缩),或提供最小信令开销(最小信令开销意味着传输或存储中更好的压缩),或同时考虑或平衡以上两者。模式选择单元162可以用于基于码率失真优化(rate distortion optimization,RDO)确定预测模式,即选择提供最小码率失真优化的预测模式,或选择相关码率失真至少满足预测模式选择标准的预测模式。
下文将详细解释编码器100的实例(例如,通过预测处理单元160)执行的预测处理和(例如,通过模式选择单元162)执行的模式选择。
如上文所述,编码器100用于从(预先确定的)预测模式集合中确定或选择最好或最优的预测模式。预测模式集合可以包括例如帧内预测模式和/或帧间预测模式。
帧内预测模式集合可以包括35种不同的帧内预测模式,例如,如DC(或均值)模式和平面模式的非方向性模式,或如H.265中定义的方向性模式,或者可以包括67种不同的帧内预测模式,例如,如DC(或均值)模式和平面模式的非方向性模式,或如正在发展中的H.266中定义的方向性模式。
(可能的)帧间预测模式集合取决于可用参考图片(即,例如前述存储在DBP 130中的至少部分经解码图片)和其它帧间预测参数,例如取决于是否使用整个参考图片或只使用参考图片的一部分,例如围绕当前块的区域的搜索窗区域,来搜索最佳匹配参考块,和/或例如取决于是否应用如半像素和/或四分之一像素内插的像素图片。
除了以上预测模式,也可以应用跳过模式和/或直接模式。
预测处理单元160可以进一步用于将块103分割成较小的块分区或子块,例如,通过迭代使用四叉树(quad-tree,QT)分割、二进制树(binary-tree,BT)分割或三叉树(triple-tree,TT)分割,或其任何组合,以及用于例如为块分区或子块中的每一个执行预测,其中模式选择包括选择分割的块103的树结构和选择应用于块分区或子块中的每一个的预测模式。
帧间预测单元144可以包含运动估计(motion estimation,ME)单元(图2中未示出)和运动补偿(motion compensation,MC)单元(图2中未示出)。运动估计单元用于接收或获取图片块103(当前图片101的当前图片块103)和经解码图片131,或至少一个或多个先前经重构块,例如,一个或多个其它/不同先前经解码图片231的经重构块,来进行运动估计。例如,视频序列可以包括当前图片和先前经解码图片231,或换句话说,当前图片和先前经解码图片231可以是形成视频序列的图片序列的一部分,或者形成该图片序列。
例如,编码器100可以用于从多个其它图片中的同一或不同图片的多个参考块中选择参考块,并向运动估计单元(图2中未示出)提供参考图片(或参考图片索引等)和/或提 供参考块的位置(X、Y坐标)与当前块的位置之间的偏移(空间偏移)作为帧间预测参数。该偏移也称为运动向量(motion vector,MV)。
在本申请实施例中,运动估计单元可以包括候选插值滤波器集合,运动估计单元还用于根据率失真代价准则从候选插值滤波器集合中选定用于当前编码图像块的目标插值滤波器。或者,运动估计单元还用于:通过候选插值滤波器集合中每一个插值滤波器对与当前编码图像块最优匹配的整像素参考图像块进行分像素插值,得到N个分像素参考图像块,进一步在整像素参考图像块和N个分像素参考图像块中确定与当前编码图像块最优匹配的预测块,从候选插值滤波器集合中选定插值得到该预测块的插值滤波器即为目标插值滤波器。
运动补偿单元用于获取,例如接收帧间预测参数,并基于或使用帧间预测参数执行帧间预测来获取帧间预测块145。由运动补偿单元(图2中未示出)执行的运动补偿可以包含基于通过运动估计(可能执行对子像素精确度的内插)确定的运动/块向量取出或生成预测块。内插滤波可从已知像素样本产生额外像素样本,从而潜在地增加可用于编码图片块的候选预测块的数目。一旦接收到用于当前图片块的PU的运动向量,运动补偿单元146可以在一个参考图片列表中定位运动向量指向的预测块。运动补偿单元146还可以生成与块和视频条带相关联的语法元素,以供视频解码器200在解码视频条带的图片块时使用。
帧内预测单元154用于获取,例如接收同一图片的图片块103(当前图片块)和一个或多个先前经重构块,例如经重构相邻块,以进行帧内估计。例如,编码器100可以用于从多个(预定)帧内预测模式中选择帧内预测模式。
编码器100的实施例可以用于基于优化标准选择帧内预测模式,例如基于最小残差(例如,提供最类似于当前图片块103的预测块155的帧内预测模式)或最小码率失真。
帧内预测单元154进一步用于基于如所选择的帧内预测模式的帧内预测参数确定帧内预测块155。在任何情况下,在选择用于块的帧内预测模式之后,帧内预测单元154还用于向熵编码单元170提供帧内预测参数,即提供指示所选择的用于块的帧内预测模式的信息。在一个实例中,帧内预测单元154可以用于执行下文描述的帧内预测技术的任意组合。
熵编码单元170用于将熵编码算法或方案(例如,可变长度编码(variable length coding,VLC)方案、上下文自适应VLC(context adaptive VLC,CAVLC)方案、算术编码方案、上下文自适应二进制算术编码(context adaptive binary arithmetic coding,CABAC)、基于语法的上下文自适应二进制算术编码(syntax-based context-adaptive binary arithmetic coding,SBAC)、概率区间分割熵(probability interval partitioning entropy,PIPE)编码或其它熵编码方法或技术)应用于经量化残差系数109、帧间预测参数、帧内预测参数和/或环路滤波器参数中的单个或所有上(或不应用),以获取可以通过输出172以例如经编码比特流171的形式输出的经编码图片数据171。可以将经编码比特流传输到视频解码器200,或将其存档稍后由视频解码器200传输或检索。熵编码单元170还可用于熵编码正被编码的当前视频条带的其它语法元素。例如,在本发明的一些实施例中,熵编码单元170还用于将目标插值滤波器的指示信息和/或插值滤波器的滤波器参数进行熵编码。
训练单元,用于基于样本图像对帧间预测单元144内包括的基于机器学习的插值滤波器进行训练,以确定或优化插值滤波器的滤波器参数。
视频编码器100的其它结构变型可用于编码视频流。例如,基于非变换的编码器100可以在没有针对某些块或帧的变换处理单元106的情况下直接量化残差信号。在另一实施方式中,编码器100可具有组合成单个单元的量化单元108和逆量化单元110。
图3示出示例性视频解码器200,用于实现本申请的技术,即视频图像解码方法。视频解码器200用于接收例如由编码器100编码的经编码图片数据(例如,经编码比特流)171,以获取经解码图片131。在解码过程期间,视频解码器200从视频编码器100接收视频数据,例如表示经编码视频条带的图片块的经编码视频比特流(也称码流)及相关联的语法元素。
在图3的实例中,解码器200包括熵解码单元204、逆量化单元210、逆变换处理单元212、重构单元214(例如求和器214)、缓冲器216、环路滤波器220、经解码图片缓冲器230以及预测处理单元260。预测处理单元260可以包含帧间预测单元244、帧内预测单元254和模式选择单元262。在一些实例中,视频解码器200可执行大体上与参照图2的视频编码器100描述的编码遍次互逆的解码遍次。
熵解码单元204用于对经编码图片数据(比如,码流或当前解码图像块)171执行熵解码,以获取例如经量化系数209和/或经解码的编码参数(也称编码信息,图3中未示出),例如,帧间预测、帧内预测参数、环路滤波器参数、目标滤波器的指示信息、滤波器参数和/或指示帧间预测模式的信息等语法元素中(经解码)的任意一个或全部。熵解码单元204进一步用于将帧间预测参数、帧内预测参数、目标滤波器的指示信息、滤波器参数和/或指示帧间预测模式的信息等语法元素转发至预测处理单元260。视频解码器200可接收视频条带层级和/或视频块层级的语法元素。
逆量化单元210功能上可与逆量化单元110相同,逆变换处理单元212功能上可与逆变换处理单元112相同,重构单元214功能上可与重构单元114相同,缓冲器216功能上可与缓冲器116相同,环路滤波器220功能上可与环路滤波器120相同,经解码图片缓冲器230功能上可与经解码图片缓冲器130相同。
预测处理单元260可以包括帧间预测单元244和帧内预测单元254,其中帧间预测单元244功能上可以类似于帧间预测单元144,帧内预测单元254功能上可以类似于帧内预测单元154。预测处理单元260通常用于执行块预测和/或从经编码数据171获取预测块265,以及从例如熵解码单元204(显式地或隐式地)接收或获取预测相关参数和/或关于所选择的预测模式的信息。
当视频条带经编码为经帧内编码(I)条带时,预测处理单元260的帧内预测单元254用于基于信号表示的帧内预测模式及来自当前帧或图片的先前经解码块的数据来产生用于当前视频条带的图片块的预测块265。当视频帧经编码为经帧间编码(即B或P)条带时,预测处理单元260的帧间预测单元244(例如,运动补偿单元)用于基于运动向量及从熵解码单元204接收的其它语法元素生成用于当前视频条带的视频块的预测块265。对于帧间预测,可从一个参考图片列表内的一个参考图片中产生预测块。视频解码器200可基于存储于DPB 230中的参考图片,使用默认建构技术来建构参考帧列表:列表0和列表1。
预测处理单元260用于通过解析运动向量、用于进行分像素插值得到预测块的目标插值滤波器的指示信息、滤波器参数或用于指示帧间预测模式的信息等语法元素，确定用于进行分像素插值得到预测块的目标插值滤波器以及确定当前视频条带的视频块(也即当前解码图像块)的预测信息，并使用预测信息产生用于正被解码的当前解码图像块的预测块。例如，预测处理单元260使用接收到的一些语法元素确定用于编码视频条带的视频块的预测模式(例如，帧内或帧间预测)、帧间预测条带类型(例如，B条带、P条带或GPB条带)、用于条带的参考图片列表中的一个或多个的建构信息、用于条带的每个经帧间编码视频块的运动向量、条带的每个经帧间编码视频块的帧间预测状态、用于进行分像素插值得到预测块的目标滤波器的指示信息以及其它信息，以解码当前视频条带的视频块。
预测处理单元260可以包括候选插值滤波器集合251和滤波器选择单元252。候选插值滤波器集合251包括一种或多种插值滤波器，例如，DCTIF和InvIF等。滤波器选择单元252用于若运动信息指向分数像素位置，则从候选插值滤波器集合251中确定解析出的目标插值滤波器的指示信息所指示的目标插值滤波器，通过该指示信息所指示的目标插值滤波器进行分像素插值得到预测块。
逆量化单元210可用于逆量化(即,反量化)在比特流中提供且由熵解码单元204解码的经量化变换系数。逆量化过程可包含使用由视频编码器100针对视频条带中的每一视频块所计算的量化参数来确定应该应用的量化程度并同样确定应该应用的逆量化程度。
逆变换处理单元212用于将逆变换(例如,逆DCT、逆整数变换或概念上类似的逆变换过程)应用于变换系数,以便在像素域中产生残差块。
重构单元214(例如,求和器214)用于将逆变换块213(即经重构残差块213)添加到预测块265,以在样本域中获取经重构块215,例如通过将经重构残差块213的样本值与预测块265的样本值相加。
环路滤波器单元220(在编码循环期间或在编码循环之后)用于对经重构块215进行滤波以获取经滤波块221,从而顺利进行像素转变或提高视频质量。在一个实例中,环路滤波器单元220可以用于执行下文描述的滤波技术的任意组合。环路滤波器单元220旨在表示一个或多个环路滤波器,例如去块滤波器、样本自适应偏移(sample-adaptive offset,SAO)滤波器或其它滤波器,例如双边滤波器、自适应环路滤波器(adaptive loop filter,ALF),或锐化或平滑滤波器,或协同滤波器。尽管环路滤波器单元220在图3中示出为环内滤波器,但在其它配置中,环路滤波器单元220可实施为环后滤波器。
随后将给定帧或图片中的经解码视频块221存储在存储用于后续运动补偿的参考图片的经解码图片缓冲器230中。
解码器200用于例如,藉由输出232输出经解码图片231,以向用户呈现或供用户查看。
视频解码器200的其它变型可用于对压缩的比特流进行解码。例如,解码器200可以在没有环路滤波器单元220的情况下生成输出视频流。例如,基于非变换的解码器200可以在没有针对某些块或帧的逆变换处理单元212的情况下直接逆量化残差信号。在另一实施方式中,视频解码器200可以具有组合成单个单元的逆量化单元210和逆变换处理单元212。
应理解,虽然图2和图3示出了特定的编码器100和解码器200,但是,编码器100 和解码器200还可以包括其他未被描绘的各种其他功能单元、模块或部件。此外,不限于在图2和图3中所示出的特定的部件和/或各种部件被布置的方式。本文所描述的系统的各种单元可以被实现在软件、固件、和/或硬件和/或其任何组合中。
下面介绍本申请涉及的分像素插值的可逆性:
应理解，每幅数字图像可看作一个m行n列的二维阵列，包含m*n个采样(sample)(每个采样的位置称为采样位置，每个采样的数值称为采样值)。通常将m*n称为图像的分辨率，即图像中包含的采样的个数。例如，2K图像分辨率是1920*1080，4K视频的图像分辨率是3840*2160。通常，也将一个采样称为一个像素，采样值也称像素值，因此每个像素也包含像素位置与像素值两个信息。
若干幅数字图像按照时间顺序排列就构成了数字视频。数字视频编码旨在去除数字视频中的冗余信息,使得数字视频更有利于存储和网络中的传输。通常,数字视频的冗余包括空间冗余、时间冗余、统计冗余和视觉冗余。视频序列中相邻帧之间具有很强的相关性,存在大量的时间冗余,为了去除时间冗余,目前的基于块的混合编码框架中都会引入帧间预测技术,通过已编码帧来预测当前待编码帧,从而大大节省编码码率。
在基于块的帧间预测中,当前待编码图片首先被划分为若干个互不重叠的编码单元(CU)。每个CU有各自的编码模式。每个CU可以进一步划分为若干个预测单元(PU),每个PU拥有各自的预测模式,例如预测方向,运动矢量(MV)等等。在编码端,每个PU都会在参考帧中搜索到一个匹配的块,匹配块的位置使用MV来标识。由于图像在数字采样的过程中某些位置的样本值没有被采样(指分数像素位置),因此,在当前块在参考帧中可能搜索不到完全匹配的块,这个时候就会使用分像素插值技术来插值产生分数像素位置的像素值。图4给出了整像素和分像素的位置示意图。其中大写字母表示的位置代表整数像素位置,其余小写字母表示不同的分数像素位置。不同的分数像素位置的像素值使用整像素通过不同的插值滤波器插值产生,然后作为参考块。
例如，对于图4所示的插值过程，运动矢量精度为四分之一精度。在进行运动估计时，首先搜索到与当前待编码图像块最匹配的整像素图像块。然后使用1/2精度插值滤波器插值产生4个1/2-像素分像素图像块。将当前待编码图像块与4个1/2-像素分像素图像块和整像素图像块进行匹配，得到最优匹配的1/2-精度的运动矢量。对上述最优匹配的1/2-精度运动矢量所指向的像素块使用1/4-精度插值滤波器进行插值，得到8个1/4-精度的分像素块。从1/2-精度的最优匹配块和8个1/4-精度的分像素块中查找最优匹配的图像块作为最终当前待编码图像块的预测块，该匹配图像块的位置表示的运动矢量为当前待编码块的运动矢量。
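为直观说明上述由整像素到1/2像素再到1/4像素的分级搜索流程，下面给出一个示意性的Python草图。其中interpolate_half、interpolate_quarter为说明而假设的接口(返回以分像素偏移为键的候选块字典)，并非标准规定的实现：

```python
import numpy as np

def sad(a, b):
    # 绝对差之和，作为块匹配的失真度量
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def subpel_motion_search(cur_block, int_ref_block, interpolate_half, interpolate_quarter):
    # 第一步：整像素最优匹配块与其周围的1/2像素候选块一起参与匹配
    candidates = {(0.0, 0.0): int_ref_block}
    candidates.update(interpolate_half(int_ref_block))            # {(dy, dx): 分像素块}
    best_half = min(candidates, key=lambda o: sad(cur_block, candidates[o]))
    # 第二步：围绕最优1/2像素位置插值出1/4像素候选块，再次匹配
    candidates.update(interpolate_quarter(int_ref_block, best_half))
    best = min(candidates, key=lambda o: sad(cur_block, candidates[o]))
    # best即运动矢量的分像素部分，candidates[best]为最终的预测块
    return best, candidates[best]
```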
在数字视频编码中，之所以存在整像素和分像素的概念，是由于数字采样的离散性。如图5的(a)部分所示的数字采样的示意图，点代表采样的样本，虚线表示原始模拟信号为s(t)，实线表示插值得到的信号(称为插值信号)。在数字采样过程中，只有在特定位置(整数像素位置)的样值被采样下来，其余位置的样值则被丢掉。插值是数字采样的逆过程，插值的目的是使用离散的样本值尽可能完全恢复原始的连续信号，并获得特定位置(分数像素位置)的样本值。如图5的(a)部分所示，目标位置α(分数像素位置)可由其临近的样值插值获得，插值过程描述如下：

$$p_\alpha = f_\alpha(s_{-M}, s_{-M+1}, \ldots, s_0, s_1, \ldots, s_N) \qquad (1)$$

其中，$s_i$代表整数像素位置$i$处的样值，$i$为整数像素位置的索引，$i$为整数；$f_\alpha$为分数像素位置$\alpha$对应的插值滤波器，$M$、$N$为正整数，且$-M<\alpha<N$。
另一方面，如图5的(b)部分所示，如果我们将原始信号沿着纵坐标翻转，可以得到$s'(t)=s(-t)$，然后将采样器向左平移$\alpha$，那么我们采样得到的样本记为$s'_i$，对应图5的(a)部分中的分像素$p_\alpha$。由于图5的(b)部分中的整像素和分像素的距离和图5的(a)部分中的相同，因此，在图5的(b)部分中的分数像素位置的样值也可以使用$f_\alpha$插值得到，即：

$$u_0 = f_\alpha(s'_{-\alpha-M}, s'_{-\alpha-M+1}, \ldots, s'_{-\alpha}, s'_{-\alpha+1}, \ldots, s'_{-\alpha+N}) \qquad (2)$$

由于$s'(t)=s(-t-\alpha)$，公式(2)还可以表示为：

$$u_0 = f_\alpha(s_{\alpha+M}, s_{\alpha+M-1}, \ldots, s_\alpha, s_{\alpha-1}, \ldots, s_{\alpha-N}) \qquad (3)$$
从公式(2)和公式(3)我们可以看到,如果存在一个理想的插值滤波器能完全插值出分数像素位置的样本,即:
$$p_{\alpha+k} = s_{\alpha+k} \qquad (4)$$
那么在图5的(b)部分中插值滤波器同样可以从分数像素位置恢复出整数像素位置,即:
$$u_k = s_{-k} \qquad (5)$$
由公式(4)和公式(5)中可知,如果存在一个滤波器能从整像素样值完全恢复出分像素样值,那么该滤波器同样可以从分数像素位置恢复出整数像素位置,这个特性称为分像素插值的可逆性。基于分像素插值的可逆性,本申请提供一种端到端的训练方式来产生插值滤波器。
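下面用一个一维数值草图近似验证上述可逆性：以加窗sinc滤波器近似理想的1/2像素插值滤波器，对带限信号的整像素样值插值得到分像素样值；再将分像素序列翻转后用同一滤波器插值，翻转回来即可近似恢复原始整像素样值。该草图仅为原理示意，滤波器抽头数与测试信号均为假设：

```python
import numpy as np

def half_pel_filter(n_taps=8):
    # 1/2像素位置的加窗sinc滤波器(理想插值滤波器f_{1/2}的近似)
    m = np.arange(-(n_taps // 2) + 1, n_taps // 2 + 1)   # 抽头位置 [-3, ..., 4]
    h = np.sinc(m - 0.5) * np.hamming(n_taps)            # 加窗以抑制截断振铃
    return m, h / h.sum()                                # 归一化

def interp_half(x, m, h):
    # 对一维整像素序列x插值：输出第j个样值 ≈ x在 (lo+j)+0.5 处的样值
    lo, hi = -m.min(), len(x) - m.max()
    return np.array([np.dot(h, x[i + m]) for i in range(lo, hi)])

t = np.arange(200)
s = np.sin(0.2 * np.pi * t) + 0.5 * np.cos(0.12 * np.pi * t)  # 带限测试信号的整像素样值

m, h = half_pel_filter()
p = interp_half(s, m, h)          # 分像素样值，对应公式(1)
q = interp_half(p[::-1], m, h)    # 翻转后再用同一滤波器插值，对应公式(2)、(3)
rec = q[::-1]                     # 翻转回来，按可逆性应近似等于原始整像素样值
err = np.abs(rec - s[7:7 + len(rec)]).max()   # 忽略边界后的最大恢复误差(远小于信号幅度)
```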
基于上述分像素插值的可逆性,本发明实施例提供了两种插值滤波器训练方法。该插值滤波器训练方法可以运行在编码器中,也可以运行于计算设备中,该计算设备可以包括但不限于计算机、云计算设备、服务器、终端设备等。
现有技术中通常采用固定系数的插值滤波器,例如,双线性插值滤波器、双三次插值滤波器等。目前,视频编码标准中普遍采用固定系数的插值滤波器。在H.264/AVC中,采用6抽头的有限相应滤波器来产生半像素样值,采用简单的双线性插值来产生四分之一像素。HEVC中的插值滤波器相比于H.264/AVC做了很多改进,使用8抽头滤波器来产生半像素,而四分之一像素则采用7抽头的插值滤波器。固定系数的插值滤波器实现简单,复杂度低,因此被广泛采用。但是由于视频信号的多样性和非平稳性,固定系数滤波器的性能十分有限。
为了应对自然视频的非平稳特性，研究人员提出了内容自适应插值滤波器。典型的自适应插值滤波器在编码端根据运动补偿预测的误差估计滤波器系数，然后将滤波器系数编码写入码流。为了减少插值滤波器的复杂度，可分离的自适应插值滤波器被提出，可以实现在基本保持编码性能的情况下明显降低复杂度。为了减少编码系数所需的比特数，一些自适应插值滤波器在设计时通常会假设图像为各向同性。虽然自适应插值滤波器是内容自适应的，但仍然建立在线性插值滤波器的基础上，另外编码滤波器系数仍然需要一些比特数。
本申请实施例正是基于上述技术问题,提出了一种插值滤波器的训练的方法。请参阅图6A所示的本申请实施例提供的一种插值滤波器的训练方法的示意性流程图,以及图6B所示的训练流程的示意性说明图。该方法包括但不限于如下部分或全部步骤:
S612:通过第一插值滤波器对样本图像在整数像素位置的像素进行插值,得到所述样本图像在第一分数像素位置的第一分像素图像。
S614:将所述样本图像输入到第二插值滤波器中,得到第二分像素图像。
S616:通过最小化用于表示所述第一分像素图像块与所述第二分像素图像块的差值的第一函数确定所述第二插值滤波器的滤波器参数。
需要说明的是，虽然将本申请实施例中的方法通过步骤来表示，但步骤S614和步骤S616在训练过程中为一个迭代的过程。
在本申请实施例的一种实现中,样本图像可以是原始图像X或原始图像X经过编码器编码压缩后的图像X′。然而,在本申请实施例的另一种实现中,输入到第一插值滤波器的样本图像为原始图像,输入到第二插值滤波器的可以是样本图像经过编码器编码压缩后的图像。
应理解,第一插值滤波器为现有技术中的任意一种可插值产生在第一分数像素位置的像素值的插值滤波器,该第一插值滤波器可以是固定系数的插值滤波器、自适应插值滤波器或其他类型的插值滤波器等,本申请实施例不作限定。第一分数像素位置可以是任意一个分数像素位置。可见,本申请实施例,以第一分像素图像为标签数据,来训练第二插值滤波器,使得训得到的第二插值滤波器可直接用于插值得到第一分数像素位置的像素值。
还应理解,第二插值滤波器可以是支持向量机(support vector machine,SVM)、神经网络(neural network,NN)、卷积神经网络(convolutional neural network,CNN)或其他形式,对此,本申请实施例不作限定。
第一函数可以是用于表示第一分数像素图像和第二分数像素图像的差值的函数,该第一函数可以是损失函数、目标函数、代价函数等,本申请实施例不作限定。
例如，第一函数为正则化损失函数(regularization loss function)，该第一函数可以表示为：

$$L_{reg,\gamma} = \left\| X_{f,\gamma} - \mathrm{TIF}_\gamma(X) \right\|^2$$

其中，$\gamma$为分数像素位置的索引，$L_{reg,\gamma}$为分数像素位置$\gamma$对应的第一函数，$X$为样本图像，$X'$为样本图像经过编码器压缩后的图像，$\mathrm{TIF}_\gamma$为分数像素位置$\gamma$对应的第一插值滤波器，$F$为第二插值滤波器，$\mathrm{TIF}_\gamma(X)$为分数像素位置$\gamma$对应的第一分像素图像，$X_{f,\gamma}=F_\gamma(X')$为分数像素位置$\gamma$对应的第二分像素图像。其中，范数符号定义为

$$\|x\|^2 = \sum_i x_i^2$$

其中，$i$为$x$中像素点的索引。
应理解，第一函数还可以有其他具体表示方式，例如，对数损失函数、平方损失函数、指数损失函数或其他形式的损失函数，对此，本申请实施例不作限定。
可以理解,样本图像可以是一个图像也可以是多个图像。其中,一个样本图像可以是一帧图像、也可以是一个编码单元(CU),也可以是一个预测单元(PU),对此,本发明不作限定。
其中，第二插值滤波器的滤波器参数可以通过最小化损失函数求得，训练过程可表示为：

$$\theta^* = \arg\min_\theta \sum_{k=1}^{n} L_{reg}^{(k)}(\theta)$$

其中，$n$为样本图像的总数，$n$为正整数；$k$为样本图像的索引，$k$为正整数，$k \le n$；$\theta^*$为最优滤波器参数，$\theta$为滤波器参数，$L_{reg}^{(k)}$为样本图像$k$对应的第一函数。可选地，$n$可以等于1或其他正整数。
可选地,可以通过最小二乘法(Least Square Method)、线性回归(Linear Regression)、梯度下降法(gradient descent)或其他方式求解第二插值滤波器的滤波器参数。
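以梯度下降类方法为例，下面给出图6A训练流程的一个PyTorch示意性草图。其中SubPelCNN的网络结构、数据加载器loader以及产生标签数据的传统插值滤波器tif均为说明而假设，并非本申请限定的实现：

```python
import torch
import torch.nn as nn

class SubPelCNN(nn.Module):
    # 作为第二插值滤波器的小型卷积神经网络(结构仅为示意)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1))

    def forward(self, x):          # 输入整像素图像，输出同尺寸的分像素图像
        return self.net(x)

F = SubPelCNN()
opt = torch.optim.Adam(F.parameters(), lr=1e-4)   # 梯度下降法的一种
for X, X_cmp in loader:       # X: 样本图像; X_cmp: 其经编码压缩后的图像(假设由loader提供)
    label = tif(X)            # 第一分像素图像，即标签数据(对应S612)
    X_f = F(X_cmp)            # 第二分像素图像(对应S614)
    loss = ((X_f - label) ** 2).mean()   # 第一函数的均方误差形式(对应S616)
    opt.zero_grad()
    loss.backward()
    opt.step()
```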
由于采样得到的整像素图像对应的分像素图像是不可获得的,因此通过机器学习得到的插值滤波器没有标签数据。现有技术中采用的标签数据是通过“模糊-抽样”的方法,将样本图像使用低通滤波器进行模糊,使得相邻的像素之间的相关性增加。然后将图像按照不同的相位抽样成若干个子图像。将相位0看作整像素,将其他相位看作不同位置的分像素。然而,该方法得到的标签数据是人工设计的,因此并不是最优的。
可见，本申请实施例提供的插值滤波器训练方法通过传统插值滤波器对样本图像进行分像素插值，得到第一分像素图像，并作为标签数据，通过监督该第一分像素图像，训练插值滤波器(第二插值滤波器)，得到一种插值滤波器，可提高编解码性能。
本发明一实施例中,可以将多个分数像素位置分别对应的插值滤波器联合训练。具体的实现方法包括但不限于如下步骤:
S1:通过分数像素位置γ对应的第一插值滤波器对样本图像在整数像素位置的像素进行插值,得到所述样本图像在分数像素位置γ的第一分像素图像,其中,分数像素位置γ为Q个分数像素位置中的任意一个分数像素位置,Q为正整数。
S2:将样本图像输入到分数像素位置γ对应的第二插值滤波器中,得到分数像素位置γ对应的第二分像素图像。
S3：同时最小化Q个分数像素位置分别对应的第一函数确定Q个分数像素位置分别对应的第二插值滤波器的滤波器参数，其中，分数像素位置γ对应的第一函数用于表示样本图像在分数像素位置γ的第一分像素图像与分数像素位置γ对应的第二分像素图像的差值。
可选地,Q可以为分数像素位置的总数,也可以是其他数值。
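该联合训练过程可以在一个训练循环中同时优化Q个位置的滤波器，示意如下(沿用上文草图中假设的SubPelCNN与loader，并假设tif_q(X, q)产生位置q的标签数据，Q的取值仅为举例)：

```python
import torch

Q = 15                                    # 例如1/4精度下的15个分数像素位置(假设)
filters = [SubPelCNN() for _ in range(Q)]
params = [p for f in filters for p in f.parameters()]
opt = torch.optim.Adam(params, lr=1e-4)
for X, X_cmp in loader:
    # 同时最小化Q个分数像素位置分别对应的第一函数(对应步骤S3)
    loss = sum(((f(X_cmp) - tif_q(X, q)) ** 2).mean()
               for q, f in enumerate(filters))
    opt.zero_grad()
    loss.backward()
    opt.step()
```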
请参阅图6C所示的本申请实施例提供的另一种插值滤波器的训练方法的示意性流程图,以及图6D所示的训练流程的示意性说明图。该方法包括但不限于如下部分或全部步骤:
S622:通过第一插值滤波器对样本图像进行分像素插值,得到样本图像在第一分数像素位置的第一分像素图像。
S624:将样本图像输入到第二插值滤波器中,得到第二分像素图像。
S626:将第二分像素图像经过翻转运算输入到第三插值滤波器中,得到第一图像,并将第一图像通过翻转运算的逆运算得到第二图像,其中,第二插值滤波器和第三插值滤波器共享滤波器参数。
S628:根据用于表示第一分像素图像与第二分像素图像的差值的第一函数和用于表示样本图像与第二图像的差值的第二函数确定所述滤波器参数。
需要说明的是，虽然将本申请实施例中的方法通过步骤来表示，但步骤S626和步骤S628在训练过程中为一个迭代的过程。
关于样本图像、第一函数、第一插值滤波器的描述可以参见上述图6A、图6B描述的插值滤波器的训练方法实施例中的相关描述，本申请不再赘述。
其中，分像素插值产生的分像素图像$X_f$经过翻转运算$T$，然后再经过一次第三插值滤波器进行分像素插值得到第一图像，第一图像再经过上述翻转运算$T$的逆运算$T^{-1}$来得到样本图像的重建图像，即第二图像。其中，第一图像和第二图像都为整像素图像，翻转运算包括水平翻转、垂直翻转和对角翻转。翻转运算的类型的选择可以由下式决定：

$$T = \begin{cases} \text{水平翻转}, & y_f = 0 \text{ 且 } x_f \neq 0 \\ \text{垂直翻转}, & x_f = 0 \text{ 且 } y_f \neq 0 \\ \text{对角翻转}, & x_f \neq 0 \text{ 且 } y_f \neq 0 \end{cases}$$

其中，$y_f$和$x_f$分别表示翻转的图像在垂直和水平方向上相对于第二分像素图像的分像素位移。
第二函数可以是用于表示样本图像与所述第二图像的差值的函数,该第二函数可以是损失函数、目标函数、代价函数等,本申请实施例不作限定。
例如，第二函数可以表示为：

$$L_{rec,\gamma} = \left\| X - T^{-1}\!\left( F_\gamma\!\left( T\!\left( X_{f,\gamma} \right) \right) \right) \right\|^2$$

其中，$L_{rec,\gamma}$为第二函数，$X$为样本图像，$\gamma$指示第一分数像素位置，$\mathrm{TIF}_\gamma$为第一插值滤波器，$F$为第二插值滤波器，$\mathrm{TIF}_\gamma(X)$为第一分像素图像，$X_{f,\gamma}$为第二分像素图像，$TT^{-1}=E$，$E$为单位矩阵。
应理解，第二函数还可以有其他具体表示方式，例如，对数损失函数、平方损失函数、指数损失函数或其他形式的损失函数，对此，本申请实施例不作限定。
可见，整个端到端训练框架的优化过程就是同时最小化第一函数和第二函数。在本申请一实施例中，步骤S628的一种实现方式可以是：
通过最小化第三函数确定所述滤波器参数,其中,所述第三函数为用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数和用于表示所述样本图像与所述第二图像的差值的第二函数的加权求和。
例如,定义联合损失函数(也称第三函数)如下:
$$L_{joint} = (1-\delta) \times L_{rec} + \delta \times L_{reg} \qquad (9)$$

其中，第二插值滤波器的滤波器参数可以通过最小化联合损失函数求得，训练过程可表示为：

$$\theta^* = \arg\min_\theta\, L_{joint}(\theta) \qquad (10)$$
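结合上文的翻转运算与参数共享约定，联合损失的计算可以示意如下。草图沿用上文假设的F与label；第三插值滤波器与F共享滤波器参数，故直接复用F；翻转运算的逆即再施加同一翻转；加权系数delta(即δ)的取值为假设：

```python
import torch

def flip_T(x, y_f, x_f):
    # 按分像素位移选择翻转方式：y_f≠0翻转纵轴，x_f≠0翻转横轴，两者皆有即对角翻转
    dims = []
    if y_f != 0:
        dims.append(-2)
    if x_f != 0:
        dims.append(-1)
    return torch.flip(x, dims) if dims else x

def joint_loss(F, X, X_cmp, label, y_f, x_f, delta=0.5):
    X_f = F(X_cmp)                                       # 第二分像素图像
    L_reg = ((X_f - label) ** 2).mean()                  # 第一函数
    X_rec = flip_T(F(flip_T(X_f, y_f, x_f)), y_f, x_f)   # 第二图像 T^{-1}(F(T(X_f)))
    L_rec = ((X - X_rec) ** 2).mean()                    # 第二函数
    return (1 - delta) * L_rec + delta * L_reg           # 第三函数，对应式(9)
```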
在本申请一实施例中，步骤S628的另一种实现方式可以是：通过交替最小化用于表示所述第一分像素图像与所述第二分像素图像的差值的第一损失函数和用于表示所述样本图像与所述第二图像的差值的第二函数确定所述滤波器参数。
同理,上述图6C、图6D描述的训练方法中,样本图像可以是一个图像也可以是多个图像。其中,一个样本图像可以是一帧图像、也可以是一个编码单元(CU),也可以是一个预测单元(PU),对此,本发明不作限定。
其中，第二插值滤波器的滤波器参数可以通过交替最小化两个损失函数求得，例如，训练过程可表示为：

$$\theta^* = \arg\min_\theta \sum_{k=1}^{n} L_{reg}^{(k)}(\theta)$$

$$\theta^* = \arg\min_\theta \sum_{k=1}^{n} L_{rec}^{(k)}(\theta)$$

其中，$n$为样本图像的总数，$n$为正整数；$k$为样本图像的索引，$k$为正整数，$k \le n$；$\theta^*$为最优的滤波器参数，$\theta$为滤波器参数，$L_{reg}^{(k)}$为样本图像$k$对应的第一函数，$L_{rec}^{(k)}$为样本图像$k$对应的第二函数。可选地，$n$可以等于1或其他正整数。
相对于图6A、图6B所示的实施例，本发明实施例提供的插值滤波器训练方法通过传统插值滤波器对样本图像进行分像素插值，得到第一分像素图像，并作为标签数据，利用分像素的可逆性原理，通过同时最小化用于表示第一分像素图像与第二分像素图像的差值的第一函数和用于表示样本图像与第二图像的差值的第二函数来确定所述滤波器参数，实现了通过监督样本图像来约束第二插值滤波器，提高第二插值滤波器进行分像素插值的准确性。
以下基于图7和图8介绍本发明实施例提供的视频图像编码方法的两种具体实现过程。该方法可由视频编码器100执行。该方法描述为一系列的步骤或操作，应当理解的是，各步骤可以以各种顺序执行和/或同时发生，不限于图7或图8所示的执行顺序。假设具有多个视频帧的视频数据流正在使用视频编码器，执行包括如下步骤的过程来预测当前视频帧的当前编码图像块的运动信息，并基于当前编码图像块的帧间预测模式和当前编码图像块的运动信息对所述当前编码图像块进行编码。
如图7所示,第一种编码实现过程:
S72:对当前编码图像块进行帧间预测,得到当前编码图像块的运动信息,其中,当前编码图像块的运动信息指向分数像素位置,该帧间预测过程包括:从候选插值滤波器集合中确定用于当前编码图像块的目标插值滤波器。
S74:基于当前编码图像块的帧间预测模式和当前编码图像块的运动信息对当前编码图像块进行编码,得到编码信息,将编码信息编入码流,其中,编码信息包括目标插值滤波器的指示信息;目标插值滤波器的指示信息用于指示通过目标插值滤波器进行分像素插值得到当前编码图像块对应的分数像素位置的参考块。
在上述实现方式中,不论帧间预测模式为是否为合并模式,当运动信息指向分数像素位置时,视频编码器都需要将目标滤波器的指示信息编入码流,以便于解码端获知编码得到预测块的目标滤波器的类型。
如图8所示,第二种编码实现过程:
S82:对当前编码图像块进行帧间预测,得到当前编码图像块的运动信息,其中,当前编码图像块的运动信息指向分数像素位置,该帧间预测过程包括:从候选插值滤波器集合中确定用于当前编码图像块的目标插值滤波器。
S84:基于当前编码图像块的帧间预测模式和当前编码图像块的运动信息对当前编码图像块进行编码,得到编码信息,将编码信息编入到码流,其中,若当前编码图像块的帧间预测模式是目标帧间预测模式,编码信息不包括目标插值滤波器的指示信息;若当前编码图像块的帧间预测模式为非目标帧间预测模式,编码信息包括目标插值滤波器的指示信息,目标插值滤波器的指示信息用于指示当前编码图像块采用目标插值滤波器进行分像素插值。
本申请实施例中，视频编码器包括候选插值滤波器集合，该候选插值滤波器集合中可以包括多个种类的插值滤波器，对于每一种类的插值滤波器可以包括一个或多个插值滤波器。视频编码器在对当前编码图像块进行帧间预测时，可以选择其中的一个插值滤波器进行分像素插值，以得到该当前编码图像块的预测块。
可选地,目标插值滤波器为进行分像素插值得到预测块的插值滤波器或得到该预测块的一类插值滤波器。也就是说,目标插值滤波器的指示信息指示的是得到预测块的插值滤波器,也可以是指示得到预测块的插值滤波器的类型。
例如,候选插值滤波器集合包括两种插值滤波器,例如,第一类插值滤波器和第二类插值滤波器,当目标插值滤波器为第一类插值滤波器时,指示信息可以是“0”,当目标插值滤波器为第二类插值滤波器时,指示信息可以是“1”。
可以理解,该第一类插值滤波器或第二类插值滤波器可以包括通过上述插值滤波器训 练的一个或多个分数像素位置分别对应的第二插值滤波器。
其中，步骤S72/S82中，确定目标插值滤波器可以包括但不限于以下两种实现方式。
第一实现方式:
根据率失真代价准则从候选插值滤波器集合中确定用于当前编码图像块的目标插值滤波器。具体实现为：计算每一类插值滤波器生成分像素图像块的率失真代价，确定率失真代价最小的插值滤波器作为最终用于分像素插值得到当前编码图像块对应的预测块的目标插值滤波器。例如，视频编码器在对当前编码图像块进行帧间预测过程中，可以搜索到与当前编码图像块最优匹配的整像素参考块；进而，通过第一类插值滤波器(候选插值滤波器集合中任意一类插值滤波器)对整像素参考图像块进行分像素插值，得到P个分像素参考图像块，确定预测块，得到预测块的运动信息并计算残差，将残差、运动信息等编码信息编入码流，并根据码流进行图像块重建，将得到的重建图像块与当前编码图像块的均方差作为失真，将得到的码流大小作为码率，进一步地，根据失真和码率得到该第一类插值滤波器的率失真代价。率失真代价的计算为现有技术，在此不再赘述。应理解，在本申请中，在帧间预测过程中虽然对当前编码图像块进行了完整的编码操作和重建，但该过程为测试过程，并不一定会将该过程得到的编码信息写入码流。可选地，只有率失真代价最小的一类插值滤波器参与的编码过程得到的编码信息，才被写入到码流。
可以理解,P为正整数,由第一类插值滤波器进行分像素插值的精度决定。
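上述率失真选择逻辑可以概括为如下示意性草图，其中encode_and_reconstruct、distortion、num_bits为说明而假设的接口，lam为拉格朗日乘子λ：

```python
def select_filter_by_rdo(cur_block, candidate_filters, lam):
    # 按率失真代价 J = D + λR 从候选插值滤波器集合中选出目标插值滤波器
    best_filter, best_cost = None, float('inf')
    for f in candidate_filters:
        rec_block, bits = encode_and_reconstruct(cur_block, f)  # 测试性编码与重建
        cost = distortion(cur_block, rec_block) + lam * num_bits(bits)
        if cost < best_cost:
            best_filter, best_cost = f, cost
    return best_filter
```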
第二实现方式:
视频编码器在对当前编码图像块进行帧间预测过程中，可以搜索到与当前编码图像块最优匹配的整像素参考块；通过候选插值滤波器集合中每一个插值滤波器对整像素参考图像块进行分像素插值，得到N个分像素参考图像块，N为正整数；在整像素参考图像块和N个分像素参考图像块中确定与当前编码图像块最优匹配的预测块；基于预测块确定运动信息，当运动信息指向分数像素位置时，目标滤波器即为插值得到该预测块的插值滤波器或插值得到该预测块的一类插值滤波器，反之，当运动信息指向整数像素位置时，不需要确定目标滤波器，更不需要将目标滤波器的指示信息编入码流。
可以理解,在对当前编码图像块进行帧间预测过程中,若与当前编码图像块的预测块为整数像素图像,则不需要执行确定目标滤波器的过程,也不需要执行将目标插值滤波器的指示信息编入码流。
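第二种实现方式可以示意如下：对整像素参考块与各候选滤波器插值出的全部分像素参考块统一做块匹配，失真最小者即预测块，对应的滤波器即目标插值滤波器。其中interp_blocks为说明而假设的接口：

```python
import numpy as np

def select_filter_by_best_match(cur_block, int_ref_block, candidate_filters):
    def sad(a, b):   # 绝对差之和，作为匹配失真度量
        return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())
    # 整像素参考块也参与匹配；若其最优，则目标滤波器为None，无需编码指示信息
    best = (sad(cur_block, int_ref_block), None, int_ref_block)
    for f in candidate_filters:
        for blk in interp_blocks(int_ref_block, f):   # f插值出的N个分像素参考块
            d = sad(cur_block, blk)
            if d < best[0]:
                best = (d, f, blk)
    _, target_filter, pred_block = best
    return target_filter, pred_block
```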
在本申请一实施例中,候选插值滤波器集合可以包括通过上述任意一种插值滤波器的训练方法得到的第二插值滤波器。
可选地,若目标滤波器为通过上述任意一种插值滤波器的训练方法得到的第二插值滤波器,则:目标插值滤波器的滤波器参数为预设的滤波器参数;或者,目标插值滤波器的滤波器参数为根据上述任意一种插值滤波器的训练方法在线训练得到的滤波器参数。
进一步地,该编码信息还包括训练得到的目标插值滤波器的滤波器参数;或者,编码信息还包括滤波器参数差值,滤波器参数差值为训练得到的用于当前图像单元的目标插值滤波器的滤波器参数相对于训练得到的用于在先编码的图像单元的目标插值滤波器的滤波器参数。
其中，图像单元包括图像帧、条带(slice)、视频序列子组、编码树单元(CTU)、编码单元(CU)或预测单元(PU)等。也就是说，视频编码器可以每编码完一个图像帧、一个条带、一个视频序列子组、一个编码树单元(CTU)、一个编码单元(CU)或一个预测单元(PU)即训练一次。
对于图像单元为预测单元来说，视频编码器可以每次将当前编码图像块作为样本图像，对候选插值滤波器集合中的第二插值滤波器进行训练。
以下基于图9和图10介绍本发明实施例提供的视频图像解码方法的两种具体实现过程。该方法可由视频解码器200执行,该方法描述为一系列的步骤或操作,应当理解的是,各步骤可以以各种顺序执行和/或同时发生,不限于图9或图10所示的执行顺序。
如图9所示，描述了对应于图7所示的视频图像编码方法的一种视频图像解码方法的实现过程，该实现过程可以包括如下部分或全部步骤：
S92:从码流中解析出目标插值滤波器的指示信息;
S94:获取当前解码图像块的运动信息,其中,所述运动信息指向分数像素位置;
S96:基于所述当前解码图像块的运动信息对所述当前解码图像块执行预测过程,其中,所述预测过程包括:根据所述指示信息所指示的目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块。
S98:基于所述当前解码图像块的预测块,重建当前解码图像块的重建块。
需要说明的是,步骤S92可以在步骤S94之前或之后执行,也可以与步骤S94同时执行,本发明实施例不作限定。
本申请实施例中，无论何种帧间预测模式得到的码流，只要运动信息指向分数像素位置，从该码流解析出的编码信息中都包括目标插值滤波器的指示信息。然而，解析出的编码信息是否包含运动信息与帧间预测模式有关。当帧间预测模式为合并模式时，视频解码器可以继承合并模式下合并到的在先解码的图像块的运动信息；而当帧间预测模式为非合并模式时，视频解码器可以从码流中解析出当前解码图像块的运动信息的索引，或从码流中解析出当前解码图像块的运动信息的索引和运动矢量差值，以获取到运动信息。
在识别到当前解码图像块为非合并模式的情况下，步骤S94的一种实现可以包括：视频解码器可以从码流中解析出当前解码图像块的运动信息的索引；进而，基于当前解码图像块的运动信息的索引和当前解码图像块的候选运动信息列表确定当前解码图像块的运动信息。
可选地，步骤S94的另一种实现可以包括：视频解码器可以从码流中解析出当前解码图像块的运动信息的索引和运动矢量差值；进而，基于当前解码图像块的运动信息的索引和当前解码图像块的候选运动信息列表确定当前解码图像块的运动矢量预测值；基于运动矢量预测值和运动矢量差值，得到当前解码图像块的运动矢量。
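上述两种非合并模式下的运动矢量获取可以示意如下，其中候选列表、索引与差值均由码流解析得到，此处的数据组织方式为说明而假设：

```python
def reconstruct_mv(candidate_list, mv_index, mvd=None):
    mvp = candidate_list[mv_index]             # 运动矢量预测值(第一种实现中即运动信息)
    if mvd is None:
        return mvp
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])  # 第二种实现：MV = MVP + MVD
```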
在识别到当前解码图像块为合并模式的情况下，步骤S94的一种实现可以包括：视频解码器可以继承合并模式下合并到的在先解码的图像块的运动信息。可以理解，该在先解码的图像块的运动信息与当前解码图像块的运动信息一致。
可选地,视频解码器可以先执行S94,当获取到的当前解码图像块的运动信息指向分数像素位置时,再执行步骤S92,从码流中解析出目标插值滤波器的指示信息。可以理解, 当运动信息指向整数像素位置时,从码流中解析出的当前图像块对应的编码信息不包括目标插值滤波器的指示信息;而当获取到的当前解码图像块的运动信息指向整数像素位置时,视频解码器不需要执行S92,此时,可以基于获取到的运动信息进行预测
如图10所示,描述了对应于8所示的视频图像编码方法的一种视频图像解码方法的实现过程,该实现过程可以包括如下部分或全部步骤:
S102:从码流中解析出当前解码图像块的用于指示所述当前解码图像块的帧间预测模式的信息;
S104:获取所述当前解码图像块的运动信息,其中,所述运动信息指向分数像素位置;
S106:若所述当前图像块的帧间预测模式为非目标帧间预测模式,基于所述当前解码图像块的运动信息对所述当前解码图像块执行帧间预测过程,其中,所述预测过程包括:根据从所述码流中解析出的目标插值滤波器的指示信息所指示的目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块;
S108:基于所述当前解码图像块的预测块,对所述当前解码图像块进行重建。
本申请实施例中，区分帧间预测模式，编码时，只有在帧间预测模式为非目标预测模式(比如非合并模式)且运动信息指向分数像素位置的情况下，才需要将目标插值滤波器的指示信息编入码流；而当帧间预测模式为目标帧间预测模式(比如合并模式)时，不需要将运动信息、运动信息的索引、运动矢量差值、目标插值滤波器的指示信息编入码流。对应地，在解码端，只有当帧间预测模式为非目标预测模式(例如非合并模式)且当前解码图像块的运动信息指向分数像素位置时，才需要从码流中解析出目标插值滤波器的指示信息；而当帧间预测模式为目标预测模式(例如合并模式)且当前解码图像块的运动信息指向分数像素位置时，可以继承合并模式下合并到的在先解码的图像块的运动信息和目标插值滤波器的指示信息。
若步骤S102中解析出当前解码图像块的帧间预测模式为非目标帧间预测模式(非合并模式)的情况下，步骤S104的一种实现可以包括：视频解码器可以从码流中解析出当前解码图像块的运动信息的索引；进而，基于当前解码图像块的运动信息的索引和当前解码图像块的候选运动信息列表确定当前解码图像块的运动信息。
可选地，步骤S104的另一种实现可以包括：视频解码器可以从码流中解析出当前解码图像块的运动信息的索引和运动矢量差值；进而，基于当前解码图像块的运动信息的索引和当前解码图像块的候选运动信息列表确定当前解码图像块的运动矢量预测值；基于运动矢量预测值和运动矢量差值，得到当前解码图像块的运动矢量。
进一步地，若当前解码图像块的运动信息指向分数像素位置，则视频解码器需要从码流中解析出目标插值滤波器的指示信息，并在执行帧间预测过程中根据该指示信息所指示的目标插值滤波器进行分像素插值，得到当前解码图像块的预测块。若当前解码图像块的运动信息指向整数像素位置，则视频解码器根据该运动信息直接得到该运动信息指向的预测块。
若步骤S102中解析出当前解码图像块的帧间预测模式为目标帧间预测模式(合并模式)的情况下,步骤S104的一种实现可以包括:视频解码器可以继承合并模式下合并到的在先解码的图像块的运动信息。
进一步地，若当前解码图像块的运动信息指向分数像素位置，则视频解码器还需要继承合并模式下合并到的在先解码的图像块在解码过程中使用的插值滤波器的指示信息，进而确定该指示信息指定的目标插值滤波器，并在执行帧间预测过程中根据该指示信息所指示的目标插值滤波器进行分像素插值，得到当前解码图像块的预测块。若当前解码图像块的运动信息指向整数像素位置，则视频解码器根据该运动信息直接得到该运动信息指向的预测块。
应理解，在本申请的另一实施例中，编码端可以在帧间预测模式为目标帧间预测模式(比如合并模式)且运动信息指向分数像素位置时，也将目标插值滤波器的指示信息进行编码。对应地，解码端在帧间预测模式为目标帧间预测模式(比如合并模式)且运动信息指向分数像素位置时，可以从码流中解析出目标插值滤波器的指示信息，确定该指示信息所指示的目标滤波器，以在进行帧间预测过程中，通过该目标滤波器进行插值得到当前解码图像块的预测块。
还应理解，步骤S106中，指示信息可以是指示通过分像素插值得到当前解码图像块的预测块的目标插值滤波器的信息，也可以是指示该目标插值滤波器的类型的信息。若该指示信息为目标插值滤波器的类型，则视频解码器根据指示信息所指示的目标插值滤波器进行分像素插值的一种实现方法为：视频解码器根据运动信息，在指示信息确定的插值滤波器类型中确定用于分像素插值得到该运动信息指示的预测块的目标插值滤波器。
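作为一种理解性示例，当指示信息给出的是滤波器类型时，解码器可以按运动矢量的分数部分在该类型内定位具体滤波器，示意如下。其中 filters_by_type 的数据组织、mv.x/mv.y 的坐标约定以及 1/4 像素精度均为本示例的假设：

```python
def resolve_target_filter(filter_type, mv, filters_by_type, precision=4):
    """示意：当指示信息仅给出滤波器类型时，由运动矢量的分数部分选出具体滤波器。

    filters_by_type[类型][(水平分数位置, 垂直分数位置)] -> 插值滤波器；
    precision=4 对应 1/4 像素精度，以上均为假设。
    """
    frac_x = mv.x % precision  # 运动矢量水平分量的分数部分
    frac_y = mv.y % precision  # 运动矢量垂直分量的分数部分
    return filters_by_type[filter_type][(frac_x, frac_y)]
```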
本申请一实施例中，若目标滤波器为通过上述插值滤波器的训练方法得到的第二插值滤波器，则：位于视频解码端的目标插值滤波器的滤波器参数可以是预设的滤波器参数，与视频编码端的目标插值滤波器的滤波器参数一致；或者，位于视频解码端的目标插值滤波器的滤波器参数还可以是根据上述插值滤波器的训练方法得到的滤波器参数。
可选的，对应于编码端，若目标插值滤波器为训练得到的第二插值滤波器，编码信息还包括用于当前编码的图像单元的目标插值滤波器的滤波器参数。解码端的视频解码器还可以从码流中解析出滤波器参数，该滤波器参数可以是通过上述滤波器训练方法得到的用于当前解码的图像单元的目标插值滤波器的滤波器参数；视频解码器在根据指示信息所指示的目标插值滤波器进行分像素插值、得到当前解码图像块的预测块之前，还可以通过用于当前解码的图像单元的目标插值滤波器的滤波器参数配置目标插值滤波器。
可选的，对应于编码端，若目标插值滤波器为训练得到的第二插值滤波器，编码信息还包括滤波器参数差值。解码端的视频解码器还可以从码流中解析出滤波器参数差值，该滤波器参数差值为训练得到的用于当前解码的图像单元的目标插值滤波器的滤波器参数相对于训练得到的用于在先解码的图像单元的目标插值滤波器的滤波器参数的差值；视频解码器在根据指示信息所指示的目标插值滤波器进行分像素插值、得到当前解码图像块的预测块之前，还可以：根据用于在先解码的图像单元的目标插值滤波器的滤波器参数和滤波器参数差值得到用于当前解码的图像单元的目标插值滤波器的滤波器参数；进而，通过用于当前解码的图像单元的目标插值滤波器的滤波器参数配置目标插值滤波器。
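上述“参数差值”机制可以用如下示意代码表示：用在先图像单元的滤波器参数加上解析出的差值，重建并配置当前图像单元的滤波器参数。其中参数的逐元素列表表示与 load_params 接口均为本示例的假设：

```python
def configure_filter_params(target_filter, prev_params, param_delta):
    """示意：由在先图像单元的滤波器参数与码流中的参数差值重建当前参数。

    prev_params / param_delta 为等长的数值列表，target_filter.load_params
    为假设的配置接口，均非本申请限定的实现。
    """
    # 当前参数 = 在先图像单元参数 + 码流中解析出的差值（逐元素相加）
    cur_params = [p + d for p, d in zip(prev_params, param_delta)]
    target_filter.load_params(cur_params)  # 用重建出的参数配置目标插值滤波器
    return cur_params
```

只传差值而非完整参数，可以在滤波器参数随图像单元缓慢变化时减少码流开销，这也是该设计的直观动机。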
在本申请的另一实施例中，编码端和解码端的目标插值滤波器的滤波器参数可以固定为预设的滤波器参数。此时，编码信息可以不包括目标插值滤波器的滤波器参数或滤波器参数差值，解码端也不需要对该目标插值滤波器的滤波器参数或滤波器参数差值进行解析。
可选地，图像单元是图像帧、条带（slice）、视频序列子组、编码树单元（CTU）、编码单元（CU）或预测单元（PU）等。即，视频解码器可以以图像单元为单位更新滤波器参数。
以下结合图11提供本申请又一种视频图像解码方法的流程示意图，该方法可以包括但不限于如下部分或全部步骤：
S1101:从码流中解析出当前解码图像块的用于指示当前解码图像块的帧间预测模式的信息。
S1102:判断用于指示当前解码图像块的帧间预测模式的信息所指定的帧间预测模式是否为合并模式。
若判断结果为是,即用于指示所述当前解码图像块的帧间预测模式的信息所指定的帧间预测模式为合并模式,则执行步骤S1103;否则,用于指示所述当前解码图像块的帧间预测模式的信息所指定的帧间预测模式为非合并模式,则执行步骤S1105。
S1103:获取在合并模式下合并到的在先解码的图像块的运动信息和目标插值滤波器的指示信息。
应理解,在合并模式下合并到的在先解码的图像块的运动信息即为当前解码图像块的运动信息。
S1104:判断当前解码图像块的运动信息是否指向整数像素位置。
步骤S1103之后可以执行步骤S1104。若S1104的判断结果为是,则该运动信息所指向的整数像素位置所对应的图像块即为当前解码图像块的预测块,视频解码器可以执行步骤S1109,否则,执行步骤S1108。
S1105:从码流中解析出当前解码图像块的运动信息。
S1106:判断当前解码图像块的运动信息是否指向整数像素位置。
步骤S1105之后可以执行步骤S1106。若S1106的判断结果为是，则说明该运动信息所指向的整数像素位置所对应的图像块即为当前解码图像块的预测块，视频解码器可以执行步骤S1109；否则，说明当前解码图像块的预测块为分像素图像，视频解码器执行步骤S1107。
S1107：解析出用于当前解码图像块的目标插值滤波器的指示信息。
S1108：根据目标插值滤波器的指示信息所指示的目标插值滤波器进行分像素插值，得到所述当前解码图像块的预测块。
S1109:基于当前解码图像块的预测块,重建当前解码图像块的重建块。
进一步地，视频解码器判断当前解码图像块是否为最后一个待解码的图像块：如果是，则解码过程结束；否则，对下一个待解码的图像块执行上述解码流程。
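图11所示的完整解码流程可以用如下 Python 伪代码概括（dec 及其各方法均为本示例假设的解码器接口，注释中的步骤编号对应上文S1101-S1109）：

```python
def decode_block_fig11(dec):
    """示意：图11解码流程的伪实现，仅用于梳理各步骤的先后与分支关系。"""
    mode = dec.parse_inter_mode()                   # S1101/S1102：解析并判断帧间预测模式
    if mode == "merge":
        mv, filt = dec.inherit_motion_and_filter()  # S1103：继承运动信息与滤波器指示
    else:
        mv = dec.parse_motion_info()                # S1105：从码流解析运动信息
        filt = None
    if mv.is_integer():                             # S1104/S1106：是否指向整数像素位置
        pred = dec.fetch_integer_prediction(mv)     # 整像素位置直接取出预测块
    else:
        if filt is None:
            filt = dec.parse_filter_indication()    # S1107：非合并模式下解析指示信息
        pred = dec.interpolate(filt, mv)            # S1108：分像素插值得到预测块
    return dec.reconstruct(pred)                    # S1109：重建当前解码图像块
```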
下面介绍本发明实施例涉及的装置。
请参阅图12，图12是本发明实施例提供的一种插值滤波器训练装置的示意性框图。应理解，插值滤波器训练装置1200可以设置于编码器100中的帧间预测单元中，或设置于编码器100中的其他单元中。还应当理解的是，插值滤波器的训练还可以通过计算设备来实现，该计算设备可以是计算机、服务器或其他包括可实现数据处理的器件或设备。该插值滤波器训练装置1200可以包括但不限于标签数据获取模块1201、插值模块1202以及参数确定模块1203。其中：
标签数据获取模块1201,用于通过第一插值滤波器对样本图像在整数像素位置的像素进行插值,得到所述样本图像在第一分数像素位置的第一分像素图像;
插值模块1202,用于将所述样本图像输入到第二插值滤波器中,得到第二分像素图像;
参数确定模块1203,用于通过最小化用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数确定所述第二插值滤波器的滤波器参数。
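作为理解性示例，下面用 PyTorch 给出上述三个模块协作的一个极简训练步骤：以第一插值滤波器的输出作为标签，最小化第一函数（此处以均方误差为例）。其中的网络结构、损失选择与超参数均为本示例的假设，并非本申请限定的第二插值滤波器结构：

```python
import torch
import torch.nn as nn

class FractionalInterpCNN(nn.Module):
    """假设的第二插值滤波器：一个极简卷积网络，仅作结构示意。"""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, sample, first_filter, optimizer):
    """示意：标签数据获取模块、插值模块与参数确定模块的一次配合。

    sample 为整像素样本图像张量（N,1,H,W），first_filter 为传统插值
    滤波器（第一插值滤波器）的可调用封装，均为本示例的假设。
    """
    label = first_filter(sample)                 # 第一分像素图像（标签数据）
    pred = model(sample)                         # 第二分像素图像
    loss = nn.functional.mse_loss(pred, label)   # 第一函数的一种常见选择
    optimizer.zero_grad()
    loss.backward()                              # 通过最小化第一函数更新滤波器参数
    optimizer.step()
    return loss.item()
```

使用时可按 optimizer = torch.optim.Adam(model.parameters(), lr=1e-4) 构造优化器并在每个图像单元上迭代调用 train_step（学习率等超参数同样为假设）。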
请参阅图13，图13是本发明实施例提供的另一种插值滤波器训练装置的示意性框图，该插值滤波器训练装置1300可以包括但不限于标签数据获取模块1301、插值模块1302、逆插值模块1303以及参数确定模块1304。其中：
标签数据获取模块1301,用于通过第一插值滤波器对样本图像在整数像素位置的像素进行插值,得到所述样本图像在第一分数像素位置的第一分像素图像;
插值模块1302,用于将所述样本图像输入到第二插值滤波器中,得到第二分像素图像;
逆插值模块1303,用于将所述第二分像素图像经过翻转运算输入到第三插值滤波器中,得到第一图像,并将所述第一图像通过所述翻转运算的逆运算得到第二图像,其中,所述第二插值滤波器和所述第三插值滤波器共享滤波器参数;
参数确定模块1304,用于根据用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数和用于表示所述样本图像与所述第二图像的差值的第二函数确定所述滤波器参数。
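装置1300的前向过程可用如下 PyTorch 示意代码表示：第二分像素图像经翻转送入与第二插值滤波器共享参数的第三插值滤波器得到第一图像，再经逆翻转得到第二图像，并分别计算第一函数与第二函数。其中翻转维度 flip_dims 与损失选择（均方误差）均为本示例的假设：

```python
import torch

def invertibility_losses(model, sample, first_filter, flip_dims=(-1,)):
    """示意：图13装置的前向过程；model 同时充当共享参数的第二/第三插值滤波器。"""
    label = first_filter(sample)                    # 第一分像素图像（标签）
    half = model(sample)                            # 第二分像素图像
    first_img = model(torch.flip(half, flip_dims))  # 翻转后送入共享参数的第三滤波器
    second_img = torch.flip(first_img, flip_dims)   # 翻转运算的逆运算（翻转自身可逆）
    loss1 = torch.nn.functional.mse_loss(half, label)        # 第一函数
    loss2 = torch.nn.functional.mse_loss(second_img, sample) # 第二函数
    return loss1, loss2
```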
在一种可能的实施方式中,参数确定模块1304具体用于:通过最小化第三函数确定所述滤波器参数,其中,所述第三函数为用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数和用于表示所述样本图像与所述第二图像的差值的第二函数的加权求和。
在一种可能的实施方式中,参数确定模块1304具体用于:通过交替最小化用于表示所述第一分像素图像与所述第二分像素图像的差值的第一损失函数和用于表示所述样本图像与所述第二图像的差值的第二函数确定所述滤波器参数。
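上述两种参数确定方式可示意如下：第三函数取两个函数的加权求和，或按奇偶步交替最小化两个函数。以下沿用上一示例中的 invertibility_losses，权重与交替策略均为本示例的假设：

```python
def combined_loss(loss1, loss2, w1=1.0, w2=1.0):
    """示意：第三函数为第一函数与第二函数的加权求和，w1/w2 为假设的超参数。"""
    return w1 * loss1 + w2 * loss2

def alternating_step(model, sample, first_filter, optimizer, step_idx):
    """示意：交替最小化——奇偶步各优化其中一个函数，交替方式仅为一种假设实现。"""
    loss1, loss2 = invertibility_losses(model, sample, first_filter)
    loss = loss1 if step_idx % 2 == 0 else loss2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

加权求和将两个目标合成一次反向传播；交替最小化则在两个目标之间轮换，二者都是让第二插值滤波器在逼近标签的同时保持可逆一致性的常见做法。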
图14为本申请实施例提供的又一种插值滤波器训练装置的示意性框图，该装置1400可以包括处理器1410以及存储器1420，所述存储器1420通过总线1430连接到处理器1410，该存储器1420用于存储用于实现上述任一种插值滤波器训练方法的程序代码，该处理器1410用于调用该存储器存储的程序代码执行本申请描述的各种插值滤波器训练方法，具体可参见图6A-图6D所述的插值滤波器训练方法实施例中的相关描述，本申请实施例不再赘述。
装置1400可以采用包含多个计算设备的计算系统的形式,或采用例如移动电话、平板计算机、膝上型计算机、笔记本电脑、台式计算机等单个计算设备的形式。
装置1400中的处理器1410可以为中央处理器。或者，处理器1410可以为现有的或今后将研发出的能够操控或处理信息的任何其它类型的设备或多个设备。如图14所示，虽然可以使用例如处理器1410的单个处理器实践所揭示的实施方式，但是使用一个以上处理器可以实现速度和效率方面的优势。
在一实施方式中，装置1400中的存储器1420可以为只读存储器（Read Only Memory，ROM）设备或随机存取存储器（random access memory，RAM）设备。任何其他合适类型的存储设备都可以用作存储器1420。存储器1420可以包括代码和由处理器1410使用总线1430访问的数据1401（例如样本图像）。存储器1420可进一步包括操作系统1402和应用程序1403，应用程序1403包含至少一个准许处理器1410执行本文所描述的方法的程序。例如，应用程序1403可以包括应用1到N，应用1到N进一步包括执行本文所描述的方法的视频译码应用。装置1400还可包含采用从存储器形式的附加存储器，该从存储器例如可以为与移动计算设备一起使用的存储卡。因为视频通信会话可能含有大量信息，这些信息可以整体或部分存储在从存储器中，并按需要加载到存储器1420用于处理。
可选地,装置1400还可以包括但不限于通信接口或模块、输入/输出装置等,通信接口或模块用于实现装置1400与其他设备(例如,编码设备或解码设备)之间的数据交换。输入装置用于实现信息(文字、图像、声音等)或指令的输入,可以包括但不限于触控屏、键盘、摄像头、录音器等。输出装置用于实现信息(文字、图像、声音等)或指令的输出,可以包括但不限于显示器、扩音器等,对此,本申请不作限定。
图15为本申请实施例提供的一种用于实现图7或图8所述的视频图像编码方法的编码器的示意性框图。
对应于图7所示的视频图像编码方法,在本申请的一实施例中,编码器1500中各个单元的具体功能如下:
帧间预测单元1501，用于对当前编码图像块进行帧间预测，得到所述当前编码图像块的运动信息，其中，所述当前编码图像块的运动信息指向分数像素位置；所述帧间预测单元包括滤波器选择单元1502，滤波器选择单元1502具体用于：从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器；
熵编码单元1503，用于基于所述当前编码图像块的帧间预测模式和所述当前编码图像块的运动信息对所述当前编码图像块进行编码，得到编码信息，将所述编码信息编入码流，其中，所述编码信息包括目标插值滤波器的指示信息；所述目标插值滤波器的指示信息用于指示通过所述目标插值滤波器进行分像素插值得到所述当前编码图像块对应的分数像素位置的参考块。
对应于图8所示的视频图像编码方法，在本申请的另一实施例中，编码器1500中各个单元的具体功能如下：
帧间预测单元1501，用于对当前编码图像块进行帧间预测，得到所述当前编码图像块的运动信息，其中，所述当前编码图像块的运动信息指向分数像素位置；所述帧间预测单元包括滤波器选择单元1502，所述滤波器选择单元1502用于：从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器；
熵编码单元1503，用于基于所述当前编码图像块的帧间预测模式和所述当前编码图像块的运动信息对所述当前编码图像块进行编码，得到编码信息，将所述编码信息编入到码流，其中，若所述当前编码图像块的帧间预测模式是目标帧间预测模式，所述编码信息不包括所述目标插值滤波器的指示信息；若所述当前编码图像块的帧间预测模式为非目标帧间预测模式，所述编码信息包括所述目标插值滤波器的指示信息，所述目标插值滤波器的指示信息用于指示所述当前编码图像块采用所述目标插值滤波器进行分像素插值。
在本申请实施例的一种可能的实现中,所述滤波器选择单元1502具体用于根据率失真代价准则从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器。
在本申请实施例的又一种可能的实现中,所述帧间预测单元1501具体用于:
确定与所述当前编码图像块最优匹配的整像素参考图像块;
通过候选插值滤波器集合中每一个插值滤波器对所述整像素参考图像块进行分像素插值,得到N个分像素参考图像块,N为正整数;
在所述整像素参考图像块和所述N个分像素参考图像块中确定与所述当前编码图像块最优匹配的预测块;
基于所述预测块确定所述运动信息,其中,插值得到所述预测块的插值滤波器即为目标插值滤波器。
在本申请实施例的又一种可能的实现中,所述候选插值滤波器集合包括通过上述图6A-图6D所述的任一种插值滤波器的训练方法得到的第二插值滤波器。
可选地，若所述目标滤波器为通过上述图6A-图6D所述的任一种插值滤波器的训练方法得到的第二插值滤波器，则：所述目标插值滤波器的滤波器参数为预设的滤波器参数；或者，所述目标插值滤波器的滤波器参数为根据上述图6A-图6D所述的任一种插值滤波器的训练方法得到的滤波器参数。
进一步地，所述编码信息还包括训练得到的所述目标插值滤波器的滤波器参数；或者，所述编码信息还包括滤波器参数差值，所述滤波器参数差值为训练得到的用于当前图像单元的目标插值滤波器的滤波器参数相对于训练得到的用于在先编码的图像单元的目标插值滤波器的滤波器参数的差值。
可选地,所述图像单元包括图像帧、条带(slice)、视频序列子组、编码树单元(CTU)、编码单元(CU)或预测单元(PU)。
图16为本申请实施例提供的一种用于实现图9-图11所述的视频图像解码方法的解码器的示意性框图。
对应于图9所示的视频图像解码方法,在本申请的一实施例中,解码器1600中各个单元的具体功能如下:
熵解码单元1601,用于从码流中解析出目标插值滤波器的指示信息;以及,获取当前解码图像块的运动信息,其中,所述运动信息指向分数像素位置;
帧间预测单元1602,用于基于所述当前解码图像块的运动信息对所述当前解码图像块执行预测过程,其中,所述预测过程包括:根据所述指示信息所指示的目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块。
重构单元1603,用于基于所述当前解码图像块的预测块,重建所述当前解码图像块的重建块。
对应于图10或图11所示的视频图像解码方法，在本申请的另一实施例中，解码器1600中各个单元的具体功能如下：
熵解码单元1601，用于从码流中解析出当前解码图像块的用于指示所述当前解码图像块的帧间预测模式的信息；
帧间预测单元1602,用于获取所述当前解码图像块的运动信息,其中,所述运动信息指向分数像素位置;若所述当前图像块的帧间预测模式为非目标帧间预测模式,基于所述当前解码图像块的运动信息对所述当前解码图像块执行预测过程,其中,所述预测过程包括:根据从所述码流中解析出的目标插值滤波器的指示信息所指示的目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块;
重构单元1603,用于基于所述当前解码图像块的预测块,对所述当前解码图像块进行重建。
可选地,帧间预测单元1602还用于:若所述当前图像块的帧间预测模式是目标帧间预测模式,基于所述当前解码图像块的运动信息对所述当前解码图像块执行预测过程,其中,所述预测过程包括:确定用于所述当前解码图像块的目标插值滤波器;通过所述目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块。
应理解，若所述当前图像块的帧间预测模式是目标帧间预测模式，所述帧间预测单元1602确定用于所述当前解码图像块的目标插值滤波器，具体包括：确定所述在先解码的图像块在解码过程中使用的插值滤波器为所述用于所述当前解码图像块的目标插值滤波器；或，确定所述用于所述当前解码图像块的目标插值滤波器为从所述码流中解析出的目标插值滤波器的指示信息所指示的目标插值滤波器。
在本申请实施例的一种可能的实现中,解码器1600获取当前解码图像块的运动信息可以包括但不限于以下三种实施方式:
第一实施方式:
在非目标帧间预测模式（比如非合并模式）下，所述熵解码单元1601具体用于从码流中解析出所述当前解码图像块的运动信息的索引；
所述帧间预测单元1602，还用于基于所述当前解码图像块的运动信息的索引和所述当前解码图像块的候选运动信息列表确定所述当前解码图像块的运动信息。
第二实施方式:
在非目标帧间预测模式（比如非合并模式）下，所述熵解码单元1601具体用于：从码流中解析出所述当前解码图像块的运动信息的索引和运动矢量差值；
所述帧间预测单元1602还用于：基于所述当前解码图像块的运动信息的索引和所述当前解码图像块的候选运动信息列表确定当前解码图像块的运动矢量预测值；以及，基于所述运动矢量预测值和所述运动矢量差值，得到所述当前解码图像块的运动矢量。
第三实施方式：在目标帧间预测模式（比如合并模式）下，所述帧间预测单元1602具体用于获取在合并模式下合并到的在先解码的图像块的运动信息，即为所述当前解码图像块的运动信息。
在本申请实施例的一种可能的实现中，若所述目标滤波器为上述图6A-图6D所述的任一种插值滤波器的训练方法得到的第二插值滤波器，则：所述目标插值滤波器的滤波器参数为预设的滤波器参数；或者，所述目标插值滤波器的滤波器参数为上述图6A-图6D所述的任一种插值滤波器的训练方法得到的滤波器参数。
在本申请实施例的一种可能的实现中,所述熵解码单元1601还用于:从码流中解析出用于当前解码的图像单元的目标插值滤波器的滤波器参数;
所述解码器1600还包括配置单元1604,用于通过所述当前解码的图像单元的目标插值滤波器的滤波器参数配置所述目标插值滤波器。
在本申请实施例的一种可能的实现中，所述熵解码单元1601还用于：从码流中解析出滤波器参数差值，所述滤波器参数差值为用于当前解码的图像单元的目标插值滤波器的滤波器参数相对于用于在先解码的图像单元的目标插值滤波器的滤波器参数的差值；
所述解码器还包括:配置单元1604,用于根据所述在先解码的图像单元的目标插值滤波器的滤波器参数和所述滤波器参数差值得到所述当前解码的图像单元的目标插值滤波器的滤波器参数;以及,通过所述当前解码的图像单元的目标插值滤波器的滤波器参数配置所述目标插值滤波器。
可选地,所述图像单元包括图像帧、条带(slice)、视频序列子组、编码树单元(CTU)、编码单元(CU)或预测单元(PU)。
图17为本申请实施例提供的一种用于实现图7或图8所述的视频图像编码方法的编码设备或用于实现图9-图11所述的视频图像解码方法的解码设备的示意性框图，该设备1700可以包括处理器1702以及存储器1704，所述存储器1704通过总线1712连接到处理器1702，该存储器1704用于存储用于实现上述任一种视频图像编码/解码方法的程序代码，该处理器1702用于调用该存储器存储的程序代码执行本申请描述的各种视频图像编码/解码方法，具体可参见图7-图11所述的视频图像编码/解码方法的方法实施例中相关描述，本申请实施例不再赘述。
该设备1700可以采用包含多个计算设备的计算系统的形式,或采用例如移动电话、平板计算机、膝上型计算机、笔记本电脑、台式计算机等单个计算设备的形式。
设备1700中的处理器1702可以为中央处理器。或者,处理器1702可以为现有的或今后将研发出的能够操控或处理信息的任何其它类型的设备或多个设备。如图所示,虽然可以使用例如处理器1702的单个处理器实践所揭示的实施方式,但是使用一个以上处理器可以实现速度和效率方面的优势。
在一实施方式中,设备1700中的存储器1704可以为只读存储器(Read Only Memory,ROM)设备或随机存取存储器(random access memory,RAM)设备。任何其他合适类型的存储设备都可以用作存储器1704。存储器1704可以包括代码和由处理器1702使用总线1712访问的数据1706。存储器1704可进一步包括操作系统1708和应用程序1710,应用程序1710包含至少一个准许处理器1702执行本文所描述的方法的程序。例如,应用程序1710可以包括应用1到N,应用1到N进一步包括执行本文所描述的方法的视频译码应用。设备1700还可包含采用从存储器1714形式的附加存储器,该从存储器1714例如可以为与移动计算设备一起使用的存储卡。因为视频通信会话可能含有大量信息,这些信息可以整体或部分存储在从存储器1714中,并按需要加载到存储器1704用于处理。
设备1700还可包含一或多个输出设备，例如显示器1718。在一个实例中，显示器1718可以为将显示器和可操作以感测触摸输入的触敏元件组合的触敏显示器。显示器1718可以通过总线1712耦合于处理器1702。除了显示器1718之外，还可以提供其它准许用户对设备1700编程或以其它方式使用设备1700的输出设备，或提供其它输出设备作为显示器1718的替代方案。当输出设备是显示器或包含显示器时，显示器可以以不同方式实现，包含通过液晶显示器（liquid crystal display，LCD）、阴极射线管（cathode-ray tube，CRT）显示器、等离子显示器或发光二极管（light emitting diode，LED）显示器，如有机LED（organic LED，OLED）显示器。
设备1700还可包含图像感测设备1720或与其连通,图像感测设备1720例如为相机或为现有的或今后将研发出的可以感测图像的任何其它图像感测设备1720,所述图像例如为运行设备1700的用户的图像。图像感测设备1720可以放置为直接面向运行设备1700的用户。在一实例中,可以配置图像感测设备1720的位置和光轴以使其视野包含紧邻显示器1718的区域且从该区域可见显示器1718。
设备1700还可包含声音感测设备1722或与其连通,声音感测设备1722例如为麦克风或为现有的或今后将研发出的可以感测装置1700附近的声音的任何其它声音感测设备。声音感测设备1722可以放置为直接面向运行装置1700的用户,并可以用于接收用户在运行装置1700时发出的声音,例如语音或其它发声。
虽然图17中将设备1700的处理器1702和存储器1704绘示为集成在单个单元中，但是还可以使用其它配置。处理器1702的运行可以分布在多个可直接耦合的机器中（每个机器具有一个或多个处理器），或分布在本地区域或其它网络中。存储器1704可以分布在多个机器中，例如基于网络的存储器或多个运行设备1700的机器中的存储器。虽然此处只绘示单个总线，但设备1700的总线1712可以由多个总线形成。进一步地，从存储器1714可以直接耦合至设备1700的其它组件或可以通过网络访问，并且可包括单个集成单元，例如一个存储卡，或多个单元，例如多个存储卡。因此，可以以多种配置实施设备1700。
在本文中,该处理器可以是中央处理单元(Central Processing Unit,简称为“CPU”),该处理器还可以是其他通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现成可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
该存储器可以包括只读存储器(ROM)设备或者随机存取存储器(RAM)设备。任何其他适宜类型的存储设备也可以用作存储器。存储器可以包括由处理器使用总线访问的代码和数据。存储器可以进一步包括操作系统和应用程序,该应用程序包括允许处理器执行本申请描述的视频编码或解码方法(尤其是本申请描述的帧间预测方法或运动信息预测方法)的至少一个程序。例如,应用程序可以包括应用1至N,其进一步包括执行在本申请描述的视频图像编码或解码方法的视频编码或解码应用(简称视频译码应用)。
该总线系统除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图中将各种总线都标为总线系统。
可选的,译码设备还可以包括一个或多个输出设备,诸如显示器。在一个示例中,显示器可以是触感显示器,其将显示器与可操作地感测触摸输入的触感单元合并。显示器可以经由总线连接到处理器。
本领域技术人员能够领会，结合本文公开描述的各种说明性逻辑框、模块和算法步骤所描述的功能可以硬件、软件、固件或其任何组合来实施。如果以软件来实施，那么各种说明性逻辑框、模块、和步骤描述的功能可作为一或多个指令或代码在计算机可读媒体上存储或传输，且由基于硬件的处理单元执行。计算机可读媒体可包含计算机可读存储媒体，其对应于有形媒体，例如数据存储媒体，或包括任何促进将计算机程序从一处传送到另一处的媒体（例如，根据通信协议）的通信媒体。以此方式，计算机可读媒体大体上可对应于（1）非暂时性的有形计算机可读存储媒体，或（2）通信媒体，例如信号或载波。数据存储媒体可为可由一或多个计算机或一或多个处理器存取以检索用于实施本申请中描述的技术的指令、代码和/或数据结构的任何可用媒体。计算机程序产品可包含计算机可读媒体。
作为实例而非限制,此类计算机可读存储媒体可包括RAM、ROM、EEPROM、CD-ROM或其它光盘存储装置、磁盘存储装置或其它磁性存储装置、快闪存储器或可用来存储指令或数据结构的形式的所要程序代码并且可由计算机存取的任何其它媒体。并且,任何连接被恰当地称作计算机可读媒体。举例来说,如果使用同轴缆线、光纤缆线、双绞线、数字订户线(DSL)或例如红外线、无线电和微波等无线技术从网站、服务器或其它远程源传输指令,那么同轴缆线、光纤缆线、双绞线、DSL或例如红外线、无线电和微波等无线技术包含在媒体的定义中。但是,应理解,所述计算机可读存储媒体和数据存储媒体并不包括连接、载波、信号或其它暂时媒体,而是实际上针对于非暂时性有形存储媒体。如本文中所使用,磁盘和光盘包含压缩光盘(CD)、激光光盘、光学光盘、数字多功能光盘(DVD)和蓝光光盘,其中磁盘通常以磁性方式再现数据,而光盘利用激光以光学方式再现数据。以上各项的组合也应包含在计算机可读媒体的范围内。
可通过例如一或多个数字信号处理器(DSP)、通用微处理器、专用集成电路(ASIC)、现场可编程逻辑阵列(FPGA)或其它等效集成或离散逻辑电路等一或多个处理器来执行指令。因此,如本文中所使用的术语“处理器”可指前述结构或适合于实施本文中所描述的技术的任一其它结构中的任一者。另外,在一些方面中,本文中所描述的各种说明性逻辑框、模块、和步骤所描述的功能可以提供于经配置以用于编码和解码的专用硬件和/或软件模块内,或者并入在组合编解码器中。而且,所述技术可完全实施于一或多个电路或逻辑元件中。
本申请的技术可在各种各样的装置或设备中实施,包含无线手持机、集成电路(IC)或一组IC(例如,芯片组)。本申请中描述各种组件、模块或单元是为了强调用于执行所揭示的技术的装置的功能方面,但未必需要由不同硬件单元实现。实际上,如上文所描述,各种单元可结合合适的软件和/或固件组合在编码解码器硬件单元中,或者通过互操作硬件单元(包含如上文所描述的一或多个处理器)来提供。
以上所述,仅为本申请示例性的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应该以权利要求的保护范围为准。

Claims (76)

  1. 一种插值滤波器训练方法,其特征在于,包括:
    通过第一插值滤波器对样本图像在整数像素位置的像素进行插值,得到所述样本图像在第一分数像素位置的第一分像素图像;
    将所述样本图像输入到第二插值滤波器中,得到第二分像素图像;
    通过最小化用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数确定所述第二插值滤波器的滤波器参数。
  2. 一种插值滤波器训练方法，其特征在于，包括：
    通过第一插值滤波器对样本图像在整数像素位置的像素进行插值,得到所述样本图像在第一分数像素位置的第一分像素图像;
    将所述样本图像输入到第二插值滤波器中,得到第二分像素图像;
    将所述第二分像素图像经过翻转运算输入到第三插值滤波器中,得到第一图像,并将所述第一图像通过所述翻转运算的逆运算得到第二图像,其中,所述第二插值滤波器和所述第三插值滤波器共享滤波器参数;
    根据用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数和用于表示所述样本图像与所述第二图像的差值的第二函数确定所述滤波器参数。
  3. 根据权利要求2所述的方法，其特征在于，所述根据用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数和用于表示所述样本图像与所述第二图像的差值的第二函数确定所述滤波器参数，具体包括：
    通过最小化第三函数确定所述滤波器参数,其中,所述第三函数为用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数和用于表示所述样本图像与所述第二图像的差值的第二函数的加权求和。
  4. 根据权利要求2所述的方法,其特征在于,所述根据用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数和用于表示所述样本图像与所述第二图像的差值的第二函数确定所述滤波器参数,具体包括:
    通过交替最小化用于表示所述第一分像素图像与所述第二分像素图像的差值的第一损失函数和用于表示所述样本图像与所述第二图像的差值的第二函数确定所述滤波器参数。
  5. 一种视频图像编码方法,其特征在于,包括:
    对当前编码图像块进行帧间预测，得到所述当前编码图像块的运动信息，其中，所述当前编码图像块的运动信息指向分数像素位置，所述帧间预测过程包括：从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器；
    基于所述当前编码图像块的帧间预测模式和所述当前编码图像块的运动信息对所述当前编码图像块进行编码，得到编码信息，将所述编码信息编入码流，其中，所述编码信息包括目标插值滤波器的指示信息；所述目标插值滤波器的指示信息用于指示通过所述目标插值滤波器进行分像素插值得到所述当前编码图像块对应的分数像素位置的参考块。
  6. 根据权利要求5所述的方法,其特征在于,所述从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器,包括:
    根据率失真代价准则从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器。
  7. 根据权利要求5所述的方法,其特征在于,所述对所述当前编码图像块进行帧间预测,得到所述当前编码图像块的运动信息,包括:
    确定与所述当前编码图像块最优匹配的整像素参考图像块;
    通过候选插值滤波器集合中每一个插值滤波器对所述整像素参考图像块进行分像素插值,得到N个分像素参考图像块,N为正整数;
    在所述整像素参考图像块和所述N个分像素参考图像块中确定与所述当前编码图像块最优匹配的预测块;
    基于所述预测块确定所述运动信息,其中,插值得到所述预测块的插值滤波器即为目标插值滤波器。
  8. 根据权利要求5-7任一项所述的方法,其特征在于,所述候选插值滤波器集合包括通过如权利要求1-4任意权利要求所述插值滤波器的训练方法得到的第二插值滤波器。
  9. 根据权利要求8所述的方法,其特征在于,若所述目标滤波器为通过所述如权利要求1-4任意权利要求所述的插值滤波器的训练方法得到的第二插值滤波器,则:
    所述目标插值滤波器的滤波器参数为预设的滤波器参数;或者,所述目标插值滤波器的滤波器参数为根据权利要求1-4所述的插值滤波器的训练方法得到的滤波器参数。
  10. 根据权利要求9所述的方法,其特征在于,所述编码信息还包括训练得到的所述目标插值滤波器的滤波器参数;或者,所述编码信息还包括滤波器参数差值,所述滤波器参数差值为训练得到的用于当前图像单元的目标插值滤波器的滤波器参数相对于训练得到的用于在先编码的图像单元的目标插值滤波器的滤波器参数。
  11. 根据权利要求10所述的方法,其特征在于,所述图像单元包括图像帧、条带(slice)、视频序列子组、编码树单元(CTU)、编码单元(CU)或预测单元(PU)。
  12. 一种视频图像编码方法,其特征在于,包括:
    对当前编码图像块进行帧间预测，得到所述当前编码图像块的运动信息，其中，所述当前编码图像块的运动信息指向分数像素位置，所述帧间预测过程包括：从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器；
    基于所述当前编码图像块的帧间预测模式和所述当前编码图像块的运动信息对所述当前编码图像块进行编码,得到编码信息,将所述编码信息编入到码流,其中,若所述当前编码图像块的帧间预测模式是目标帧间预测模式,所述编码信息不包括所述目标插值滤波器的指示信息;若所述当前编码图像块的帧间预测模式为非目标帧间预测模式,所述编码信息包括所述目标插值滤波器的指示信息,所述目标插值滤波器的指示信息用于指示所述当前编码图像块采用所述目标插值滤波器进行分像素插值。
  13. 根据权利要求12所述的方法,其特征在于,所述从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器,包括:
    根据率失真代价准则从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器。
  14. 根据权利要求12所述的方法,其特征在于,所述对所述当前编码图像块进行帧间预测,得到所述当前编码图像块的运动信息,包括:
    确定与所述当前编码图像块最优匹配的整像素参考图像块;
    通过候选插值滤波器集合中每一个插值滤波器对所述整像素参考图像块进行分像素插值,得到N个分像素参考图像块,N为正整数;
    在所述整像素参考图像块和所述N个分像素参考图像块中确定与所述当前编码图像块最优匹配的预测块;
    基于所述预测块确定所述运动信息,其中,插值得到所述预测块的插值滤波器即为目标插值滤波器。
  15. 根据权利要求12-14任一项所述的方法,其特征在于,所述候选插值滤波器集合包括通过如权利要求1-4任意权利要求所述的插值滤波器的训练方法得到的第二插值滤波器。
  16. 根据权利要求15所述的方法,其特征在于,若所述目标滤波器为通过所述如权利要求1-4任意权利要求所述的插值滤波器的训练方法得到的第二插值滤波器,则:
    所述目标插值滤波器的滤波器参数为预设的滤波器参数;或者,所述目标插值滤波器的滤波器参数为根据权利要求1-4所述的插值滤波器的训练方法得到的滤波器参数。
  17. 根据权利要求16所述的方法,其特征在于,所述编码信息还包括训练得到的所述目标插值滤波器的滤波器参数;或者,所述编码信息还包括滤波器参数差值,所述滤波器参数差值为训练得到的用于当前编码的图像单元的目标插值滤波器的滤波器参数相对于训练得到的用于在先编码的图像单元的目标插值滤波器的滤波器参数。
  18. 根据权利要求17所述的方法,其特征在于,所述图像单元包括图像帧、条带(slice)、视频序列子组、编码树单元(CTU)、编码单元(CU)或预测单元(PU)。
  19. 一种视频图像解码方法,其特征在于,包括:
    从码流中解析出目标插值滤波器的指示信息;
    获取当前解码图像块的运动信息,其中,所述运动信息指向分数像素位置;
    基于所述当前解码图像块的运动信息对所述当前解码图像块执行预测过程,其中,所述预测过程包括:根据所述指示信息所指示的目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块;
    基于所述当前解码图像块的预测块,重建所述当前解码图像块的重建块。
  20. 根据权利要求19所述的方法,其特征在于,所述获取当前解码图像块的运动信息,包括:
    从码流中解析出所述当前解码图像块的运动信息的索引；
    基于所述当前解码图像块的运动信息的索引和所述当前解码图像块的候选运动信息列表确定所述当前解码图像块的运动信息。
  21. 根据权利要求19所述的方法,其特征在于,所述获取当前解码图像块的运动信息,包括:
    从码流中解析出所述当前解码图像块的运动信息的索引和运动矢量差值；
    基于所述当前解码图像块的运动信息的索引和所述当前解码图像块的候选运动信息列表确定当前解码图像块的运动矢量预测值；
    基于所述运动矢量预测值和所述运动矢量差值,得到所述当前解码图像块的运动矢量。
  22. 根据权利要求19所述的方法,其特征在于,所述获取当前解码图像块的运动信息包括:
    若所述当前解码图像块的帧间预测模式为合并模式(merge mode),获取在所述合并模式下合并到的在先解码的图像块的运动信息,即为当前解码图像块的运动信息。
  23. 根据权利要求20-22任一项所述的方法,其特征在于,若所述目标滤波器为通过所述如权利要求1-4任意权利要求所述的插值滤波器的训练方法得到的第二插值滤波器,则:
    所述目标插值滤波器的滤波器参数为预设的滤波器参数;或者,所述目标插值滤波器的滤波器参数为根据权利要求1-4所述的插值滤波器的训练方法得到的滤波器参数。
  24. 根据权利要求23所述的方法,其特征在于,所述方法还包括:
    从码流中解析出用于当前解码的图像单元的目标插值滤波器的滤波器参数;
    通过所述用于当前解码的图像单元的目标插值滤波器的滤波器参数配置所述目标插值滤波器。
  25. 根据权利要求23所述的方法,其特征在于,所述方法还包括:
    从码流中解析出滤波器参数差值，所述滤波器参数差值为用于当前解码的图像单元的目标插值滤波器的滤波器参数相对于用于在先解码的图像单元的目标插值滤波器的滤波器参数的差值；
    根据所述在先解码的图像单元的目标插值滤波器的滤波器参数和所述滤波器参数差值得到所述当前解码的图像单元的目标插值滤波器的滤波器参数;
    通过所述当前解码的图像单元的目标插值滤波器的滤波器参数配置所述目标插值滤波器。
  26. 根据权利要求24或25所述的方法,其特征在于,所述图像单元包括图像帧、条带(slice)、视频序列子组、编码树单元(CTU)、编码单元(CU)或预测单元(PU)。
  27. 一种视频图像解码方法,其特征在于,包括:
    从码流中解析出当前解码图像块的用于指示所述当前解码图像块的帧间预测模式的信息;
    获取所述当前解码图像块的运动信息,其中,所述运动信息指向分数像素位置;
    若所述当前图像块的帧间预测模式为非目标帧间预测模式,基于所述当前解码图像块的运动信息对所述当前解码图像块执行预测过程,其中,所述预测过程包括:根据从所述码流中解析出的目标插值滤波器的指示信息所指示的目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块;
    基于所述当前解码图像块的预测块,对所述当前解码图像块进行重建。
  28. 根据权利要求27所述的方法,其特征在于,所述获取当前解码图像块的运动信息,包括:
    从码流中解析出所述当前解码图像块的运动信息的索引；
    基于所述当前解码图像块的运动信息的索引和所述当前解码图像块的候选运动信息列表确定所述当前解码图像块的运动信息。
  29. 根据权利要求27所述的方法,其特征在于,所述获取当前解码图像块的运动信息,包括:
    从码流中解析出所述当前解码图像块的运动信息的索引和运动矢量差值；
    基于所述当前解码图像块的运动信息的索引和所述当前解码图像块的候选运动信息列表确定所述当前解码图像块的运动矢量预测值；
    基于所述运动矢量预测值和所述运动矢量差值,得到所述当前解码图像块的运动矢量。
  30. 根据权利要求27所述的方法，其特征在于，若所述当前图像块的帧间预测模式是目标帧间预测模式，基于所述当前解码图像块的运动信息对所述当前解码图像块执行预测过程，其中，所述预测过程包括：确定用于所述当前解码图像块的目标插值滤波器；通过所述目标插值滤波器进行分像素插值，得到所述当前解码图像块的预测块。
  31. 根据权利要求30所述的方法,其特征在于,所述目标帧间预测模式为合并模式,其中,
    所述获取所述当前解码图像块的运动信息,包括:获取在所述合并模式下合并到的在先解码的图像块的运动信息;
    所述确定用于所述当前解码图像块的目标插值滤波器，包括：确定所述在先解码的图像块在解码过程中使用的插值滤波器为所述用于所述当前解码图像块的目标插值滤波器；或，确定所述用于所述当前解码图像块的目标插值滤波器为从所述码流中解析出的目标插值滤波器的指示信息所指示的目标插值滤波器。
  32. 根据权利要求27-31任一项所述的方法,其特征在于,若所述目标滤波器为通过所述如权利要求1-4任意权利要求所述的插值滤波器的训练方法得到的第二插值滤波器,则:
    所述目标插值滤波器的滤波器参数为预设的滤波器参数;或者,所述目标插值滤波器的滤波器参数为根据权利要求1-4所述的插值滤波器的训练方法得到的滤波器参数。
  33. 根据权利要求27-32任一项所述的方法,其特征在于,若所述目标滤波器为通过所述如权利要求1-4任意权利要求所述的插值滤波器的训练方法得到的第二插值滤波器,所述方法还包括:
    从码流中解析出用于当前解码的图像单元的目标插值滤波器的滤波器参数;
    通过所述当前解码的图像单元的目标插值滤波器的滤波器参数配置所述目标插值滤波器。
  34. 根据权利要求27-32任一项所述的方法,其特征在于,若所述目标滤波器为通过所述如权利要求1-4任意权利要求所述的插值滤波器的训练方法得到的第二插值滤波器,所述方法还包括:
    从码流中解析出滤波器参数差值，所述滤波器参数差值为用于当前解码的图像单元的目标插值滤波器的滤波器参数相对于用于在先解码的图像单元的目标插值滤波器的滤波器参数的差值；
    根据所述在先解码的图像单元的目标插值滤波器的滤波器参数和所述滤波器参数差值得到所述当前解码的图像单元的目标插值滤波器的滤波器参数;
    通过所述当前解码的图像单元的目标插值滤波器的滤波器参数配置所述目标插值滤波器。
  35. 根据权利要求33或34所述的方法,其特征在于,所述图像单元包括图像帧、条带(slice)、视频序列子组、编码树单元(CTU)、编码单元(CU)或预测单元(PU)。
  36. 一种插值滤波器训练装置,其特征在于,包括:
    标签数据获取模块,用于通过第一插值滤波器对样本图像在整数像素位置的像素进行插值,得到所述样本图像在第一分数像素位置的第一分像素图像;
    插值模块,用于将所述样本图像输入到第二插值滤波器中,得到第二分像素图像;
    参数确定模块,用于通过最小化用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数确定所述第二插值滤波器的滤波器参数。
  37. 一种插值滤波器训练装置,其特征在于,包括:
    标签数据获取模块,用于通过第一插值滤波器对样本图像在整数像素位置的像素进行插值,得到所述样本图像在第一分数像素位置的第一分像素图像;
    插值模块,用于将所述样本图像输入到第二插值滤波器中,得到第二分像素图像;
    逆插值模块,用于将所述第二分像素图像经过翻转运算输入到第三插值滤波器中,得到第一图像,并将所述第一图像通过所述翻转运算的逆运算得到第二图像,其中,所述第二插值滤波器和所述第三插值滤波器共享滤波器参数;
    参数确定模块,用于根据用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数和用于表示所述样本图像与所述第二图像的差值的第二函数确定所述滤波器参数。
  38. 根据权利要求37所述的装置,其特征在于,参数确定模块具体用于:通过最小化第三函数确定所述滤波器参数,其中,所述第三函数为用于表示所述第一分像素图像与所述第二分像素图像的差值的第一函数和用于表示所述样本图像与所述第二图像的差值的第二函数的加权求和。
  39. 根据权利要求37所述的装置,其特征在于,参数确定模块具体用于:通过交替最小化用于表示所述第一分像素图像与所述第二分像素图像的差值的第一损失函数和用于表示所述样本图像与所述第二图像的差值的第二函数确定所述滤波器参数。
  40. 一种编码器,其特征在于,包括:
    帧间预测单元，用于对当前编码图像块进行帧间预测，得到所述当前编码图像块的运动信息，其中，所述当前编码图像块的运动信息指向分数像素位置，所述帧间预测单元包括滤波器选择单元，用于从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器；
    熵编码单元,基于所述当前编码图像块的帧间预测模式和所述当前编码图像块的运动信息对所述当前编码图像块进行编码,得到编码信息,将所述编码信息编入码流,其中,所述编码信息包括目标插值滤波器的指示信息;所述目标插值滤波器的指示信息用于指示通过所述目标插值滤波器进行分像素插值得到所述当前编码图像块对应的分数像素位置的参考块。
  41. 根据权利要求40所述的编码器,其特征在于,所述滤波器选择单元具体用于根据率失真代价准则从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器。
  42. 根据权利要求40所述的编码器,其特征在于,所述帧间预测单元具体用于:
    确定与所述当前编码图像块最优匹配的整像素参考图像块;
    通过候选插值滤波器集合中每一个插值滤波器对所述整像素参考图像块进行分像素插值,得到N个分像素参考图像块,N为正整数;
    在所述整像素参考图像块和所述N个分像素参考图像块中确定与所述当前编码图像块最优匹配的预测块;
    基于所述预测块确定所述运动信息,其中,插值得到所述预测块的插值滤波器即为目标插值滤波器。
  43. 根据权利要求40-42任一项所述的编码器,其特征在于,所述候选插值滤波器集合包括通过如权利要求1-4任意权利要求所述插值滤波器的训练方法得到的第二插值滤波器。
  44. 根据权利要求43所述的编码器,其特征在于,若所述目标滤波器为通过所述如权利要求1-4任意权利要求所述的插值滤波器的训练方法得到的第二插值滤波器,则:
    所述目标插值滤波器的滤波器参数为预设的滤波器参数;或者,所述目标插值滤波器的滤波器参数为根据权利要求1-4所述的插值滤波器的训练方法得到的滤波器参数。
  45. 根据权利要求44所述的编码器,其特征在于,所述编码信息还包括训练得到的所述目标插值滤波器的滤波器参数;或者,所述编码信息还包括滤波器参数差值,所述滤波器参数差值为训练得到的用于当前图像单元的目标插值滤波器的滤波器参数相对于训练得到的用于在先编码的图像单元的目标插值滤波器的滤波器参数。
  46. 根据权利要求45所述的编码器,其特征在于,所述图像单元包括图像帧、条带(slice)、视频序列子组、编码树单元(CTU)、编码单元(CU)或预测单元(PU)。
  47. 一种编码器,其特征在于,包括:
    帧间预测单元，用于对当前编码图像块进行帧间预测，得到所述当前编码图像块的运动信息，其中，所述当前编码图像块的运动信息指向分数像素位置，所述帧间预测单元包括滤波器选择单元，所述滤波器选择单元用于：从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器；
    熵编码单元，用于基于所述当前编码图像块的帧间预测模式和所述当前编码图像块的运动信息对所述当前编码图像块进行编码，得到编码信息，将所述编码信息编入到码流，其中，若所述当前编码图像块的帧间预测模式是目标帧间预测模式，所述编码信息不包括所述目标插值滤波器的指示信息；若所述当前编码图像块的帧间预测模式为非目标帧间预测模式，所述编码信息包括所述目标插值滤波器的指示信息，所述目标插值滤波器的指示信息用于指示所述当前编码图像块采用所述目标插值滤波器进行分像素插值。
  48. 根据权利要求47所述的编码器,其特征在于,所述滤波器选择单元具体用于:
    根据率失真代价准则从候选插值滤波器集合中确定用于所述当前编码图像块的目标插值滤波器。
  49. 根据权利要求47所述的编码器,其特征在于,所述帧间预测单元具体用于:
    确定与所述当前编码图像块最优匹配的整像素参考图像块;
    通过候选插值滤波器集合中每一个插值滤波器对所述整像素参考图像块进行分像素插值,得到N个分像素参考图像块,N为正整数;
    在所述整像素参考图像块和所述N个分像素参考图像块中确定与所述当前编码图像块最优匹配的预测块;
    基于所述预测块确定所述运动信息,其中,插值得到所述预测块的插值滤波器即为目标插值滤波器。
  50. 根据权利要求47-49任一项所述的编码器,其特征在于,所述候选插值滤波器集合包括通过如权利要求1-4任意权利要求所述的插值滤波器的训练方法得到的第二插值滤波器。
  51. 根据权利要求50所述的编码器,其特征在于,若所述目标滤波器为通过所述如权利要求1-4任意权利要求所述的插值滤波器的训练方法得到的第二插值滤波器,则:
    所述目标插值滤波器的滤波器参数为预设的滤波器参数;或者,所述目标插值滤波器的滤波器参数为根据权利要求1-4所述的插值滤波器的训练方法得到的滤波器参数。
  52. 根据权利要求51所述的编码器,其特征在于,所述编码信息还包括训练得到的所述目标插值滤波器的滤波器参数;或者,所述编码信息还包括滤波器参数差值,所述滤波器参数差值为训练得到的用于当前编码的图像单元的目标插值滤波器的滤波器参数相对于训练得到的用于在先编码的图像单元的目标插值滤波器的滤波器参数。
  53. 根据权利要求52所述的编码器,其特征在于,所述图像单元包括图像帧、条带(slice)、视频序列子组、编码树单元(CTU)、编码单元(CU)或预测单元(PU)。
  54. 一种解码器,其特征在于,包括:
    熵解码单元,用于从码流中解析出目标插值滤波器的指示信息;以及,获取当前解码图像块的运动信息,其中,所述运动信息指向分数像素位置;
    帧间预测单元，用于基于所述当前解码图像块的运动信息对所述当前解码图像块执行预测过程，其中，所述预测过程包括：根据所述指示信息所指示的目标插值滤波器进行分像素插值，得到所述当前解码图像块的预测块；
    重构单元,用于基于所述当前解码图像块的预测块,重建所述当前解码图像块的重建块。
  55. 根据权利要求54所述的解码器,其特征在于,
    所述熵解码单元具体用于从码流中解析出所述当前解码图像块的运动信息的索引；
    所述帧间预测单元，还用于基于所述当前解码图像块的运动信息的索引和所述当前解码图像块的候选运动信息列表确定所述当前解码图像块的运动信息。
  56. 根据权利要求54所述的解码器,其特征在于,
    所述熵解码单元具体用于：从码流中解析出所述当前解码图像块的运动信息的索引和运动矢量差值；
    所述帧间预测单元还用于：基于所述当前解码图像块的运动信息的索引和所述当前解码图像块的候选运动信息列表确定当前解码图像块的运动矢量预测值；以及，基于所述运动矢量预测值和所述运动矢量差值，得到所述当前解码图像块的运动矢量。
  57. 根据权利要求54所述的解码器,其特征在于,所述帧间预测单元还用于:
    若所述当前解码图像块的帧间预测模式为合并模式(merge mode),获取在所述合并模式下合并到的在先解码的图像块的运动信息,即为当前解码图像块的运动信息。
  58. 根据权利要求54-57任一项所述的解码器，其特征在于，若所述目标滤波器为通过所述如权利要求1-4任意权利要求所述的插值滤波器的训练方法得到的第二插值滤波器，则：
    所述目标插值滤波器的滤波器参数为预设的滤波器参数;或者,所述目标插值滤波器的滤波器参数为根据权利要求1-4所述的插值滤波器的训练方法得到的滤波器参数。
  59. 根据权利要求58所述的解码器,其特征在于,
    所述熵解码单元还用于:从码流中解析出用于当前解码的图像单元的目标插值滤波器的滤波器参数;
    所述解码器还包括配置单元,用于通过所述当前解码的图像单元的目标插值滤波器的滤波器参数配置所述目标插值滤波器。
  60. 根据权利要求58所述的解码器，其特征在于，所述熵解码单元还用于：从码流中解析出滤波器参数差值，所述滤波器参数差值为用于当前解码的图像单元的目标插值滤波器的滤波器参数相对于用于在先解码的图像单元的目标插值滤波器的滤波器参数的差值；
    所述解码器还包括：配置单元，用于根据所述在先解码的图像单元的目标插值滤波器的滤波器参数和所述滤波器参数差值得到所述当前解码的图像单元的目标插值滤波器的滤波器参数；以及，通过所述当前解码的图像单元的目标插值滤波器的滤波器参数配置所述目标插值滤波器。
  61. 根据权利要求59或60所述的解码器,其特征在于,所述图像单元包括图像帧、条带(slice)、视频序列子组、编码树单元(CTU)、编码单元(CU)或预测单元(PU)。
  62. 一种解码器,其特征在于,包括:
    熵解码单元,用于从码流中解析出当前解码图像块的用于指示所述当前解码图像块的帧间预测模式的信息;
    帧间预测单元,用于获取所述当前解码图像块的运动信息,其中,所述运动信息指向分数像素位置;若所述当前图像块的帧间预测模式为非目标帧间预测模式,基于所述当前解码图像块的运动信息对所述当前解码图像块执行预测过程,其中,所述预测过程包括:根据从所述码流中解析出的目标插值滤波器的指示信息所指示的目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块;
    重构单元,用于基于所述当前解码图像块的预测块,对所述当前解码图像块进行重建。
  63. 根据权利要求62所述的解码器,其特征在于,
    所述熵解码单元还用于：从码流中解析出所述当前解码图像块的运动信息的索引；
    所述帧间预测单元还用于：基于所述当前解码图像块的运动信息的索引和所述当前解码图像块的候选运动信息列表确定所述当前解码图像块的运动信息。
  64. 根据权利要求62所述的解码器,其特征在于,
    所述熵解码单元还用于：从码流中解析出所述当前解码图像块的运动信息的索引和运动矢量差值；
    所述帧间预测单元还用于：基于所述当前解码图像块的运动信息的索引和所述当前解码图像块的候选运动信息列表确定所述当前解码图像块的运动矢量预测值；以及，基于所述运动矢量预测值和所述运动矢量差值，得到所述当前解码图像块的运动矢量。
  65. 根据权利要求62所述的解码器,其特征在于,所述帧间预测单元还用于:若所述当前图像块的帧间预测模式是目标帧间预测模式,基于所述当前解码图像块的运动信息对所述当前解码图像块执行预测过程,其中,所述预测过程包括:确定用于所述当前解码图像块的目标插值滤波器;根据所述目标插值滤波器进行分像素插值,得到所述当前解码图像块的预测块。
  66. 根据权利要求65所述的解码器,其特征在于,所述目标帧间预测模式为合并模式,其中,
    所述获取所述当前解码图像块的运动信息，包括：获取在所述合并模式下合并到的在先解码的图像块的运动信息；
    所述确定用于所述当前解码图像块的目标插值滤波器，包括：确定所述在先解码的图像块在解码过程中使用的插值滤波器为所述用于所述当前解码图像块的目标插值滤波器；或，确定所述用于所述当前解码图像块的目标插值滤波器为从所述码流中解析出的目标插值滤波器的指示信息所指示的目标插值滤波器。
  67. 根据权利要求62-66任一项所述的解码器,其特征在于,若所述目标滤波器为通过所述如权利要求1-4任意权利要求所述的插值滤波器的训练方法得到的第二插值滤波器,则:
    所述目标插值滤波器的滤波器参数为预设的滤波器参数;或者,所述目标插值滤波器的滤波器参数为根据权利要求1-4所述的插值滤波器的训练方法得到的滤波器参数。
  68. 根据权利要求62-67任一项所述的解码器,其特征在于,若所述目标滤波器为通过所述如权利要求1-4任意权利要求所述的插值滤波器的训练方法得到的第二插值滤波器,所述熵解码单元还用于:从码流中解析出用于当前解码的图像单元的目标插值滤波器的滤波器参数;
    所述解码器还包括:配置单元,用于通过所述当前解码的图像单元的目标插值滤波器的滤波器参数配置所述目标插值滤波器。
  69. 根据权利要求62-67任一项所述的解码器，其特征在于，若所述目标滤波器为通过所述如权利要求1-4任意权利要求所述的插值滤波器的训练方法得到的第二插值滤波器，所述熵解码单元还用于：从码流中解析出滤波器参数差值，所述滤波器参数差值为用于当前解码的图像单元的目标插值滤波器的滤波器参数相对于用于在先解码的图像单元的目标插值滤波器的滤波器参数的差值；
    所述解码器还包括:配置单元,所述配置单元用于根据所述在先解码的图像单元的目标插值滤波器的滤波器参数和所述滤波器参数差值得到所述当前解码的图像单元的目标插值滤波器的滤波器参数;以及,通过所述当前解码的图像单元的目标插值滤波器的滤波器参数配置所述目标插值滤波器。
  70. 根据权利要求68或69所述的解码器,其特征在于,所述图像单元包括图像帧、条带(slice)、视频序列子组、编码树单元(CTU)、编码单元(CU)或预测单元(PU)。
  71. 一种插值滤波器的训练装置,其特征在于,包括存储器和处理器;所述存储器用于存储程序代码,所述处理器用于调用所述程序代码,执行如权利要求1-4任一项所述的插值滤波器训练方法。
  72. 一种编码装置，其特征在于，包括存储器和处理器；所述存储器用于存储程序代码；所述处理器用于调用所述程序代码，以执行如权利要求5-18任一项所述的视频图像编码方法。
  73. 一种解码装置，其特征在于，包括存储器和处理器；所述存储器用于存储程序代码；所述处理器用于调用所述程序代码，以执行如权利要求19-35任一项所述的视频图像解码方法。
  74. 一种计算机可读存储介质,其特征在于,包括程序代码,所述程序代码在计算机上运行时,使得所述计算机执行如权利要求1-4任一项所述的插值滤波器训练方法。
  75. 一种计算机可读存储介质，其特征在于，包括程序代码，所述程序代码在计算机上运行时，使得所述计算机执行如权利要求5-18任一项所述的视频图像编码方法。
  76. 一种计算机可读存储介质，其特征在于，包括程序代码，所述程序代码在计算机上运行时，使得所述计算机执行如权利要求19-35任一项所述的视频图像解码方法。
PCT/CN2019/108311 2018-10-06 2019-09-26 插值滤波器的训练方法、装置及视频图像编解码方法、编解码器 WO2020069655A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2021518927A JP7331095B2 (ja) 2018-10-06 2019-09-26 補間フィルタトレーニング方法及び装置、ビデオピクチャエンコーディング及びデコーディング方法、並びに、エンコーダ及びデコーダ
KR1020217013057A KR102592279B1 (ko) 2018-10-06 2019-09-26 보간 필터 훈련 방법 및 장치, 비디오 화상 인코딩 및 디코딩 방법, 및 인코더 및 디코더
EP19869096.8A EP3855741A4 (en) 2018-10-06 2019-09-26 TRAINING METHOD AND DEVICE FOR INTERPOLATION FILTER, VIDEO IMAGE CODING METHOD, VIDEO PICTURE DECODING METHOD, CODERS AND DECODERS
US17/221,184 US20210227243A1 (en) 2018-10-06 2021-04-02 Interpolation filter training method and apparatus, video picture encoding and decoding method, and encoder and decoder

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811166872.XA CN111010568B (zh) 2018-10-06 2018-10-06 插值滤波器的训练方法、装置及视频图像编解码方法、编解码器
CN201811166872.X 2018-10-06

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/221,184 Continuation US20210227243A1 (en) 2018-10-06 2021-04-02 Interpolation filter training method and apparatus, video picture encoding and decoding method, and encoder and decoder

Publications (1)

Publication Number Publication Date
WO2020069655A1 true WO2020069655A1 (zh) 2020-04-09

Family

ID=70054947

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/108311 WO2020069655A1 (zh) 2018-10-06 2019-09-26 插值滤波器的训练方法、装置及视频图像编解码方法、编解码器

Country Status (6)

Country Link
US (1) US20210227243A1 (zh)
EP (1) EP3855741A4 (zh)
JP (1) JP7331095B2 (zh)
KR (1) KR102592279B1 (zh)
CN (1) CN111010568B (zh)
WO (1) WO2020069655A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021249290A1 (zh) * 2020-06-10 2021-12-16 华为技术有限公司 环路滤波方法和装置

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3993414A1 (en) * 2020-11-03 2022-05-04 Ateme Method for image processing and apparatus for implementing the same
CN112911286B (zh) * 2021-01-29 2022-11-15 杭州电子科技大学 一种分像素插值滤波器的设计方法
US11750847B2 (en) 2021-04-19 2023-09-05 Tencent America LLC Quality-adaptive neural network-based loop filter with smooth quality control by meta-learning
WO2022246809A1 (zh) * 2021-05-28 2022-12-01 Oppo广东移动通信有限公司 编解码方法、码流、编码器、解码器以及存储介质
CN117597930A (zh) * 2021-08-20 2024-02-23 深圳传音控股股份有限公司 图像处理方法、移动终端及存储介质
US11924467B2 (en) * 2021-11-16 2024-03-05 Google Llc Mapping-aware coding tools for 360 degree videos
WO2023097095A1 (en) * 2021-11-29 2023-06-01 Beijing Dajia Internet Information Technology Co., Ltd. Invertible filtering for video coding
WO2023153891A1 (ko) * 2022-02-13 2023-08-17 엘지전자 주식회사 영상 인코딩/디코딩 방법 및 장치, 그리고 비트스트림을 저장한 기록 매체
CN117201782A (zh) * 2022-05-31 2023-12-08 华为技术有限公司 滤波方法、滤波模型训练方法及相关装置
CN115277331B (zh) * 2022-06-17 2023-09-12 哲库科技(北京)有限公司 信号补偿方法及装置、调制解调器、通信设备、存储介质
CN115278248B (zh) * 2022-09-28 2023-04-07 广东电网有限责任公司中山供电局 一种视频图像编码设备

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1666429A (zh) * 2002-07-09 2005-09-07 诺基亚有限公司 用于在视频编码中选择插值滤波器类型的方法和系统
CN101043621A (zh) * 2006-06-05 2007-09-26 华为技术有限公司 一种自适应插值处理方法及编解码模块
CN101616325A (zh) * 2009-07-27 2009-12-30 清华大学 一种视频编码中自适应插值滤波计算的方法
CN101790092A (zh) * 2010-03-15 2010-07-28 河海大学常州校区 基于图像块编码信息的智能滤波器设计方法
CN102084655A (zh) * 2008-07-07 2011-06-01 高通股份有限公司 通过过滤器选择进行的视频编码
CN102638678A (zh) * 2011-02-12 2012-08-15 乐金电子(中国)研究开发中心有限公司 视频编解码帧间图像预测方法及视频编解码器
CN103875246A (zh) * 2011-10-18 2014-06-18 日本电信电话株式会社 影像编码方法、装置、影像解码方法、装置及它们的程序
US20160191946A1 (en) * 2014-12-31 2016-06-30 Microsoft Technology Licensing, Llc Computationally efficient motion estimation

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040076333A1 (en) * 2002-10-22 2004-04-22 Huipin Zhang Adaptive interpolation filter system for motion compensated predictive video coding
US8516026B2 (en) * 2003-03-10 2013-08-20 Broadcom Corporation SIMD supporting filtering in a video decoding system
CN100551073C (zh) * 2006-12-05 2009-10-14 华为技术有限公司 编解码方法及装置、分像素插值处理方法及装置
EP2048886A1 (en) * 2007-10-11 2009-04-15 Panasonic Corporation Coding of adaptive interpolation filter coefficients
KR101648818B1 (ko) * 2008-06-12 2016-08-17 톰슨 라이센싱 움직임 보상 보간 및 참조 픽쳐 필터링에 대한 국부적 적응형 필터링을 위한 방법 및 장치
EP2136565A1 (en) * 2008-06-19 2009-12-23 Thomson Licensing Method for determining a filter for interpolating one or more pixels of a frame, method for encoding or reconstructing a frame and method for transmitting a frame
KR101647376B1 (ko) * 2009-03-30 2016-08-10 엘지전자 주식회사 비디오 신호 처리 방법 및 장치
JP5570363B2 (ja) * 2010-09-22 2014-08-13 Kddi株式会社 動画像符号化装置、動画像復号装置、動画像符号化方法、動画像復号方法、およびプログラム
JP5485851B2 (ja) * 2010-09-30 2014-05-07 日本電信電話株式会社 映像符号化方法,映像復号方法,映像符号化装置,映像復号装置およびそれらのプログラム
MX346561B (es) * 2012-07-02 2017-03-24 Samsung Electronics Co Ltd Metodo y aparato para predecir un vector de movimiento para la codificacion de video o decodificacion de video.
CN110177276B (zh) 2014-02-26 2021-08-24 杜比实验室特许公司 处理视频图像的空间区域的方法、存储介质及计算装置
JP2018530244A (ja) * 2015-09-25 2018-10-11 華為技術有限公司Huawei Technologies Co.,Ltd. 選択可能な補間フィルタを用いるビデオ動き補償のための装置および方法
US10341659B2 (en) * 2016-10-05 2019-07-02 Qualcomm Incorporated Systems and methods of switching interpolation filters
KR102595689B1 (ko) * 2017-09-29 2023-10-30 인텔렉추얼디스커버리 주식회사 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체
CN108012157B (zh) * 2017-11-27 2020-02-04 上海交通大学 用于视频编码分数像素插值的卷积神经网络的构建方法

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1666429A (zh) * 2002-07-09 2005-09-07 诺基亚有限公司 用于在视频编码中选择插值滤波器类型的方法和系统
CN101043621A (zh) * 2006-06-05 2007-09-26 华为技术有限公司 一种自适应插值处理方法及编解码模块
CN102084655A (zh) * 2008-07-07 2011-06-01 高通股份有限公司 通过过滤器选择进行的视频编码
CN101616325A (zh) * 2009-07-27 2009-12-30 清华大学 一种视频编码中自适应插值滤波计算的方法
CN101790092A (zh) * 2010-03-15 2010-07-28 河海大学常州校区 基于图像块编码信息的智能滤波器设计方法
CN102638678A (zh) * 2011-02-12 2012-08-15 乐金电子(中国)研究开发中心有限公司 视频编解码帧间图像预测方法及视频编解码器
CN103875246A (zh) * 2011-10-18 2014-06-18 日本电信电话株式会社 影像编码方法、装置、影像解码方法、装置及它们的程序
US20160191946A1 (en) * 2014-12-31 2016-06-30 Microsoft Technology Licensing, Llc Computationally efficient motion estimation


Also Published As

Publication number Publication date
EP3855741A1 (en) 2021-07-28
US20210227243A1 (en) 2021-07-22
JP2022514160A (ja) 2022-02-10
CN111010568A (zh) 2020-04-14
KR102592279B1 (ko) 2023-10-19
KR20210064370A (ko) 2021-06-02
CN111010568B (zh) 2023-09-29
JP7331095B2 (ja) 2023-08-22
EP3855741A4 (en) 2021-11-24

Similar Documents

Publication Publication Date Title
WO2020069655A1 (zh) 插值滤波器的训练方法、装置及视频图像编解码方法、编解码器
US11438578B2 (en) Video picture prediction method and apparatus
WO2020125595A1 (zh) 视频译码器及相应方法
WO2020103800A1 (zh) 视频解码方法和视频解码器
CN111919444A (zh) 色度块的预测方法和装置
CN112385234A (zh) 图像和视频译码的设备和方法
US20220094947A1 (en) Method for constructing mpm list, method for obtaining intra prediction mode of chroma block, and apparatus
WO2020114394A1 (zh) 视频编解码方法、视频编码器和视频解码器
WO2020038378A1 (zh) 色度块预测方法及装置
US20230370597A1 (en) Picture partitioning method and apparatus
WO2020135467A1 (zh) 帧间预测方法、装置以及相应的编码器和解码器
KR20220024877A (ko) 이중 예측 옵티컬 플로 계산 및 이중 예측 보정에서 블록 레벨 경계 샘플 그레이디언트 계산을 위한 정수 그리드 참조 샘플의 위치를 계산하는 방법
EP3890322A1 (en) Video coder-decoder and corresponding method
CN110876061B (zh) 色度块预测方法及装置
WO2020147514A1 (zh) 视频编码器、视频解码器及相应方法
WO2020135371A1 (zh) 一种标志位的上下文建模方法及装置
WO2020114509A1 (zh) 视频图像解码、编码方法及装置
WO2020114393A1 (zh) 变换方法、反变换方法以及视频编码器和视频解码器
WO2020114508A1 (zh) 视频编解码方法及装置
WO2020135615A1 (zh) 视频图像解码方法及装置
US11917203B2 (en) Non-separable transform method and device
WO2020143292A1 (zh) 一种帧间预测方法及装置
WO2020057506A1 (zh) 色度块预测方法及装置
WO2020119742A1 (zh) 块划分方法、视频编解码方法、视频编解码器

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19869096

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021518927

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20217013057

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2019869096

Country of ref document: EP

Effective date: 20210421