WO2020056798A1 - Method and apparatus for video encoding and decoding

Method and apparatus for video encoding and decoding

Info

Publication number
WO2020056798A1
Authority
WO
WIPO (PCT)
Prior art keywords
processed
target pixel
pixel point
image block
prediction
Prior art date
Application number
PCT/CN2018/109233
Other languages
English (en)
French (fr)
Inventor
徐巍炜
杨海涛
赵寅
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP19863868.6A priority Critical patent/EP3849197A4/en
Priority to AU2019343426A priority patent/AU2019343426B2/en
Priority to CA3106125A priority patent/CA3106125C/en
Priority to PCT/CN2019/107060 priority patent/WO2020057648A1/zh
Priority to JP2021507527A priority patent/JP7259009B2/ja
Priority to CA3200616A priority patent/CA3200616A1/en
Priority to KR1020217003101A priority patent/KR102616711B1/ko
Priority to CN202210435828.4A priority patent/CN115695782A/zh
Priority to CN201980011364.0A priority patent/CN112655218B/zh
Priority to CN202010846274.8A priority patent/CN112437299B/zh
Priority to KR1020237043657A priority patent/KR20230175341A/ko
Priority to SG11202100063YA priority patent/SG11202100063YA/en
Priority to MX2021002868A priority patent/MX2021002868A/es
Priority to BR112021001563-9A priority patent/BR112021001563A2/pt
Publication of WO2020056798A1 publication Critical patent/WO2020056798A1/zh
Priority to PH12021550058A priority patent/PH12021550058A1/en
Priority to US17/249,189 priority patent/US11647207B2/en
Priority to US18/150,742 priority patent/US20230164328A1/en
Priority to JP2023014532A priority patent/JP2023065381A/ja
Priority to AU2023222943A priority patent/AU2023222943A1/en


Classifications

    • H ELECTRICITY > H04 ELECTRIC COMMUNICATION TECHNIQUE > H04N PICTORIAL COMMUNICATION, e.g. TELEVISION > H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/52 Processing of motion vectors by encoding by predictive encoding
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/593 Predictive coding involving spatial prediction techniques
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/107 Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H04N19/13 Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159 Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/174 Adaptive coding characterised by the coding unit, the unit being an image region, the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N19/176 Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/182 Adaptive coding characterised by the coding unit, the unit being a pixel
    • H04N19/184 Adaptive coding characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
    • H04N19/70 Characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/82 Details of filtering operations specially adapted for video compression, involving filtering within a prediction loop
    • H04N19/86 Pre-processing or post-processing specially adapted for video compression, involving reduction of coding artifacts, e.g. of blockiness
    • H04N21/2343 Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display

Definitions

  • the present application relates to the technical field of video encoding and decoding, and in particular, to a method and an apparatus for inter prediction.
  • Digital video technology can be widely used in various devices, including digital TV, digital live broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), notebook computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital Media players, video game devices, video game consoles, cellular or satellite radio telephones, video teleconferencing devices, video streaming devices, and the like.
  • Digital video devices implement video coding technology, such as the video coding technology described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10 Advanced Video Coding (AVC), and ITU-T H.265 (also known as High Efficiency Video Coding (HEVC)), and extensions of these standards.
  • Digital video devices can implement these video decoding techniques to more efficiently send, receive, encode, decode, and / or store digital video information.
  • Video compression techniques perform spatial (intra-image) prediction and / or temporal (inter-image) prediction to reduce or remove redundant information inherent in a video sequence.
  • for block-based video coding, a video slice may be divided into video blocks, and a video block may also be referred to as a tree block, a coding unit (CU), or a coding node.
  • the spatial prediction of reference samples located in adjacent blocks in the same image is used to encode the video blocks in the intra decoded (I) slice of the image.
  • Video blocks in an inter-decoded (P or B) slice of an image may use spatial prediction of reference samples located in neighboring blocks in the same image or temporal prediction of reference samples located in other reference images.
  • An image may be referred to as a frame, and a reference image may be referred to as a reference frame.
  • a first aspect of the present application provides an inter prediction method, including: parsing a bitstream to obtain motion information of an image block to be processed; and performing motion compensation on the image block to be processed based on the motion information to obtain the The prediction block of the image block to be processed is described; the reconstruction value of one or more reference pixel points and the prediction value of the target pixel point in the image block to be processed are weighted to update the prediction value of the target pixel point.
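  • Read as a decoder-side recipe, the first aspect is: (1) parse the bitstream for the motion information of the block to be processed, (2) motion-compensate to obtain its prediction block, and (3) blend target pixels of that block with reconstructed neighbouring samples. The sketch below illustrates only steps (2) and (3) with integer motion vectors and a single uniform weight; the parsing step, sub-pixel interpolation, and the preset weights of the claims are omitted, so it is an assumption-laden illustration rather than the claimed procedure.

```python
import numpy as np

def motion_compensate(ref_frame, mv, x0, y0, w, h):
    """Step 2: fetch the prediction block displaced by an integer motion vector (dx, dy)."""
    dx, dy = mv
    return ref_frame[y0 + dy : y0 + dy + h, x0 + dx : x0 + dx + w].astype(np.int32)

def blend_with_neighbours(pred, left_col, top_row, w_pred=3, w_ref=1):
    """Step 3 (illustrative): weight the boundary pixels of the prediction block with the
    reconstructed column to its left and the reconstructed row above it."""
    out = pred.copy()
    denom = w_pred + w_ref
    out[:, 0] = (w_pred * out[:, 0] + w_ref * left_col + denom // 2) // denom
    out[0, :] = (w_pred * out[0, :] + w_ref * top_row + denom // 2) // denom
    return out

# toy usage: an 8x8 block at position (16, 16) of a random reference frame
ref = np.random.randint(0, 256, (64, 64))
left = np.random.randint(0, 256, 8)   # reconstructed pixels left of the block
top = np.random.randint(0, 256, 8)    # reconstructed pixels above the block
pred = motion_compensate(ref, (2, -1), 16, 16, 8, 8)
updated = blend_with_neighbours(pred, left, top)
```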
  • the reference pixel point and the target pixel point have a preset spatial relationship in space.
  • the one or more reference pixel points include a reconstructed pixel point that has the same abscissa as the target pixel point and a preset ordinate difference from it, or a reconstructed pixel point that has the same ordinate as the target pixel point and a preset abscissa difference from it.
  • the predicted value of the target pixel is updated according to the following formula:
  • the coordinates of the target pixel point are (xP, yP), the coordinates of the upper left pixel point in the image block to be processed are (xN, yN), predP(xP, yP) is the prediction value of the target pixel point before the update, predQ(xP, yP) is the updated prediction value of the target pixel point, recon(xN-M1, yP) and recon(xP, yN-M2) are the reconstruction values of the reference pixel points located at coordinate positions (xN-M1, yP) and (xP, yN-M2), w1, w2, w3, w4, w5, and w6 are preset constants, and M1 and M2 are preset positive integers.
  • the predicted value of the target pixel is updated according to the following formula:
  • the coordinates of the target pixel point are (xP, yP), the coordinates of the upper left pixel point in the image block to be processed are (xN, yN), predP(xP, yP) is the prediction value of the target pixel point before the update, predQ(xP, yP) is the updated prediction value of the target pixel point, recon(xN-M1, yP), recon(xN-M2, yP), recon(xP, yN-M3), and recon(xP, yN-M4) are the reconstruction values of the reference pixel points located at coordinate positions (xN-M1, yP), (xN-M2, yP), (xP, yN-M3), and (xP, yN-M4), w1, w2, w3, w4, w5, w6, w7, w8, w9, w10, and w11 are preset constants, and M1, M2, M3, and M4 are preset positive integers.
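  • A compact sketch of the two-reference variant above: blend predP(xP, yP) with the left reference recon(xN-M1, yP) (same ordinate) and the above reference recon(xP, yN-M2) (same abscissa). Because the exact formula with the preset constants w1 through w6 is not reproduced in this extract, the normalized weighted sum below is an assumption.

```python
def update_pixel(pred_p, recon, xP, yP, xN, yN, M1=1, M2=1, w1=2, w2=1, w3=1):
    """Assumed normalized blend of a predicted pixel with two reconstructed references."""
    left_ref = recon[yP][xN - M1]     # reconstructed pixel with the same ordinate
    top_ref = recon[yN - M2][xP]      # reconstructed pixel with the same abscissa
    total = w1 + w2 + w3
    return (w1 * pred_p + w2 * left_ref + w3 * top_ref + total // 2) // total
```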
  • the one or more reference pixel points include one or more of the following pixel points: a reconstructed pixel point that has the same abscissa as the target pixel point and is adjacent to the upper edge of the image block to be processed; or a reconstructed pixel point that has the same ordinate as the target pixel point and is adjacent to the left edge of the image block to be processed; or the reconstructed pixel point in the upper right corner of the image block to be processed; or the reconstructed pixel point in the lower left corner of the image block to be processed; or the reconstructed pixel point in the upper left corner of the image block to be processed.
  • predH(xP, yP) = ((nTbW - 1 - xP) * p(-1, yP) + (xP + 1) * p(nTbW, -1)) << Log2(nTbH), where predP(xP, yP) is the prediction value of the target pixel point before the update, predQ(xP, yP) is the updated prediction value of the target pixel point, w1 and w2 are preset constants, and nTbW and nTbH are the width and height of the image block to be processed.
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = (w1 * predP(xP, yP)
  • predV(xP, yP) = ((nTbH - 1 - yP) * p(xP, -1) + (yP + 1) * p(-1, nTbH) + nTbH / 2) >> Log2(nTbH),
  • predH(xP, yP) = ((nTbW - 1 - xP) * p(-1, yP) + (xP + 1) * p(nTbW, -1) + nTbW / 2) >> Log2(nTbW), where
  • the coordinates of the target pixel point are (xP, yP)
  • the coordinates of the upper left pixel point in the image block to be processed are (0, 0)
  • predP(xP, yP) is the prediction value of the target pixel point before the update, and predQ(xP, yP) is the updated prediction value of the target pixel point,
  • p(xP, -1), p(-1, nTbH), p(-1, yP), and p(nTbW, -1) are the reconstruction values of the reference pixel points located at coordinate positions (xP, -1), (-1, nTbH), (-1, yP), and (nTbW, -1), and w1, w2, and w3 are preset constants,
  • nTbW and nTbH are the width and height of the image block to be processed.
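  • The predV and predH terms above are vertical and horizontal planar-style interpolations of the boundary reference samples. The sketch below follows those two formulas literally; since the predQ line is truncated in this extract, the final combination is written as an assumed normalized weighted sum of predP, predV, and predH.

```python
def planar_style_update(predP, p, xP, yP, nTbW, nTbH, w1=2, w2=1, w3=1):
    """p(x, y) returns the reconstructed reference sample at (x, y); the top-left pixel
    of the block is at (0, 0), so row -1 and column -1 hold the neighbouring samples."""
    log2W = nTbW.bit_length() - 1   # Log2(nTbW) for power-of-two block widths
    log2H = nTbH.bit_length() - 1   # Log2(nTbH)
    predV = ((nTbH - 1 - yP) * p(xP, -1) + (yP + 1) * p(-1, nTbH) + nTbH // 2) >> log2H
    predH = ((nTbW - 1 - xP) * p(-1, yP) + (xP + 1) * p(nTbW, -1) + nTbW // 2) >> log2W
    total = w1 + w2 + w3
    return (w1 * predP + w2 * predV + w3 * predH + total // 2) // total   # assumed blend
```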
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = (((w1 * predP(xP, yP)) << (Log2(nTbW) + Log2(nTbH) + 1))
  • predV(xP, yP) = ((nTbH - 1 - yP) * p(xP, -1) + (yP + 1) * p(-1, nTbH)) << Log2(nTbW),
  • predH(xP, yP) = ((nTbW - 1 - xP) * p(-1, yP) + (xP + 1) * p(nTbW, -1)) << Log2(nTbH), where
  • predP (xP, yP) is the predicted value of the target pixel point before update
  • predQ (xP, yP) is the updated predicted value of the target pixel
  • w1 and w2 are preset constants, and nTbW and nTbH are the width and height of the image block to be processed.
  • the sum of w1 and w2 is an n-th power of 2, where n is a non-negative integer.
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = clip1Cmp((refL(xP, yP) * wL(xP) + refT(xP, yP) * wT(yP) - p(-1, -1) * wTL(xP, yP) + (64 - wL(xP) - wT(yP) + wTL(xP, yP)) * predP(xP, yP) + 32) >> 6),
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = clip1Cmp((refL(xP, yP) * wL(xP) + refT(xP, yP) * wT(yP) + (64 - wL(xP) - wT(yP)) * predP(xP, yP) + 32) >> 6),
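  • The two clip1Cmp formulas above are position-dependent combinations in the style of PDPC: refL and refT are the reconstructed samples to the left of and above the target pixel, and wL, wT, and wTL are weights that decay with the distance from the block boundary. The sketch below assumes PDPC-style weight decay (32 >> ((2 * coordinate) >> scale)) and an 8-bit clip; those weight definitions are assumptions, not text quoted from this publication.

```python
def pdpc_like_update(predP, refL, refT, ref_topleft, xP, yP, nTbW, nTbH, bitdepth=8):
    """Position-dependent weighted update of one predicted sample (assumed weights)."""
    scale = ((nTbW.bit_length() - 1) + (nTbH.bit_length() - 1) - 2) >> 2
    wT = 32 >> min(31, (yP << 1) >> scale)   # decays with distance from the top edge
    wL = 32 >> min(31, (xP << 1) >> scale)   # decays with distance from the left edge
    wTL = (wL >> 4) + (wT >> 4)
    val = (refL * wL + refT * wT - ref_topleft * wTL
           + (64 - wL - wT + wTL) * predP + 32) >> 6
    return max(0, min((1 << bitdepth) - 1, val))   # clip1Cmp: clip to the sample range
```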
  • before the weighted calculation is performed on the reconstruction value of the one or more reference pixel points and the prediction value of the target pixel point in the image block to be processed, the method further includes: when the reference pixel point is located above the image block to be processed, performing weighted calculation on the reconstruction value of the reference pixel point and the reconstruction values of its left and right adjacent pixel points; when the reference pixel point is located to the left of the image block to be processed, performing weighted calculation on the reconstruction value of the reference pixel point and the reconstruction values of its upper and lower adjacent pixel points; and updating the reconstruction value of the reference pixel point with the result of the weighted calculation.
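  • The pre-filtering step above smooths each reference sample with its two immediate neighbours before the sample is used in the weighted update. The claim only states "weighted calculation", so the [1, 2, 1] / 4 kernel in this sketch is an assumed choice.

```python
def smooth_reference_samples(samples):
    """Filter a row (or column) of reference pixels with their two neighbours.
    For the row above the block the neighbours are the left/right samples; for the
    column to the left of the block they are the samples above/below."""
    out = list(samples)
    for i in range(1, len(samples) - 1):
        out[i] = (samples[i - 1] + 2 * samples[i] + samples[i + 1] + 2) >> 2
    return out
```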
  • before the motion compensation is performed on the image block to be processed based on the motion information, the method further includes: initially updating the motion information by using a first preset algorithm.
  • the performing motion compensation on the image block to be processed based on the motion information includes: performing motion compensation on the image block to be processed based on the initially updated motion information.
  • the method further includes: pre-updating the prediction block by using a second preset algorithm; correspondingly, the performing weighted calculation on the reconstruction value of the one or more reference pixel points and the prediction value of the target pixel point in the image block to be processed includes: performing weighted calculation on the reconstruction value of the one or more reference pixel points and the pre-updated prediction value of the target pixel point in the image block to be processed.
  • after the reconstruction value of the one or more reference pixel points and the prediction value of the target pixel point in the image block to be processed are weighted to update the prediction value of the target pixel point, the method further includes: updating the prediction value of the target pixel point by using a second preset algorithm.
  • before the weighted calculation is performed on the reconstruction value of the one or more reference pixel points and the prediction value of the target pixel point in the image block to be processed, the method further includes: parsing the code stream to obtain a prediction mode of the image block to be processed; and determining that the prediction mode is a merge mode or a skip mode.
  • before the weighted calculation is performed on the reconstruction value of the one or more reference pixel points and the prediction value of the target pixel point in the image block to be processed, the method further includes: parsing the code stream to obtain update discrimination identifier information of the image block to be processed; and determining that the update discrimination identifier information indicates that the prediction block of the image block to be processed is to be updated.
  • before the weighted calculation is performed on the reconstruction value of the one or more reference pixel points and the prediction value of the target pixel point in the image block to be processed, the method further includes: obtaining preset update discrimination identifier information of the image block to be processed; and determining that the update discrimination identifier information indicates that the prediction block of the image block to be processed is to be updated.
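  • The three implementations above gate the pixel update: it is applied only when the prediction mode of the block is merge or skip, and only when an update flag, either parsed from the code stream or preset, indicates so. A compact sketch of that decision, with assumed names:

```python
def should_update_prediction(prediction_mode, update_flag):
    """Return True when the weighted pixel update is to be applied (illustrative API)."""
    return prediction_mode in ("merge", "skip") and bool(update_flag)

# mode and flag would be parsed from the code stream earlier in the decoder
if should_update_prediction("merge", update_flag=1):
    pass  # perform the weighted calculation on the prediction block
```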
  • a second aspect of the present application provides an inter prediction apparatus, including: an analysis module configured to parse a code stream to obtain motion information of an image block to be processed; a compensation module configured to perform motion compensation on the image block to be processed based on the motion information to obtain a prediction block of the image block to be processed; and a calculation module configured to perform weighted calculation on the reconstruction value of one or more reference pixel points and the prediction value of a target pixel point in the image block to be processed to update the prediction value of the target pixel point.
  • the one or more reference pixel points include a reconstructed pixel point that has the same abscissa as the target pixel point and a preset ordinate difference from it, or a reconstructed pixel point that has the same ordinate as the target pixel point and a preset abscissa difference from it.
  • the predicted value of the target pixel is updated according to the following formula:
  • the coordinates of the target pixel point are (xP, yP), the coordinates of the upper left pixel point in the image block to be processed are (xN, yN), predP(xP, yP) is the prediction value of the target pixel point before the update, predQ(xP, yP) is the updated prediction value of the target pixel point, recon(xN-M1, yP) and recon(xP, yN-M2) are the reconstruction values of the reference pixel points located at coordinate positions (xN-M1, yP) and (xP, yN-M2), w1, w2, w3, w4, w5, and w6 are preset constants, and M1 and M2 are preset positive integers.
  • the predicted value of the target pixel is updated according to the following formula:
  • the coordinates of the target pixel point are (xP, yP), the coordinates of the upper left pixel point in the image block to be processed are (xN, yN), predP(xP, yP) is the prediction value of the target pixel point before the update, predQ(xP, yP) is the updated prediction value of the target pixel point, recon(xN-M1, yP), recon(xN-M2, yP), recon(xP, yN-M3), and recon(xP, yN-M4) are the reconstruction values of the reference pixel points located at coordinate positions (xN-M1, yP), (xN-M2, yP), (xP, yN-M3), and (xP, yN-M4), w1, w2, w3, w4, w5, w6, w7, w8, w9, w10, and w11 are preset constants, and M1, M2, M3, and M4 are preset positive integers.
  • the one or more reference pixel points include one or more of the following pixel points: a reconstructed pixel point that has the same abscissa as the target pixel point and is adjacent to the upper edge of the image block to be processed; or a reconstructed pixel point that has the same ordinate as the target pixel point and is adjacent to the left edge of the image block to be processed; or the reconstructed pixel point in the upper right corner of the image block to be processed; or the reconstructed pixel point in the lower left corner of the image block to be processed; or the reconstructed pixel point in the upper left corner of the image block to be processed.
  • predH(xP, yP) = ((nTbW - 1 - xP) * p(-1, yP) + (xP + 1) * p(nTbW, -1)) << Log2(nTbH), where predP(xP, yP) is the prediction value of the target pixel point before the update, predQ(xP, yP) is the updated prediction value of the target pixel point, w1 and w2 are preset constants, and nTbW and nTbH are the width and height of the image block to be processed.
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = (w1 * predP(xP, yP)
  • predV(xP, yP) = ((nTbH - 1 - yP) * p(xP, -1) + (yP + 1) * p(-1, nTbH) + nTbH / 2) >> Log2(nTbH),
  • predH(xP, yP) = ((nTbW - 1 - xP) * p(-1, yP) + (xP + 1) * p(nTbW, -1) + nTbW / 2) >> Log2(nTbW), where
  • the coordinates of the target pixel point are (xP, yP)
  • the coordinates of the upper left pixel point in the image block to be processed are (0, 0)
  • predP(xP, yP) is the prediction value of the target pixel point before the update, and predQ(xP, yP) is the updated prediction value of the target pixel point,
  • p(xP, -1), p(-1, nTbH), p(-1, yP), and p(nTbW, -1) are the reconstruction values of the reference pixel points located at coordinate positions (xP, -1), (-1, nTbH), (-1, yP), and (nTbW, -1), and w1, w2, and w3 are preset constants,
  • nTbW and nTbH are the width and height of the image block to be processed.
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = (((w1 * predP(xP, yP)) << (Log2(nTbW) + Log2(nTbH) + 1))
  • predV(xP, yP) = ((nTbH - 1 - yP) * p(xP, -1) + (yP + 1) * p(-1, nTbH)) << Log2(nTbW),
  • predH(xP, yP) = ((nTbW - 1 - xP) * p(-1, yP) + (xP + 1) * p(nTbW, -1)) << Log2(nTbH), where
  • predP (xP, yP) is the predicted value of the target pixel point before update
  • predQ (xP, yP) is the updated predicted value of the target pixel
  • w1 and w2 are preset constants, and nTbW and nTbH are the width and height of the image block to be processed.
  • the sum of w1 and w2 is an n-th power of 2, where n is a non-negative integer.
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = clip1Cmp((refL(xP, yP) * wL(xP) + refT(xP, yP) * wT(yP) - p(-1, -1) * wTL(xP, yP) + (64 - wL(xP) - wT(yP) + wTL(xP, yP)) * predP(xP, yP) + 32) >> 6),
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = clip1Cmp((refL(xP, yP) * wL(xP) + refT(xP, yP) * wT(yP) + (64 - wL(xP) - wT(yP)) * predP(xP, yP) + 32) >> 6),
  • the calculation module is further configured to: when the reference pixel point is located above the image block to be processed, perform weighted calculation on the reconstruction value of the reference pixel point and the reconstruction values of its left and right adjacent pixel points; when the reference pixel point is located to the left of the image block to be processed, perform weighted calculation on the reconstruction value of the reference pixel point and the reconstruction values of its upper and lower adjacent pixel points; and update the reconstruction value of the reference pixel point with the result of the weighted calculation.
  • the calculation module is further configured to: initially update the motion information by using a first preset algorithm; correspondingly, the compensation module is specifically configured to perform motion compensation on the image block to be processed based on the initially updated motion information.
  • the calculation module is further configured to: pre-update the prediction block by using a second preset algorithm; correspondingly, the calculation module is specifically configured to perform weighted calculation on the reconstruction values of the one or more reference pixel points and the pre-updated prediction value of the target pixel point in the image block to be processed.
  • the calculation module is further configured to update the predicted value of the target pixel point by using a second preset algorithm.
  • the analysis module is further configured to: parse the code stream to obtain a prediction mode of the image block to be processed; and determine that the prediction mode is a merge mode or a skip mode.
  • the analysis module is further configured to: parse the code stream to obtain update discrimination identifier information of the image block to be processed; and determine that the update discrimination identifier information indicates that the prediction block of the image block to be processed is to be updated.
  • the calculation module is further configured to: obtain preset update discrimination identifier information of the image block to be processed; and determine that the update discrimination identifier information indicates that the prediction block of the image block to be processed is to be updated.
  • a prediction device is provided, including a processor and a memory coupled to the processor, where the processor is configured to execute the method described in the first aspect.
  • a computer-readable storage medium is provided, which stores instructions; when the instructions are run on a computer, the computer is caused to execute the method described in the first aspect above.
  • a computer program product containing instructions is provided, and when the instructions are run on a computer, the computer is caused to execute the method described in the first aspect above.
  • FIG. 1 is an exemplary block diagram of a video encoding and decoding system that can be configured for use in the embodiments of the present application;
  • FIG. 2 is an exemplary system block diagram of a video encoder that can be configured for use in an embodiment of the present application
  • FIG. 3 is an exemplary system block diagram of a video decoder that can be configured for use in embodiments of the present application
  • FIG. 4 is a block diagram of an exemplary inter prediction module that can be configured for use in an embodiment of the present application
  • FIG. 5 is an exemplary implementation flowchart of a merge prediction mode
  • FIG. 6 is an exemplary implementation flowchart of an advanced motion vector prediction mode
  • FIG. 7 is an exemplary implementation flowchart of a motion compensation performed by a video decoder that can be configured for an embodiment of the present application
  • FIG. 8 is a schematic diagram of an exemplary coding unit and adjacent position image blocks associated with the coding unit
  • FIG. 9 is an exemplary implementation flowchart of constructing a candidate prediction motion vector list
  • FIG. 10 is an exemplary implementation diagram of adding a combined candidate motion vector to a merge mode candidate prediction motion vector list
  • FIG. 11 is an exemplary implementation diagram of adding a scaled candidate motion vector to a merge mode candidate prediction motion vector list
  • FIG. 12 is an exemplary implementation diagram of adding a zero motion vector to a merge mode candidate prediction motion vector list
  • FIG. 13 is a schematic flowchart of an inter prediction method according to an embodiment of the present application.
  • FIG. 14 is an exemplary schematic diagram of a candidate motion vector source
  • FIG. 15 is a schematic block diagram of an inter prediction apparatus according to an embodiment of the present application.
  • FIG. 16 is a schematic block diagram of an inter prediction device according to an embodiment of the present application.
  • FIG. 1 is a block diagram of a video decoding system 1 according to an example described in the embodiment of the present application.
  • video coder generally refers to both video encoders and video decoders.
  • “video coding” or “coding” may generally refer to video encoding or video decoding.
  • the video encoder 100 and the video decoder 200 of the video decoding system 1 are configured to predict the motion information of a current coded image block or its sub-blocks according to various method examples described in any of a variety of new inter prediction modes proposed in the present application, so that the predicted motion vector is as close as possible to the motion vector obtained using a motion estimation method, and therefore no motion vector difference needs to be transmitted during encoding, thereby further improving the encoding and decoding performance.
  • the video decoding system 1 includes a source device 10 and a destination device 20.
  • the source device 10 generates encoded video data. Therefore, the source device 10 may be referred to as a video encoding device.
  • the destination device 20 may decode the encoded video data generated by the source device 10. Therefore, the destination device 20 may be referred to as a video decoding device.
  • Various implementations of the source device 10, the destination device 20, or both may include one or more processors and a memory coupled to the one or more processors.
  • the memory may include, but is not limited to, RAM, ROM, EEPROM, flash memory, or any other media that can be used to store the desired program code in the form of instructions or data structures accessible by a computer, as described herein.
  • the source device 10 and the destination device 20 may include a variety of devices, including desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called “smart” phones, televisions, cameras, display devices, digital media players, video game consoles, on-board computers, or the like.
  • the destination device 20 may receive the encoded video data from the source device 10 via the link 30.
  • the link 30 may include one or more media or devices capable of moving the encoded video data from the source device 10 to the destination device 20.
  • the link 30 may include one or more communication media enabling the source device 10 to directly transmit the encoded video data to the destination device 20 in real time.
  • the source device 10 may modulate the encoded video data according to a communication standard, such as a wireless communication protocol, and may transmit the modulated video data to the destination device 20.
  • the one or more communication media may include wireless and / or wired communication media, such as a radio frequency (RF) spectrum or one or more physical transmission lines.
  • the one or more communication media may form part of a packet-based network, such as a local area network, a wide area network, or a global network (eg, the Internet).
  • the one or more communication media may include a router, a switch, a base station, or other devices that facilitate communication from the source device 10 to the destination device 20.
  • the encoded data may be output from the output interface 140 to the storage device 40.
  • the encoded data can be accessed from the storage device 40 through the input interface 240.
  • the storage device 40 may include any of a variety of distributed or locally accessed data storage media, such as a hard drive, Blu-ray disc, DVD, CD-ROM, flash memory, volatile or non-volatile memory, Or any other suitable digital storage medium for storing encoded video data.
  • the storage device 40 may correspond to a file server or another intermediate storage device that may hold the encoded video produced by the source device 10.
  • the destination device 20 may access the stored video data from the storage device 40 via streaming or download.
  • the file server may be any type of server capable of storing encoded video data and transmitting the encoded video data to the destination device 20.
  • Example file servers include a web server (eg, for a website), an FTP server, a network attached storage (NAS) device, or a local disk drive.
  • the destination device 20 can access the encoded video data through any standard data connection, including an Internet connection.
  • This may include a wireless channel (e.g., Wi-Fi connection), a wired connection (e.g., DSL, cable modem, etc.), or a combination of both suitable for accessing encoded video data stored on a file server.
  • the transmission of the encoded video data from the storage device 40 may be a streaming transmission, a download transmission, or a combination of the two.
  • the motion vector prediction technology of the present application can be applied to video codecs to support a variety of multimedia applications, such as over-the-air television broadcasting, cable television transmission, satellite television transmission, streaming video transmission (e.g., via the Internet), for storage in data storage Encoding of video data on media, decoding of video data stored on data storage media, or other applications.
  • the video coding system 1 may be used to support one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and / or video telephony.
  • the video decoding system 1 illustrated in FIG. 1 is only an example, and the techniques of the present application can be applied to a video decoding setting (for example, video encoding or video decoding) that does not necessarily include any data communication between the encoding device and the decoding device .
  • data is retrieved from local storage, streamed over a network, and so on.
  • the video encoding device may encode the data and store the data to a memory, and / or the video decoding device may retrieve the data from the memory and decode the data.
  • encoding and decoding are performed by devices that do not communicate with each other, but only encode data to and / or retrieve data from memory and decode data.
  • the source device 10 includes a video source 120, a video encoder 100, and an output interface 140.
  • the output interface 140 may include a modulator/demodulator (modem) and/or a transmitter.
  • Video source 120 may include a video capture device (e.g., a video camera), a video archive containing previously captured video data, a video feed interface for receiving video data from a video content provider, and/or a computer graphics system for generating video data, or a combination of these video data sources.
  • the video encoder 100 may encode video data from the video source 120.
  • the source device 10 transmits the encoded video data directly to the destination device 20 via the output interface 140.
  • the encoded video data may also be stored on the storage device 40 for later access by the destination device 20 for decoding and / or playback.
  • the destination device 20 includes an input interface 240, a video decoder 200, and a display device 220.
  • the input interface 240 includes a receiver and / or a modem.
  • the input interface 240 may receive the encoded video data via the link 30 and / or from the storage device 40.
  • the display device 220 may be integrated with the destination device 20 or may be external to the destination device 20. Generally, the display device 220 displays decoded video data.
  • the display device 220 may include various display devices, such as a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or other types of display devices.
  • video encoder 100 and video decoder 200 may each be integrated with an audio encoder and decoder, and may include an appropriate multiplexer-demultiplexer unit Or other hardware and software to handle encoding of both audio and video in a common or separate data stream.
  • the MUX-DEMUX unit may conform to the ITU-T H.223 multiplexer protocol, or other protocols such as the User Datagram Protocol (UDP).
  • Video encoder 100 and video decoder 200 may each be implemented as any of a variety of circuits, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, hardware, or any combination thereof. If the present application is implemented partially in software, the device may store instructions for the software in a suitable non-volatile computer-readable storage medium, and may use one or more processors to execute the instructions in hardware, thereby implementing the technology of the present application. Any of the foregoing (including hardware, software, a combination of hardware and software, etc.) may be considered as one or more processors. Each of video encoder 100 and video decoder 200 may be included in one or more encoders or decoders, and either may be integrated as part of a combined encoder/decoder (codec) in a corresponding device.
  • This application may generally refer to video encoder 100 as “signaling” or “transmitting” certain information to another device, such as video decoder 200.
  • the terms “signaling” or “transmitting” may generally refer to the transfer of syntax elements and/or other data used to decode the compressed video data. This transfer can occur in real time or almost in real time. Alternatively, this communication may occur over a period of time, such as when a syntax element is stored in a coded stream to a computer-readable storage medium at the time of encoding, and the decoding device may then retrieve the syntax element at any time after the syntax element has been stored on this medium.
  • JCT-VC has developed the H.265 (HEVC) standard.
  • the HEVC standardization is based on an evolution model of a video decoding device called a HEVC test model (HM).
  • the latest standard document of H.265 can be obtained from http://www.itu.int/rec/T-REC-H.265.
  • the latest version of the standard document is H.265 (12/16), and the standard document is incorporated herein by reference in its entirety.
  • HM assumes that video decoding devices have several additional capabilities over the existing algorithms of ITU-T H.264/AVC. For example, H.264 provides 9 intra-prediction encoding modes, while HM provides up to 35 intra-prediction encoding modes.
  • The H.266 test model is an evolution model of the video decoding device.
  • the algorithm description of H.266 can be obtained from http://phenix.int-evry.fr/jvet. The latest algorithm description is included in JVET-F1001-v2.
  • the algorithm description document is incorporated herein by reference in its entirety.
  • the reference software for the JEM test model can be obtained from https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/, which is also incorporated herein by reference in its entirety.
  • HM can divide a video frame or image into a sequence of tree blocks or maximum coding units (LCUs) containing both luminance and chrominance samples.
  • LCUs are also known as CTUs.
  • the tree block has a similar purpose as the macro block of the H.264 standard.
  • a slice contains several consecutive tree blocks in decoding order. A video frame or image can be split into one or more slices.
  • Each tree block can be split into coding units according to a quadtree. For example, a tree block that is a root node of a quad tree may be split into four child nodes, and each child node may be a parent node and split into another four child nodes.
  • the final indivisible child nodes that are leaf nodes of the quadtree include decoding nodes, such as decoded video blocks.
  • the syntax data associated with the decoded codestream can define the maximum number of times a tree block can be split, and can also define the minimum size of a decoding node.
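  • The recursive quadtree split described above can be pictured with the small sketch below; the maximum split depth and minimum node size stand in for the syntax data mentioned in the previous paragraph, and the split decision is left to a caller-supplied predicate.

```python
def split_quadtree(x, y, size, depth, max_depth, min_size, should_split):
    """Recursively split a tree block; the leaves are the decoding nodes (CUs)."""
    if depth == max_depth or size <= min_size or not should_split(x, y, size):
        return [(x, y, size)]                       # leaf: a decoding node
    half, leaves = size // 2, []
    for dy in (0, half):
        for dx in (0, half):
            leaves += split_quadtree(x + dx, y + dy, half, depth + 1,
                                     max_depth, min_size, should_split)
    return leaves

# example: split a 64x64 tree block whenever a node is larger than 16x16
cus = split_quadtree(0, 0, 64, 0, max_depth=4, min_size=8,
                     should_split=lambda x, y, s: s > 16)   # yields 16 CUs of 16x16
```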
  • the coding unit includes a decoding node and a prediction unit (PU) and a transform unit (TU) associated with the decoding node.
  • the size of the CU corresponds to the size of the decoding node and the shape must be square.
  • the size of the CU can range from 8×8 pixels up to a maximum of 64×64 pixels or a larger tree block size.
  • Each CU may contain one or more PUs and one or more TUs.
  • the syntax data associated with a CU may describe a case where a CU is partitioned into one or more PUs.
  • the partitioning mode may be different between cases where the CU is skipped or is encoded in direct mode, intra prediction mode, or inter prediction mode.
  • the PU can be divided into non-square shapes.
  • the syntax data associated with a CU may also describe a case where a CU is partitioned into one or more TUs according to a quadtree.
  • the shape of the TU can be square or non-square.
  • the HEVC standard allows transformation based on the TU, which can be different for different CUs.
  • the TU is usually sized based on the size of the PUs within a given CU defined for the partitioned LCU, but this may not always be the case.
  • the size of the TU is usually the same as or smaller than the PU.
  • a quad-tree structure called "residual quad-tree" (RQT) can be used to subdivide the residual samples corresponding to the CU into smaller units.
  • the leaf node of RQT may be called TU.
  • the pixel difference values associated with the TU may be transformed to produce a transformation coefficient, which may be quantized.
  • the PU contains data related to the prediction process.
  • the PU may include data describing the intra-prediction mode of the PU.
  • the PU may include data defining a motion vector of the PU.
  • the data defining the motion vector of the PU may describe the horizontal component of the motion vector, the vertical component of the motion vector, the resolution of the motion vector (e.g., quarter-pixel accuracy or eighth-pixel accuracy), the reference image to which the motion vector points, and/or the reference image list of the motion vector (e.g., list 0, list 1, or list C).
  • TU uses transform and quantization processes.
  • a given CU with one or more PUs may also contain one or more TUs.
  • video encoder 100 may calculate a residual value corresponding to the PU.
  • the residual values include pixel differences that can be transformed into transform coefficients, quantized, and scanned using TU to generate serialized transform coefficients for entropy decoding.
  • This application generally uses the term "video block" to refer to the decoding node of a CU.
  • the term “video block” may also be used in this application to refer to a tree block including a decoding node and a PU and a TU, such as an LCU or a CU.
  • a video sequence usually contains a series of video frames or images.
  • a group of pictures (GOP) exemplarily includes a series of one or more video pictures.
  • the GOP may include syntax data in the header information of the GOP, the header information of one or more of the pictures, or elsewhere, and the syntax data describes the number of pictures included in the GOP.
  • Each slice of the image may contain slice syntax data describing the coding mode of the corresponding image.
  • Video encoder 100 typically operates on video blocks within individual video slices to encode video data.
  • a video block may correspond to a decoding node within a CU.
  • Video blocks may have fixed or varying sizes, and may differ in size according to a specified decoding standard.
  • HM supports prediction with various PU sizes. Assuming that the size of a specific CU is 2N×2N, HM supports intra prediction with PU sizes of 2N×2N or N×N, and inter prediction with symmetric PU sizes of 2N×2N, 2N×N, N×2N, or N×N. HM also supports asymmetric partitioning for inter prediction with PU sizes of 2N×nU, 2N×nD, nL×2N, and nR×2N. In asymmetric partitioning, one direction of the CU is not partitioned, and the other direction is partitioned into 25% and 75%.
  • “2N×nU” refers to a horizontally partitioned 2N×2N CU, with a 2N×0.5N PU at the top and a 2N×1.5N PU at the bottom.
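  • The symmetric and asymmetric partition modes listed above differ only in how the 2N×2N CU is cut; the following sketch simply enumerates the resulting PU sizes (an illustration, not an HM data structure).

```python
def pu_sizes(mode, n):
    """Return the (width, height) of the PUs of a 2Nx2N CU for one HM partition mode."""
    two_n = 2 * n
    return {
        "2Nx2N": [(two_n, two_n)],
        "2NxN":  [(two_n, n), (two_n, n)],
        "Nx2N":  [(n, two_n), (n, two_n)],
        "NxN":   [(n, n)] * 4,
        "2NxnU": [(two_n, n // 2), (two_n, 3 * n // 2)],   # top 25%, bottom 75%
        "2NxnD": [(two_n, 3 * n // 2), (two_n, n // 2)],
        "nLx2N": [(n // 2, two_n), (3 * n // 2, two_n)],   # left 25%, right 75%
        "nRx2N": [(3 * n // 2, two_n), (n // 2, two_n)],
    }[mode]

# for a 32x32 CU (N = 16): 2NxnU gives a 32x8 PU on top and a 32x24 PU below
print(pu_sizes("2NxnU", 16))
```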
  • “N×N” and “N times N” are used interchangeably to refer to the pixel size of a video block in terms of its vertical and horizontal dimensions, for example, 16×16 pixels or 16 times 16 pixels.
  • an N×N block has N pixels in the vertical direction and N pixels in the horizontal direction, where N represents a non-negative integer value.
  • Pixels in a block can be arranged in rows and columns.
  • the block does not necessarily need to have the same number of pixels in the horizontal direction as in the vertical direction.
  • a block may include N×M pixels, where M is not necessarily equal to N.
  • the video encoder 100 may calculate the residual data of the TU of the CU.
  • a PU may include pixel data in a spatial domain (also referred to as a pixel domain), and a TU may include coefficients in the transform domain after a transform (e.g., a discrete cosine transform (DCT), an integer transform, a wavelet transform, or a conceptually similar transform) is applied to the residual video data.
  • the residual data may correspond to a pixel difference between a pixel of an uncoded image and a prediction value corresponding to a PU.
  • the video encoder 100 may form a TU including residual data of a CU, and then transform the TU to generate a transform coefficient of the CU.
  • video encoder 100 may perform quantization of the transform coefficients.
  • Quantization exemplarily refers to the process of quantizing coefficients to possibly reduce the amount of data used to represent the coefficients to provide further compression.
  • the quantization process may reduce the bit depth associated with some or all of the coefficients. For example, n-bit values may be rounded down to m-bit values during quantization, where n is greater than m.
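• A minimal sketch of the bit-depth reduction described above, assuming a simple scalar quantizer with round-to-nearest (real codecs use mode-dependent rounding offsets and scaling lists):

```python
# Illustrative scalar quantization: a coefficient needing n bits is reduced
# to an m-bit level by dividing by a step size, discarding precision (n > m).
def quantize(coeff, step):
    # round-to-nearest; real codecs add a rounding offset tuned per mode
    return int(round(coeff / step))

def dequantize(level, step):
    return level * step

c = 1003                      # coefficient needing roughly 10 bits
level = quantize(c, 16)       # 63, representable with roughly 6 bits
print(level, dequantize(level, 16))   # 63 1008 -> quantization error of 5
```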
  • the JEM model further improves the coding structure of video images.
  • a block coding structure called "Quad Tree Combined with Binary Tree” (QTBT) is introduced.
  • a CU can be square or rectangular.
  • a CTU first performs a quadtree partition, and the leaf nodes of the quadtree further perform a binary tree partition.
• there are two partitioning modes in binary tree partitioning: symmetric horizontal partitioning and symmetric vertical partitioning.
  • the leaf nodes of a binary tree are called CUs.
  • JEM's CUs cannot be further divided during the prediction and transformation process, which means that JEM's CU, PU, and TU have the same block size.
• the maximum size of the CTU is 256×256 luminance pixels.
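• The QTBT recursion described above can be sketched as follows; the split-decision callbacks are placeholders, since a real encoder chooses splits through a rate-distortion search:

```python
# Illustrative QTBT partitioning: a CTU is first split by a quadtree; each
# quadtree leaf may then be split further by a binary tree (horizontal or
# vertical symmetric splits). Leaves of the binary tree are CUs, which in
# JEM are not divided further for prediction or transform.
def qtbt_partition(x, y, w, h, quad_split, bin_split):
    # quad_split / bin_split are caller-supplied decision functions
    if w == h and quad_split(x, y, w, h):
        half = w // 2
        cus = []
        for (dx, dy) in ((0, 0), (half, 0), (0, half), (half, half)):
            cus += qtbt_partition(x + dx, y + dy, half, half,
                                  quad_split, bin_split)
        return cus
    direction = bin_split(x, y, w, h)     # None, "hor", or "ver"
    if direction == "hor":
        return (qtbt_partition(x, y, w, h // 2, quad_split, bin_split) +
                qtbt_partition(x, y + h // 2, w, h // 2, quad_split, bin_split))
    if direction == "ver":
        return (qtbt_partition(x, y, w // 2, h, quad_split, bin_split) +
                qtbt_partition(x + w // 2, y, w // 2, h, quad_split, bin_split))
    return [(x, y, w, h)]                 # this leaf is a CU

# Example: split a 128x128 CTU once by quadtree, then split the top-left
# 64x64 leaf horizontally once.
cus = qtbt_partition(
    0, 0, 128, 128,
    quad_split=lambda x, y, w, h: w == 128,
    bin_split=lambda x, y, w, h: "hor" if (x, y, w, h) == (0, 0, 64, 64) else None)
print(cus)
# [(0, 0, 64, 32), (0, 32, 64, 32), (64, 0, 64, 64), (0, 64, 64, 64), (64, 64, 64, 64)]
```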
  • the video encoder 100 may utilize a predefined scan order to scan the quantized transform coefficients to generate a serialized vector that can be entropy encoded.
• the video encoder 100 may perform adaptive scanning. After scanning the quantized transform coefficients to form a one-dimensional vector, the video encoder 100 may perform context-adaptive variable length decoding (CAVLC), context-adaptive binary arithmetic decoding (CABAC), syntax-based context-adaptive binary arithmetic decoding (SBAC), probability interval partition entropy (PIPE) decoding, or another entropy decoding method to entropy decode the one-dimensional vector.
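• The kind of predefined scan referred to above can be illustrated with a simple diagonal scan that serializes a 2-D block of quantized coefficients into a 1-D vector; the actual scan patterns depend on block size and mode:

```python
# Illustrative diagonal scan: serialize an NxN block of quantized transform
# coefficients into a 1-D list so that low-frequency coefficients (top-left)
# come first and trailing zeros cluster at the end for entropy coding.
def diagonal_scan(block):
    n = len(block)
    order = []
    for s in range(2 * n - 1):                 # anti-diagonals
        for y in range(n):
            x = s - y
            if 0 <= x < n:
                order.append(block[y][x])
    return order

block = [
    [9, 3, 1, 0],
    [4, 2, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(diagonal_scan(block))
# [9, 3, 4, 1, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```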
  • Video encoder 100 may also entropy encode syntax elements associated with the encoded video data for use by video decoder 200 to decode the video data.
  • video encoder 100 may assign a context within a context model to a symbol to be transmitted. Context can be related to whether adjacent values of a symbol are non-zero.
• the video encoder 100 may select a variable length code for a symbol to be transmitted. Codewords in variable length decoding (VLC) may be constructed such that relatively short codes correspond to more probable symbols and longer codes correspond to less probable symbols. In this way, using VLC can save bit rate relative to using equal-length codewords for each symbol to be transmitted.
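• The idea that shorter codewords are assigned to more probable symbols can be illustrated with a standard Huffman construction (shown only as an analogy; the VLC tables of a real codec are specified by the standard):

```python
import heapq

# Build a Huffman code: more probable symbols get shorter codewords.
def huffman_code(freqs):
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tie, merged])
        tie += 1
    return heap[0][2]

codes = huffman_code({"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10})
print(codes)  # e.g. {'a': '0', 'b': '10', 'd': '110', 'c': '111'} - 'a' is shortest
```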
  • the probability in CABAC can be determined based on the context assigned to the symbol.
  • the video encoder may perform inter prediction to reduce temporal redundancy between images.
  • a CU may have one or more prediction units PU according to the provisions of different video compression codec standards.
  • multiple PUs may belong to a CU, or PUs and CUs are the same size.
• the partitioning mode of the CU may be no further partitioning, or partitioning into one PU, and the PU is uniformly used for description.
• the video encoder may signal motion information for the PU to the video decoder.
  • the motion information of the PU may include: a reference image index, a motion vector, and a prediction direction identifier.
  • a motion vector may indicate a displacement between an image block (also called a video block, a pixel block, a pixel set, etc.) of a PU and a reference block of the PU.
  • the reference block of the PU may be a part of the reference picture similar to the image block of the PU.
  • the reference block may be located in a reference image indicated by a reference image index and a prediction direction identifier.
• the video encoder may generate a candidate prediction motion vector (Motion Vector, MV) list for each of the PUs according to a merge prediction mode or an advanced motion vector prediction mode process.
  • Each candidate prediction motion vector in the candidate prediction motion vector list for the PU may indicate motion information.
  • the motion information indicated by some candidate prediction motion vectors in the candidate prediction motion vector list may be based on the motion information of other PUs. If the candidate prediction motion vector indicates motion information specifying one of a spatial candidate prediction motion vector position or a temporal candidate prediction motion vector position, the present application may refer to the candidate prediction motion vector as an "original" candidate prediction motion vector.
• a merge mode is also referred to herein as a merge prediction mode.
  • the video encoder may generate additional candidate prediction motion vectors by combining partial motion vectors from different original candidate prediction motion vectors, modifying the original candidate prediction motion vectors, or inserting only zero motion vectors as candidate prediction motion vectors. These additional candidate prediction motion vectors are not considered as original candidate prediction motion vectors and may be referred to as artificially generated candidate prediction motion vectors in this application.
  • the techniques of this application generally relate to a technique for generating a list of candidate prediction motion vectors at a video encoder and a technique for generating the same list of candidate prediction motion vectors at a video decoder.
  • the video encoder and video decoder may generate the same candidate prediction motion vector list by implementing the same techniques used to construct the candidate prediction motion vector list. For example, both a video encoder and a video decoder may build a list with the same number of candidate prediction motion vectors (eg, five candidate prediction motion vectors).
• Video encoders and decoders may first consider spatial candidate prediction motion vectors (e.g., neighboring blocks in the same image), then consider temporal candidate prediction motion vectors (e.g., candidate prediction motion vectors in different images), and finally consider artificially generated candidate prediction motion vectors, until the desired number of candidate prediction motion vectors has been added to the list.
• a pruning operation may be used for certain types of candidate prediction motion vectors during construction of the candidate prediction motion vector list in order to remove duplicates from the list, while for other types of candidate prediction motion vectors pruning may not be used, in order to reduce decoder complexity.
  • a pruning operation may be performed to exclude candidate prediction motion vectors with duplicate motion information from the list of candidate prediction motion vectors.
• artificially generated candidate prediction motion vectors may be added without performing a pruning operation on them.
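• The construction order and pruning behavior described in the preceding items can be sketched as follows; the spatial and temporal inputs and the target list length of five are assumptions for illustration:

```python
# Illustrative candidate list construction: spatial candidates first (with
# pruning of duplicated motion information), then the temporal candidate,
# then artificially generated zero-motion candidates (added without pruning)
# until the list reaches the desired length.
def build_merge_list(spatial, temporal, max_len=5):
    cand_list = []
    for c in spatial:                      # prune duplicated motion info
        if c not in cand_list and len(cand_list) < max_len:
            cand_list.append(c)
    for c in temporal:
        if len(cand_list) < max_len:
            cand_list.append(c)
    while len(cand_list) < max_len:        # artificial zero-MV candidates
        cand_list.append({"mv": (0, 0), "ref_idx": 0})
    return cand_list

spatial = [{"mv": (4, -2), "ref_idx": 0}, {"mv": (4, -2), "ref_idx": 0},
           {"mv": (1, 3), "ref_idx": 1}]
temporal = [{"mv": (5, 0), "ref_idx": 0}]
print(build_merge_list(spatial, temporal))
```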
  • the video encoder may select the candidate prediction motion vector from the candidate prediction motion vector list and output the candidate prediction motion vector index in the code stream.
  • the selected candidate prediction motion vector may be a candidate prediction motion vector having a motion vector that most closely matches the predictor of the target PU being decoded.
  • the candidate prediction motion vector index may indicate a position where a candidate prediction motion vector is selected in the candidate prediction motion vector list.
  • the video encoder may also generate a predictive image block for the PU based on a reference block indicated by the motion information of the PU. The motion information of the PU may be determined based on the motion information indicated by the selected candidate prediction motion vector.
  • the motion information of the PU may be the same as the motion information indicated by the selected candidate prediction motion vector.
  • the motion information of the PU may be determined based on the motion vector difference of the PU and the motion information indicated by the selected candidate prediction motion vector.
  • the video encoder may generate one or more residual image blocks for the CU based on the predictive image blocks of the PU of the CU and the original image blocks for the CU. The video encoder may then encode one or more residual image blocks and output one or more residual image blocks in a code stream.
  • the codestream may include data identifying a selected candidate prediction motion vector in the candidate prediction motion vector list of the PU.
  • the video decoder may determine the motion information of the PU based on the motion information indicated by the selected candidate prediction motion vector in the candidate prediction motion vector list of the PU.
  • the video decoder may identify one or more reference blocks for the PU based on the motion information of the PU. After identifying one or more reference blocks of the PU, the video decoder may generate predictive image blocks for the PU based on the one or more reference blocks of the PU.
  • the video decoder may reconstruct an image block for a CU based on a predictive image block for a PU of the CU and one or more residual image blocks for the CU.
  • the present application may describe a position or an image block as having various spatial relationships with a CU or a PU. This description can be interpreted to mean that the position or image block and the image block associated with the CU or PU have various spatial relationships.
  • a PU currently being decoded by a video decoder may be referred to as a current PU, and may also be referred to as a current image block to be processed.
  • This application may refer to the CU that the video decoder is currently decoding as the current CU.
  • This application may refer to the image currently being decoded by the video decoder as the current image. It should be understood that this application is applicable to a case where the PU and the CU have the same size, or the PU is the CU, and the PU is used to represent the same.
  • video encoder 100 may use inter prediction to generate predictive image blocks and motion information for a PU of a CU.
  • the motion information of a given PU may be the same or similar to the motion information of one or more nearby PUs (ie, PUs whose image blocks are spatially or temporally near the image blocks of the given PU). Because nearby PUs often have similar motion information, video encoder 100 may refer to the motion information of nearby PUs to encode motion information for a given PU. Encoding the motion information of a given PU with reference to the motion information of nearby PUs can reduce the number of encoding bits required to indicate the motion information of a given PU in the code stream.
  • Video encoder 100 may refer to motion information of nearby PUs in various ways to encode motion information for a given PU.
  • video encoder 100 may indicate that the motion information of a given PU is the same as the motion information of nearby PUs.
  • This application may use a merge mode to refer to indicating that the motion information of a given PU is the same as that of nearby PUs or may be derived from the motion information of nearby PUs.
  • the video encoder 100 may calculate a Motion Vector Difference (MVD) for a given PU.
  • MVD Motion Vector Difference
  • MVD indicates the difference between the motion vector of a given PU and the motion vector of a nearby PU.
  • Video encoder 100 may include MVD instead of a motion vector of a given PU in the motion information of a given PU. Representing MVD in the codestream requires fewer coding bits than representing the motion vector of a given PU.
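• A minimal sketch of the MVD relationship described above:

```python
# MVD = motion vector of the given PU minus the predictor taken from a
# nearby PU; the decoder adds the MVD back to recover the motion vector.
def encode_mvd(mv, mv_pred):
    return (mv[0] - mv_pred[0], mv[1] - mv_pred[1])

def decode_mv(mvd, mv_pred):
    return (mv_pred[0] + mvd[0], mv_pred[1] + mvd[1])

mv, mv_pred = (133, -47), (130, -45)
mvd = encode_mvd(mv, mv_pred)      # (3, -2): small values, few coding bits
assert decode_mv(mvd, mv_pred) == mv
```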
  • This application may use advanced motion vector prediction mode to refer to the motion information of a given PU by using the MVD and an index value identifying a candidate motion vector.
  • the video encoder 100 may generate a list of candidate predicted motion vectors for a given PU.
  • the candidate prediction motion vector list may include one or more candidate prediction motion vectors.
  • Each of the candidate prediction motion vectors in the candidate prediction motion vector list for a given PU may specify motion information.
  • the motion information indicated by each candidate prediction motion vector may include a motion vector, a reference image index, and a prediction direction identifier.
• the candidate prediction motion vectors in the candidate prediction motion vector list may include "original" candidate prediction motion vectors, each of which indicates motion information of one of the specified candidate prediction motion vector positions within a PU different from the given PU.
  • the video encoder 100 may select one of the candidate prediction motion vectors from the candidate prediction motion vector list for the PU. For example, a video encoder may compare each candidate prediction motion vector with the PU being decoded and may select a candidate prediction motion vector with a desired code rate-distortion cost. Video encoder 100 may output a candidate prediction motion vector index for a PU. The candidate prediction motion vector index may identify the position of the selected candidate prediction motion vector in the candidate prediction motion vector list.
  • the video encoder 100 may generate a predictive image block for a PU based on a reference block indicated by motion information of the PU.
  • the motion information of the PU may be determined based on the motion information indicated by the selected candidate prediction motion vector in the candidate prediction motion vector list for the PU.
  • the motion information of the PU may be the same as the motion information indicated by the selected candidate prediction motion vector.
  • motion information of a PU may be determined based on a motion vector difference for the PU and motion information indicated by a selected candidate prediction motion vector.
  • Video encoder 100 may process predictive image blocks for a PU as described previously.
  • video decoder 200 may generate a list of candidate predicted motion vectors for each of the PUs of the CU.
  • the candidate prediction motion vector list generated by the video decoder 200 for the PU may be the same as the candidate prediction motion vector list generated by the video encoder 100 for the PU.
  • the syntax element parsed from the bitstream may indicate the position of the candidate prediction motion vector selected in the candidate prediction motion vector list of the PU.
  • the video decoder 200 may generate predictive image blocks for the PU based on one or more reference blocks indicated by the motion information of the PU.
  • Video decoder 200 may determine motion information of the PU based on the motion information indicated by the selected candidate prediction motion vector in the candidate prediction motion vector list for the PU. Video decoder 200 may reconstruct an image block for a CU based on a predictive image block for a PU and a residual image block for a CU.
• the construction of the candidate prediction motion vector list and the parsing of the selected candidate prediction motion vector from the code stream are independent of each other, and may be performed in any order or in parallel.
• the position of the selected candidate prediction motion vector in the candidate prediction motion vector list is first parsed from the code stream, and the candidate prediction motion vector list is then constructed based on the parsed position.
• For example, if parsing the bitstream shows that the selected candidate prediction motion vector is the candidate with index 3 in the candidate prediction motion vector list, only the candidates from index 0 to index 3 need to be constructed in order to determine the candidate prediction motion vector with index 3, which can reduce complexity and improve decoding efficiency.
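• The decoder-side shortcut described in the preceding item can be sketched as follows; the candidate-generation callback is a placeholder standing in for the normative derivation process:

```python
# Illustrative decoder behavior: parse the selected index first, then build
# the candidate list only up to that index instead of constructing all
# entries, which reduces complexity.
def partial_candidate_list(parsed_index, generate_candidate):
    # generate_candidate(i) is assumed to yield the i-th candidate in the
    # same deterministic order the encoder used
    return [generate_candidate(i) for i in range(parsed_index + 1)]

cands = partial_candidate_list(3, lambda i: {"mv": (i, -i), "ref_idx": 0})
selected = cands[3]
print(selected)   # {'mv': (3, -3), 'ref_idx': 0}
```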
  • FIG. 2 is a block diagram of a video encoder 100 according to an example described in the embodiment of the present application.
  • the video encoder 100 is configured to output a video to the post-processing entity 41.
  • the post-processing entity 41 represents an example of a video entity that can process the encoded video data from the video encoder 100, such as a media-aware network element (MANE) or a stitching / editing device.
  • the post-processing entity 41 may be an instance of a network entity.
• the post-processing entity 41 and the video encoder 100 may be parts of separate devices, while in other cases the functionality described with respect to the post-processing entity 41 may be performed by the same device that includes the video encoder 100.
  • the post-processing entity 41 is an example of the storage device 40 of FIG. 1.
  • the video encoder 100 includes a prediction processing unit 108, a filter unit 106, a decoded image buffer (DPB) 107, a summer 112, a transformer 101, a quantizer 102, and an entropy encoder 103.
  • the prediction processing unit 108 includes an inter predictor 110 and an intra predictor 109.
  • the video encoder 100 further includes an inverse quantizer 104, an inverse transformer 105, and a summer 111.
  • the filter unit 106 is intended to represent one or more loop filters, such as a deblocking filter, an adaptive loop filter (ALF), and a sample adaptive offset (SAO) filter.
  • the filter unit 106 is shown as an in-loop filter in FIG. 2A, in other implementations, the filter unit 106 may be implemented as a post-loop filter.
  • the video encoder 100 may further include a video data memory and a segmentation unit (not shown in the figure).
  • the video data memory may store video data to be encoded by the components of the video encoder 100.
  • the video data stored in the video data storage may be obtained from the video source 120.
  • the DPB 107 may be a reference image memory that stores reference video data used by the video encoder 100 to encode video data in an intra-frame or inter-frame decoding mode.
• Video data memory and DPB 107 can be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM) including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices.
  • Video data storage and DPB 107 can be provided by the same storage device or separate storage devices.
  • the video data memory may be on-chip with other components of video encoder 100 or off-chip relative to those components.
  • the video encoder 100 receives video data and stores the video data in a video data memory.
  • the segmentation unit divides the video data into several image blocks, and these image blocks can be further divided into smaller blocks, such as image block segmentation based on a quad tree structure or a binary tree structure. This segmentation may also include segmentation into slices, tiles, or other larger units.
  • Video encoder 100 typically illustrates components that encode image blocks within a video slice to be encoded.
• the slice may be divided into multiple image blocks (and possibly into sets of image blocks referred to as tiles).
  • the prediction processing unit 108 may select one of a plurality of possible coding modes for the current image block, such as one of a plurality of intra coding modes or one of a plurality of inter coding modes.
• the prediction processing unit 108 may provide the obtained intra-coded or inter-coded block to the summer 112 to generate a residual block, and to the summer 111 to reconstruct an encoded block used as a reference image.
  • the intra predictor 109 within the prediction processing unit 108 may perform intra predictive encoding of the current image block with respect to one or more neighboring blocks in the same frame or slice as the current block to be encoded to remove spatial redundancy.
  • the inter predictor 110 within the prediction processing unit 108 may perform inter predictive coding of the current image block with respect to one or more prediction blocks in the one or more reference images to remove temporal redundancy.
• the inter predictor 110 may be configured to determine an inter prediction mode for encoding the current image block. For example, the inter predictor 110 may use rate-distortion analysis to calculate the rate-distortion values of the various inter prediction modes in the set of candidate inter prediction modes, and select the inter prediction mode with the best rate-distortion characteristics from them.
• Rate-distortion analysis generally determines the amount of distortion (or error) between a coded block and the original uncoded block that was encoded to produce the coded block, as well as the bit rate (that is, the number of bits) used to produce the coded block.
  • the inter predictor 110 may determine that the inter prediction mode with the lowest code rate distortion cost of encoding the current image block in the candidate inter prediction mode set is the inter prediction mode used for inter prediction of the current image block.
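• The mode decision described above is commonly expressed as minimizing a Lagrangian cost J = D + λ·R; the following sketch uses placeholder distortion and rate values purely for illustration:

```python
# Illustrative rate-distortion mode decision: for each candidate inter
# prediction mode, combine its distortion D (e.g., SSE between original and
# reconstructed block) and its rate R (bits) into J = D + lam * R, and pick
# the mode with the smallest cost.
def best_mode(candidates, lam):
    return min(candidates, key=lambda c: c["D"] + lam * c["R"])

modes = [
    {"name": "merge", "D": 1200.0, "R": 6},
    {"name": "AMVP",  "D":  950.0, "R": 38},
    {"name": "skip",  "D": 1600.0, "R": 1},
]
print(best_mode(modes, lam=10.0)["name"])   # 'merge' (1260 vs 1330 vs 1610)
```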
• the inter predictor 110 is configured to predict motion information (such as a motion vector) of one or more sub-blocks in the current image block based on the determined inter prediction mode, and to use the motion information (such as the motion vector) of the one or more sub-blocks in the current image block to obtain or generate a prediction block of the current image block.
  • the inter predictor 110 may locate a prediction block pointed to by the motion vector in one of the reference image lists.
  • the inter predictor 110 may also generate syntax elements associated with image blocks and video slices for use by the video decoder 200 when decoding image blocks of the video slice.
• the inter predictor 110 uses the motion information of each sub-block to perform a motion compensation process to generate a prediction block of each sub-block, thereby obtaining a prediction block of the current image block. It should be understood that the inter predictor 110 here performs motion estimation and motion compensation processes.
• the inter predictor 110 may provide information indicating the selected inter prediction mode of the current image block to the entropy encoder 103, so that the entropy encoder 103 encodes the information indicating the selected inter prediction mode.
  • the intra predictor 109 may perform intra prediction on the current image block.
  • the intra predictor 109 may determine an intra prediction mode used to encode the current block.
• the intra predictor 109 may use rate-distortion analysis to calculate the rate-distortion values of the various intra prediction modes to be tested, and select the intra prediction mode with the best rate-distortion characteristics from the modes to be tested.
• In any case, after the intra prediction mode is selected for the image block, the intra predictor 109 may provide information indicating the selected intra prediction mode of the current image block to the entropy encoder 103, so that the entropy encoder 103 encodes the information indicating the selected intra prediction mode.
  • the video encoder 100 forms a residual image block by subtracting the prediction block from the current image block to be encoded.
  • the summer 112 represents one or more components that perform this subtraction operation.
  • the residual video data in the residual block may be included in one or more TUs and applied to the transformer 101.
  • the transformer 101 transforms the residual video data into residual transform coefficients using a transform such as a discrete cosine transform (DCT) or a conceptually similar transform.
  • the transformer 101 may transform the residual video data from a pixel value domain to a transform domain, such as a frequency domain.
  • the transformer 101 may send the obtained transform coefficients to a quantizer 102.
  • a quantizer 102 quantizes the transform coefficients to further reduce the bit code rate.
  • the quantizer 102 may then perform a scan of a matrix containing the quantized transform coefficients.
  • the entropy encoder 103 may perform scanning.
• After quantization, the entropy encoder 103 entropy encodes the quantized transform coefficients. For example, the entropy encoder 103 may perform context-adaptive variable-length coding (CAVLC), context-adaptive binary arithmetic coding (CABAC), syntax-based context-adaptive binary arithmetic coding (SBAC), probability interval partition entropy (PIPE) coding, or another entropy coding method or technique.
  • the encoded code stream may be transmitted to the video decoder 200, or archived for later transmission or retrieved by the video decoder 200.
• the entropy encoder 103 may also entropy encode syntax elements associated with the encoded video data.
• the inverse quantizer 104 and the inverse transformer 105 respectively apply inverse quantization and inverse transform to reconstruct the residual block in the pixel domain, for example, for later use as a reference block of a reference image.
  • the summer 111 adds the reconstructed residual block to a prediction block generated by the inter predictor 110 or the intra predictor 109 to generate a reconstructed image block.
• the filter unit 106 may be applied to the reconstructed image block to reduce distortion, such as block artifacts. The reconstructed image block is then stored as a reference block in the decoded image buffer 107, and may be used by the inter predictor 110 as a reference block to perform inter prediction on blocks in subsequent video frames or images.
• the video encoder 100 may directly quantize the residual signal without processing by the transformer 101 and, correspondingly, without processing by the inverse transformer 105; or, for some image blocks or image frames, the video encoder 100 does not generate residual data and accordingly does not need processing by the transformer 101, quantizer 102, inverse quantizer 104, and inverse transformer 105; or, the video encoder 100 may store the reconstructed image block directly as a reference block without processing by the filter unit 106; alternatively, the quantizer 102 and the inverse quantizer 104 in the video encoder 100 may be merged together.
  • FIG. 3 is a block diagram of an example video decoder 200 described in the embodiment of the present application.
  • the video decoder 200 includes an entropy decoder 203, a prediction processing unit 208, an inverse quantizer 204, an inverse transformer 205, a summer 211, a filter unit 206, and a decoded image buffer 207.
  • the prediction processing unit 208 may include an inter predictor 210 and an intra predictor 209.
  • video decoder 200 may perform a decoding process that is substantially inverse to the encoding process described with respect to video encoder 100 from FIG. 2.
  • the video decoder 200 receives from the video encoder 100 an encoded video codestream representing image blocks of the encoded video slice and associated syntax elements.
  • the video decoder 200 may receive video data from the network entity 42, optionally, the video data may also be stored in a video data storage (not shown in the figure).
  • the video data memory may store video data, such as an encoded video code stream, to be decoded by components of the video decoder 200.
  • the video data stored in the video data storage can be obtained, for example, from the storage device 40, from a local video source such as a camera, via a wired or wireless network of video data, or by accessing a physical data storage medium.
• the video data memory may serve as a coded picture buffer (CPB) that stores encoded video data from the encoded video codestream. Therefore, although the video data memory is not shown in FIG. 3, the video data memory and the DPB 207 may be the same memory, or may be separately provided memories. Video data memory and DPB 207 can be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM) including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices. In various examples, the video data memory may be integrated on a chip with other components of the video decoder 200 or provided off-chip relative to those components.
  • the network entity 42 may be, for example, a server, a MANE, a video editor / splicer, or other such device for implementing one or more of the techniques described above.
  • the network entity 42 may or may not include a video encoder, such as video encoder 100.
  • the network entity 42 may implement some of the techniques described in this application.
  • the network entity 42 and the video decoder 200 may be part of separate devices, while in other cases, the functionality described with respect to the network entity 42 may be performed by the same device including the video decoder 200.
  • the network entity 42 may be an example of the storage device 40 of FIG. 1.
  • the entropy decoder 203 of the video decoder 200 entropy decodes the code stream to generate quantized coefficients and some syntax elements.
  • the entropy decoder 203 forwards the syntax elements to the prediction processing unit 208.
  • Video decoder 200 may receive syntax elements at a video slice level and / or an image block level.
• the intra predictor 209 of the prediction processing unit 208 may generate a prediction block for the image block of the current video slice based on the signaled intra prediction mode and data from previously decoded blocks of the current frame or image.
• the inter predictor 210 of the prediction processing unit 208 may determine, based on the syntax elements received from the entropy decoder 203, an inter prediction mode for decoding the current image block of the current video slice, and decode the current image block (for example, perform inter prediction) based on the determined inter prediction mode.
• the inter predictor 210 may determine whether to use a new inter prediction mode to predict the current image block of the current video slice. If the syntax elements indicate that a new inter prediction mode is used to predict the current image block, the inter predictor 210 predicts the motion information of the current image block or a sub-block of the current image block based on the new inter prediction mode (for example, a new inter prediction mode specified by a syntax element or a default new inter prediction mode), and uses the motion information of the current image block or the sub-block of the current image block to obtain or generate a prediction block of the current image block or the sub-block through a motion compensation process.
  • the motion information here may include reference image information and motion vectors, where the reference image information may include but is not limited to unidirectional / bidirectional prediction information, a reference image list number, and a reference image index corresponding to the reference image list.
  • a prediction block may be generated from one of reference pictures within one of the reference picture lists.
  • the video decoder 200 may construct a reference image list, that is, a list 0 and a list 1, based on the reference images stored in the DPB 207.
  • the reference frame index of the current image may be included in one or more of the reference frame list 0 and list 1.
• the video encoder 100 may signal a specific syntax element indicating whether a new inter prediction mode is used to decode a specific block, and may also signal which new inter prediction mode is used to decode the specific block. It should be understood that the inter predictor 210 here performs a motion compensation process.
  • the inverse quantizer 204 inverse quantizes, that is, dequantizes, the quantized transform coefficients provided in the code stream and decoded by the entropy decoder 203.
  • the inverse quantization process may include using a quantization parameter calculated by the video encoder 100 for each image block in the video slice to determine the degree of quantization that should be applied and similarly to determine the degree of inverse quantization that should be applied.
  • the inverse transformer 205 applies an inverse transform to transform coefficients, such as an inverse DCT, an inverse integer transform, or a conceptually similar inverse transform process to generate a residual block in the pixel domain.
• the video decoder 200 obtains a reconstructed block, that is, a decoded image block, by summing the residual block from the inverse transformer 205 with the corresponding prediction block generated by the inter predictor 210.
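• A minimal sketch of the summation performed by the summer 211; clipping to an 8-bit sample range is an assumption for illustration:

```python
# Reconstruction: add the inverse-transformed residual block to the
# prediction block, sample by sample, clipping to the valid pixel range.
def reconstruct(residual, prediction, bit_depth=8):
    max_val = (1 << bit_depth) - 1
    return [[min(max(r + p, 0), max_val)
             for r, p in zip(res_row, pred_row)]
            for res_row, pred_row in zip(residual, prediction)]

res  = [[-3, 5], [0, -12]]
pred = [[100, 250], [7, 30]]
print(reconstruct(res, pred))   # [[97, 255], [7, 18]]
```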
  • the summer 211 represents a component that performs this summing operation.
• a loop filter (in the decoding loop or after the decoding loop) may also be used to smooth pixel transitions or otherwise improve video quality, if necessary.
  • the filter unit 206 may represent one or more loop filters, such as a deblocking filter, an adaptive loop filter (ALF), and a sample adaptive offset (SAO) filter.
  • the filter unit 206 is shown as an in-loop filter in FIG. 2B, in other implementations, the filter unit 206 may be implemented as a post-loop filter.
• the filter unit 206 is applied to the reconstructed block to reduce block distortion, and the result is output as a decoded video stream.
  • a decoded image block in a given frame or image may also be stored in a decoded image buffer 207, and the decoded image buffer 207 stores a reference image for subsequent motion compensation.
  • the decoded image buffer 207 may be part of a memory, which may also store the decoded video for later presentation on a display device, such as the display device 220 of FIG. 1, or may be separate from such memory.
• the video decoder 200 may generate an output video stream without processing by the filter unit 206; or, for certain image blocks or image frames, the entropy decoder 203 of the video decoder 200 does not decode quantized coefficients, and accordingly these blocks do not need to be processed by the inverse quantizer 204 and the inverse transformer 205.
  • the techniques of this application exemplarily involve inter-frame decoding. It should be understood that the techniques of this application may be performed by any of the video decoders described in this application.
• the video decoder includes, for example, the video encoder 100 and the video decoder 200 shown and described with respect to FIGS. 1-3. That is, in one feasible implementation, the inter predictor 110 described with respect to FIG. 2 may perform specific techniques described below when performing inter prediction during encoding of a block of video data. In another feasible implementation, the inter predictor 210 described with respect to FIG. 3 may perform specific techniques described below when performing inter prediction during decoding of a block of video data.
  • a reference to a generic "video encoder" or "video decoder” may include video encoder 100, video decoder 200, or another video encoding or coding unit.
  • FIG. 4 is a schematic block diagram of an inter prediction module according to an embodiment of the present application.
  • the inter prediction module 121 may include a motion estimation unit 42 and a motion compensation unit 44.
  • the relationship between PU and CU is different in different video compression codecs.
  • the inter prediction module 121 may partition a current CU into a PU according to a plurality of partitioning modes.
• the inter prediction module 121 may partition a current CU into a PU according to 2N×2N, 2N×N, N×2N, and N×N partition modes.
  • the current CU is the current PU, which is not limited.
  • the inter prediction module 121 may perform integer motion estimation (IME) and then perform fractional motion estimation (FME) on each of the PUs.
  • the inter prediction module 121 may search a reference block for a PU in one or more reference images. After the reference block for the PU is found, the inter prediction module 121 may generate a motion vector indicating the spatial displacement between the PU and the reference block for the PU with integer precision.
• the inter prediction module 121 may refine a motion vector generated by performing IME on the PU.
  • a motion vector generated by performing FME on a PU may have sub-integer precision (eg, 1/2 pixel precision, 1/4 pixel precision, etc.).
  • the inter prediction module 121 may use the motion vector for the PU to generate a predictive image block for the PU.
  • the inter prediction module 121 may generate a list of candidate prediction motion vectors for the PU.
  • the candidate prediction motion vector list may include one or more original candidate prediction motion vectors and one or more additional candidate prediction motion vectors derived from the original candidate prediction motion vectors.
  • the inter prediction module 121 may select the candidate prediction motion vector from the candidate prediction motion vector list and generate a motion vector difference (MVD) for the PU.
  • the MVD for a PU may indicate a difference between a motion vector indicated by a selected candidate prediction motion vector and a motion vector generated for the PU using IME and FME.
  • the inter prediction module 121 may output a candidate prediction motion vector index that identifies the position of the selected candidate prediction motion vector in the candidate prediction motion vector list.
  • the inter prediction module 121 may also output the MVD of the PU.
• a detailed implementation of the advanced motion vector prediction (AMVP) mode in the embodiment of the present application is described below with reference to FIG. 6.
  • the inter prediction module 121 may also perform a merge operation on each of the PUs.
  • the inter prediction module 121 may generate a list of candidate prediction motion vectors for the PU.
  • the candidate prediction motion vector list for the PU may include one or more original candidate prediction motion vectors and one or more additional candidate prediction motion vectors derived from the original candidate prediction motion vectors.
  • the original candidate prediction motion vector in the candidate prediction motion vector list may include one or more spatial candidate prediction motion vectors and temporal candidate prediction motion vectors.
  • the spatial candidate prediction motion vector may indicate motion information of other PUs in the current image.
  • the temporal candidate prediction motion vector may be based on motion information of a corresponding PU different from the current picture.
  • the temporal candidate prediction motion vector may also be referred to as temporal motion vector prediction (TMVP).
  • the inter prediction module 121 may select one of the candidate prediction motion vectors from the candidate prediction motion vector list. The inter prediction module 121 may then generate a predictive image block for the PU based on the reference block indicated by the motion information of the PU. In the merge mode, the motion information of the PU may be the same as the motion information indicated by the selected candidate prediction motion vector.
  • Figure 5 described below illustrates an exemplary flowchart for Merge.
• the inter prediction module 121 may select either the predictive image block generated through the FME operation or the predictive image block generated through the merge operation. In some feasible implementations, the inter prediction module 121 may select a predictive image block for the PU based on a rate-distortion cost analysis of the predictive image block generated by the FME operation and the predictive image block generated by the merge operation.
• the inter prediction module 121 may select a partitioning mode for the current CU. In some embodiments, the inter prediction module 121 may select the partitioning mode for the current CU based on a rate-distortion cost analysis of the selected predictive image blocks of the PUs generated by partitioning the current CU according to each of the partitioning modes.
  • the inter prediction module 121 may output a predictive image block associated with a PU belonging to the selected partition mode to the residual generation module 102.
  • the inter prediction module 121 may output a syntax element indicating motion information of a PU belonging to the selected partitioning mode to the entropy encoding module 116.
• the inter prediction module 121 includes IME modules 180A to 180N (collectively referred to as "IME module 180"), FME modules 182A to 182N (collectively referred to as "FME module 182"), merge modules 184A to 184N (collectively referred to as "merge module 184"), PU mode decision modules 186A to 186N (collectively referred to as "PU mode decision module 186"), and a CU mode decision module 188 (which may also perform a mode decision process from CTU to CU).
  • the IME module 180, the FME module 182, and the merge module 184 may perform an IME operation, an FME operation, and a merge operation on a PU of the current CU.
  • the inter prediction module 121 is illustrated in the schematic diagram of FIG. 4 as including a separate IME module 180, an FME module 182, and a merging module 184 for each PU of each partitioning mode of the CU. In other feasible implementations, the inter prediction module 121 does not include a separate IME module 180, an FME module 182, and a merge module 184 for each PU of each partitioning mode of the CU.
• the IME module 180A, the FME module 182A, and the merge module 184A may perform IME operations, FME operations, and merge operations on a PU generated by dividing a CU according to a 2N×2N split mode.
  • the PU mode decision module 186A may select one of the predictive image blocks generated by the IME module 180A, the FME module 182A, and the merge module 184A.
• the IME module 180B, the FME module 182B, and the merge module 184B may perform an IME operation, an FME operation, and a merge operation on a left PU generated by dividing a CU according to an N×2N division mode.
  • the PU mode decision module 186B may select one of the predictive image blocks generated by the IME module 180B, the FME module 182B, and the merge module 184B.
• the IME module 180C, the FME module 182C, and the merge module 184C may perform an IME operation, an FME operation, and a merge operation on a right PU generated by dividing a CU according to an N×2N division mode.
  • the PU mode decision module 186C may select one of the predictive image blocks generated by the IME module 180C, the FME module 182C, and the merge module 184C.
• the IME module 180N, the FME module 182N, and the merge module 184N may perform an IME operation, an FME operation, and a merge operation on a lower right PU generated by dividing a CU according to an N×N division mode.
  • the PU mode decision module 186N may select one of the predictive image blocks generated by the IME module 180N, the FME module 182N, and the merge module 184N.
• the PU mode decision module 186 may select a predictive image block based on a rate-distortion cost analysis of multiple possible predictive image blocks, selecting the predictive image block that provides the best rate-distortion cost for a given decoding situation. For example, for bandwidth-constrained applications, the PU mode decision module 186 may prefer predictive image blocks that increase the compression ratio, while for other applications the PU mode decision module 186 may prefer predictive image blocks that increase the quality of the reconstructed video.
  • the CU mode decision module 188 selects a partition mode for the current CU and outputs the predictive image block and motion information of the PU belonging to the selected partition mode .
  • FIG. 5 is an exemplary flowchart of a merge mode in an embodiment of the present application.
• the merge operation 200 may be performed by a video encoder (e.g., video encoder 20).
  • the video encoder may perform a merge operation different from the merge operation 200.
  • the video encoder may perform a merge operation, where the video encoder performs more or fewer steps than the merge operation 200 or steps different from the merge operation 200.
  • the video encoder may perform the steps of the merge operation 200 in a different order or in parallel.
  • the encoder may also perform a merge operation 200 on a PU encoded in a skip mode.
  • the video encoder may generate a list of candidate predicted motion vectors for the current PU (202).
  • the video encoder may generate a list of candidate prediction motion vectors for the current PU in various ways. For example, the video encoder may generate a list of candidate prediction motion vectors for the current PU according to one of the example techniques described below with respect to FIGS. 8-12.
  • the candidate prediction motion vector list for the current PU may include a temporal candidate prediction motion vector.
  • the temporal candidate prediction motion vector may indicate motion information of a co-located PU in the time domain.
  • a co-located PU may be spatially in the same position in the image frame as the current PU, but in a reference picture instead of the current picture.
• a reference picture that includes the co-located PU may be referred to as a related reference picture.
  • a reference image index of a related reference image may be referred to as a related reference image index in this application.
  • the current image may be associated with one or more reference image lists (eg, list 0, list 1, etc.).
  • the reference image index may indicate a reference image by indicating a position in a reference image list of the reference image.
  • the current image may be associated with a combined reference image list.
  • the related reference picture index is the reference picture index of the PU covering the reference index source position associated with the current PU.
  • the reference index source location associated with the current PU is adjacent to the left of the current PU or above the current PU.
• a PU is said to "cover" a specific location if the image block associated with the PU includes that location.
  • the video encoder can use a zero reference image index.
  • the reference index source location associated with the current PU is within the current CU.
  • the PU may need to access motion information of another PU of the current CU in order to determine a reference picture containing a co-located PU. Therefore, these video encoders may use motion information (ie, a reference picture index) of a PU belonging to the current CU to generate a temporal candidate prediction motion vector for the current PU. In other words, these video encoders may use temporal information of a PU belonging to the current CU to generate a temporal candidate prediction motion vector. Therefore, the video encoder may not be able to generate a list of candidate prediction motion vectors for the current PU and the PU covering the reference index source position associated with the current PU in parallel.
  • the video encoder may explicitly set the relevant reference picture index without referring to the reference picture index of any other PU. This may enable the video encoder to generate candidate prediction motion vector lists for the current PU and other PUs of the current CU in parallel. Because the video encoder explicitly sets the relevant reference picture index, the relevant reference picture index is not based on the motion information of any other PU of the current CU. In some feasible implementations where the video encoder explicitly sets the relevant reference picture index, the video encoder may always set the relevant reference picture index to a fixed, predefined preset reference picture index (eg, 0).
  • the video encoder may generate a temporal candidate prediction motion vector based on the motion information of the co-located PU in the reference frame indicated by the preset reference picture index, and may include the temporal candidate prediction motion vector in the candidate prediction of the current CU List of motion vectors.
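• A simplified sketch of deriving the temporal candidate from the co-located PU in the reference picture indicated by the preset reference index; the motion-field lookup is a placeholder and motion vector scaling is omitted:

```python
# Illustrative temporal candidate derivation: take the motion information of
# the PU at the same spatial position in the reference picture indicated by
# the preset reference index (assumed to be 0), if such motion info exists.
def temporal_candidate(colocated_motion_field, pu_x, pu_y, preset_ref_idx=0):
    # colocated_motion_field maps (x, y) block positions to stored motion info
    info = colocated_motion_field.get((pu_x, pu_y))
    if info is None:
        return None                      # co-located block has no motion info
    return {"mv": info["mv"], "ref_idx": preset_ref_idx}

field = {(16, 32): {"mv": (7, -1), "ref_idx": 2}}
print(temporal_candidate(field, 16, 32))   # {'mv': (7, -1), 'ref_idx': 0}
```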
• the video encoder may explicitly signal the related reference picture index in a syntax structure (e.g., an image header, a slice header, an APS, or another syntax structure).
  • the video encoder may signal the decoder to the relevant reference picture index for each LCU (ie, CTU), CU, PU, TU, or other type of sub-block. For example, the video encoder may signal that the relevant reference picture index for each PU of the CU is equal to "1".
  • the relevant reference image index may be set implicitly rather than explicitly.
• the video encoder may use the motion information of PUs in the reference images indicated by the reference image indexes of PUs covering locations outside the current CU to generate each temporal candidate prediction motion vector in the candidate prediction motion vector lists for the PUs of the current CU, even if these locations are not strictly adjacent to the current PU.
  • the video encoder may generate predictive image blocks associated with the candidate prediction motion vectors in the candidate prediction motion vector list (204).
• the video encoder may generate the predictive image block associated with a candidate prediction motion vector by determining the motion information of the current PU based on the motion information indicated by the candidate prediction motion vector and then generating the predictive image block based on one or more reference blocks indicated by the motion information of the current PU.
  • the video encoder may then select one of the candidate prediction motion vectors from the candidate prediction motion vector list (206).
  • the video encoder can select candidate prediction motion vectors in various ways. For example, a video encoder may select one of the candidate prediction motion vectors based on a code rate-distortion cost analysis of each of the predictive image blocks associated with the candidate prediction motion vector.
  • the video encoder may output a candidate prediction motion vector index (208).
  • the candidate prediction motion vector index may indicate a position where a candidate prediction motion vector is selected in the candidate prediction motion vector list.
  • the candidate prediction motion vector index may be represented as "merge_idx".
  • FIG. 6 is an exemplary flowchart of an advanced motion vector prediction (AMVP) mode in an embodiment of the present application.
• the AMVP operation shown in FIG. 6 may be performed by a video encoder (e.g., video encoder 20).
  • the video encoder may generate one or more motion vectors for the current PU (211).
  • the video encoder may perform integer motion estimation and fractional motion estimation to generate motion vectors for the current PU.
  • the current image may be associated with two reference image lists (List 0 and List 1).
  • the video encoder may generate a list 0 motion vector or a list 1 motion vector for the current PU.
  • the list 0 motion vector may indicate a spatial displacement between an image block of the current PU and a reference block in a reference image in list 0.
  • the list 1 motion vector may indicate a spatial displacement between an image block of the current PU and a reference block in a reference image in list 1.
  • the video encoder may generate a list 0 motion vector and a list 1 motion vector for the current PU.
  • the video encoder may generate predictive image blocks for the current PU (212).
  • the video encoder may generate predictive image blocks for the current PU based on one or more reference blocks indicated by one or more motion vectors for the current PU.
  • the video encoder may generate a list of candidate predicted motion vectors for the current PU (213).
• the video encoder may generate the list of candidate prediction motion vectors for the current PU in various ways.
  • the video encoder may generate a list of candidate prediction motion vectors for the current PU according to one or more of the possible implementations described below with respect to FIGS. 8 to 12.
  • the list of candidate prediction motion vectors may be limited to two candidate prediction motion vectors.
  • the list of candidate prediction motion vectors may include more candidate prediction motion vectors (eg, five candidate prediction motion vectors).
  • the video encoder may generate one or more motion vector differences (MVD) for each candidate prediction motion vector in the list of candidate prediction motion vectors (214).
  • the video encoder may generate a motion vector difference for the candidate prediction motion vector by determining a difference between the motion vector indicated by the candidate prediction motion vector and a corresponding motion vector of the current PU.
• if the current PU is uni-directionally predicted, the video encoder may generate a single MVD for each candidate prediction motion vector. If the current PU is bi-directionally predicted, the video encoder may generate two MVDs for each candidate prediction motion vector.
  • the first MVD may indicate a difference between the motion vector of the candidate prediction motion vector and the list 0 motion vector of the current PU.
  • the second MVD may indicate a difference between the motion vector of the candidate prediction motion vector and the list 1 motion vector of the current PU.
  • the video encoder may select one or more of the candidate prediction motion vectors from the candidate prediction motion vector list (215).
  • the video encoder may select one or more candidate prediction motion vectors in various ways. For example, a video encoder may select a candidate prediction motion vector with an associated motion vector that matches the motion vector to be encoded with minimal error, which may reduce the number of bits required to represent the motion vector difference for the candidate prediction motion vector.
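• The selection criterion described above can be sketched as follows, choosing the candidate predictor whose motion vector yields the cheapest MVD; the bit-cost model is a crude placeholder, not the codec's entropy coder:

```python
# Illustrative AMVP predictor selection: pick the candidate whose predicted
# motion vector minimizes the (placeholder) cost of coding the resulting MVD.
def mvd_bits(mvd):
    # crude stand-in for an entropy coder: cost grows with magnitude
    return sum(abs(c).bit_length() + 1 for c in mvd)

def select_predictor(mv, candidates):
    best_idx = min(range(len(candidates)),
                   key=lambda i: mvd_bits((mv[0] - candidates[i][0],
                                           mv[1] - candidates[i][1])))
    best = candidates[best_idx]
    return best_idx, (mv[0] - best[0], mv[1] - best[1])

mv = (14, -6)
candidates = [(0, 0), (12, -5)]
print(select_predictor(mv, candidates))   # (1, (2, -1))
```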
• the video encoder may output one or more reference image indexes for the current PU, one or more candidate prediction motion vector indexes, and one or more motion vector differences for the one or more selected candidate prediction motion vectors (216).
• the video encoder may output a reference picture index for list 0 ("ref_idx_l0") or a reference picture index for list 1 ("ref_idx_l1").
• the video encoder may also output a candidate prediction motion vector index ("mvp_l0_flag") indicating the position, in the candidate prediction motion vector list, of the selected candidate prediction motion vector for the list 0 motion vector of the current PU.
• the video encoder may output a candidate prediction motion vector index ("mvp_l1_flag") indicating the position, in the candidate prediction motion vector list, of the selected candidate prediction motion vector for the list 1 motion vector of the current PU.
• the video encoder may also output the MVD of the list 0 motion vector or the list 1 motion vector for the current PU.
• the video encoder may output the reference picture index for list 0 ("ref_idx_l0") and the reference picture index for list 1 ("ref_idx_l1").
  • the video encoder may also output a candidate prediction motion vector index (“mvp_10_flag") indicating the position of the selected candidate prediction motion vector for the list 0 motion vector of the current PU in the candidate prediction motion vector list.
  • the video encoder may output a candidate prediction motion vector index (“mvp_11_flag”) indicating the position of the selected candidate prediction motion vector for the list 1 motion vector of the current PU in the candidate prediction motion vector list.
  • the video encoder may also output the MVD of the list 0 motion vector for the current PU and the MVD of the list 1 motion vector for the current PU.
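The AMVP encoder-side signalling just described can be illustrated with a small sketch: given the motion vector to encode and the candidate prediction motion vector list, the encoder computes an MVD per candidate and keeps the cheapest one. This is only an illustration under assumed names and a crude absolute-sum bit-cost stand-in, not the encoder implementation.

```python
def amvp_encode_side(mv, ref_idx, mvp_candidates):
    """Pick the candidate predictor that yields the cheapest MVD.

    mv             -- the motion vector to encode, as (x, y)
    ref_idx        -- reference picture index of the current PU
    mvp_candidates -- candidate prediction motion vector list (e.g. two entries)
    Returns the values the encoder would output: (ref_idx, mvp_flag, mvd).
    """
    best = None
    for mvp_flag, mvp in enumerate(mvp_candidates):
        mvd = (mv[0] - mvp[0], mv[1] - mvp[1])   # motion vector difference
        cost = abs(mvd[0]) + abs(mvd[1])         # crude stand-in for the MVD bit cost
        if best is None or cost < best[0]:
            best = (cost, mvp_flag, mvd)
    _, mvp_flag, mvd = best
    return ref_idx, mvp_flag, mvd

# Example with a two-entry candidate list, as in the AMVP list described above.
print(amvp_encode_side((5, -3), 0, [(4, -3), (0, 0)]))   # -> (0, 0, (1, 0))
```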
  • FIG. 7 is an exemplary flowchart of motion compensation performed by a video decoder (such as video decoder 30) in an embodiment of the present application.
  • the video decoder may receive an indication of the selected candidate prediction motion vector for the current PU (222). For example, the video decoder may receive a candidate prediction motion vector index indicating the position of the selected candidate prediction motion vector within the candidate prediction motion vector list of the current PU.
  • the video decoder may receive the first candidate prediction motion vector index and the second candidate prediction motion vector index.
  • the first candidate prediction motion vector index indicates the position of the selected candidate prediction motion vector for the list 0 motion vector of the current PU in the candidate prediction motion vector list.
  • the second candidate prediction motion vector index indicates the position of the selected candidate prediction motion vector for the list 1 motion vector of the current PU in the candidate prediction motion vector list.
  • a single syntax element may be used to identify two candidate prediction motion vector indexes.
  • the video decoder may generate a list of candidate predicted motion vectors for the current PU (224).
  • the video decoder may generate this candidate prediction motion vector list for the current PU in various ways.
  • the video decoder may use the techniques described below with reference to FIGS. 8 to 12 to generate a list of candidate prediction motion vectors for the current PU.
  • the video decoder may explicitly or implicitly set a reference image index identifying a reference image that includes a co-located PU, as described above with respect to FIG. 5.
  • the video decoder may determine the motion information of the current PU based on the motion information indicated by the one or more selected candidate prediction motion vectors in the candidate prediction motion vector list for the current PU (225). For example, if the motion information of the current PU is encoded using the merge mode, the motion information of the current PU may be the same as the motion information indicated by the selected candidate prediction motion vector. If the motion information of the current PU is encoded using the AMVP mode, the video decoder may reconstruct one or more motion vectors of the current PU using the one or more motion vectors indicated by the selected candidate prediction motion vector and the one or more MVDs indicated in the code stream.
  • the reference image index and prediction direction identifier of the current PU may be the same as the reference image index and prediction direction identifier of the one or more selected candidate prediction motion vectors.
  • the video decoder may generate a predictive image block for the current PU based on one or more reference blocks indicated by the motion information of the current PU (226).
  • FIG. 8 is an exemplary schematic diagram of a coding unit (CU) and an adjacent position image block associated with the coding unit (CU) in the embodiment of the present application, illustrating CU250 and schematic candidate prediction motion vector positions 252A to 252E associated with CU250.
  • This application may collectively refer to the candidate prediction motion vector positions 252A to 252E as the candidate prediction motion vector positions 252.
  • the candidate prediction motion vector position 252 indicates a spatial candidate prediction motion vector in the same image as the CU 250.
  • the candidate prediction motion vector position 252A is positioned to the left of CU250.
  • the candidate prediction motion vector position 252B is positioned above the CU250.
  • the candidate prediction motion vector position 252C is positioned at the upper right of CU250.
  • the candidate prediction motion vector position 252D is positioned at the lower left of CU250.
  • the candidate prediction motion vector position 252E is positioned at the upper left of the CU250.
  • FIG. 8 also schematically illustrates an embodiment of a manner in which the inter prediction module 121 and the motion compensation module 162 may generate a candidate prediction motion vector list. The embodiment will be explained below with reference to the inter prediction module 121, but it should be understood that the motion compensation module 162 may implement the same technique and thus generate the same candidate prediction motion vector list.
  • FIG. 9 is an exemplary flowchart of constructing a candidate prediction motion vector list in an embodiment of the present application.
  • the technique of FIG. 9 will be described with reference to a list including five candidate prediction motion vectors, but the techniques described herein may also be used with lists of other sizes.
  • the five candidate prediction motion vectors may each have an index (eg, 0 to 4).
  • the technique of FIG. 9 will be described with reference to a general video decoder.
  • a general video decoder may be, for example, a video encoder (such as video encoder 20) or a video decoder (such as video decoder 30).
  • the video decoder first considers four spatial candidate prediction motion vectors (902).
  • the four spatial candidate prediction motion vectors may include candidate prediction motion vector positions 252A, 252B, 252C, and 252D.
  • the four spatial candidate prediction motion vectors correspond to motion information of four PUs in the same image as the current CU (for example, CU250).
  • the video decoder may consider the four spatial candidate prediction motion vectors in the list in a particular order. For example, the candidate prediction motion vector position 252A may be considered first. If the candidate prediction motion vector position 252A is available, the candidate prediction motion vector position 252A may be assigned to index 0.
  • if the candidate prediction motion vector position 252A is not available, the video decoder may not include the candidate prediction motion vector position 252A in the candidate prediction motion vector list.
  • Candidate prediction motion vector positions may be unavailable for various reasons. For example, if the candidate prediction motion vector position is not within the current image, the candidate prediction motion vector position may not be available. In another feasible implementation, if the candidate prediction motion vector position is intra-predicted, the candidate prediction motion vector position may not be available. In another feasible implementation, if the candidate prediction motion vector position is in a slice different from the current CU, the candidate prediction motion vector position may not be available.
  • the video decoder may next consider the candidate prediction motion vector position 252B. If the candidate prediction motion vector position 252B is available and different from the candidate prediction motion vector position 252A, the video decoder may add the candidate prediction motion vector position 252B to the candidate prediction motion vector list.
  • the terms "same” and “different” refer to motion information associated with candidate predicted motion vector locations. Therefore, two candidate prediction motion vector positions are considered the same if they have the same motion information, and are considered different if they have different motion information. If the candidate prediction motion vector position 252A is not available, the video decoder may assign the candidate prediction motion vector position 252B to index 0.
  • if the candidate prediction motion vector position 252A is available, the video decoder may assign the candidate prediction motion vector position 252B to index 1. If the candidate prediction motion vector position 252B is not available or is the same as the candidate prediction motion vector position 252A, the video decoder skips the candidate prediction motion vector position 252B and does not include it in the candidate prediction motion vector list.
  • the candidate prediction motion vector position 252C is similarly considered by the video decoder for inclusion in the list. If the candidate prediction motion vector position 252C is available and not the same as the candidate prediction motion vector positions 252B and 252A, the video decoder assigns the candidate prediction motion vector position 252C to the next available index. If the candidate prediction motion vector position 252C is unavailable or is the same as at least one of the candidate prediction motion vector positions 252A and 252B, the video decoder does not include the candidate prediction motion vector position 252C in the candidate prediction motion vector list. Next, the video decoder considers the candidate prediction motion vector position 252D.
  • if the candidate prediction motion vector position 252D is available and not the same as the candidate prediction motion vector positions 252A, 252B, and 252C, the video decoder assigns the candidate prediction motion vector position 252D to the next available index. If the candidate prediction motion vector position 252D is unavailable or is the same as at least one of the candidate prediction motion vector positions 252A, 252B, and 252C, the video decoder does not include the candidate prediction motion vector position 252D in the candidate prediction motion vector list.
  • the above possible implementation describes individually considering the candidate prediction motion vectors 252A to 252D for inclusion in the candidate prediction motion vector list, but in some embodiments, all candidate prediction motion vectors 252A to 252D may first be added to the candidate prediction motion vector list, with duplicates removed from the list later.
  • the candidate prediction motion vector list may include four spatial candidate prediction motion vectors or the list may include less than four spatial candidate prediction motion vectors. If the list includes four spatial candidate prediction motion vectors (904, Yes), the video decoder considers temporal candidate prediction motion vectors (906).
  • the temporal candidate prediction motion vector may correspond to motion information of a co-located PU of a picture different from the current picture. If a temporal candidate prediction motion vector is available and different from the first four spatial candidate prediction motion vectors, the video decoder assigns the temporal candidate prediction motion vector to index 4.
  • if the temporal candidate prediction motion vector is unavailable or is the same as one of the first four spatial candidate prediction motion vectors, the video decoder does not include the temporal candidate prediction motion vector in the candidate prediction motion vector list. Therefore, after the video decoder considers the temporal candidate prediction motion vector (906), the candidate prediction motion vector list may include five candidate prediction motion vectors (the first four spatial candidate prediction motion vectors considered at block 902 and the temporal candidate prediction motion vector) or may include four candidate prediction motion vectors (the first four spatial candidate prediction motion vectors considered at block 902). If the candidate prediction motion vector list includes five candidate prediction motion vectors (908, Yes), the video decoder completes building the list.
  • if the candidate prediction motion vector list includes four candidate prediction motion vectors (908, No), the video decoder may consider the fifth spatial candidate prediction motion vector (910).
  • the fifth spatial candidate prediction motion vector may, for example, correspond to the candidate prediction motion vector position 252E. If the candidate prediction motion vector at position 252E is available and different from the candidate prediction motion vectors at positions 252A, 252B, 252C, and 252D, the video decoder may add the fifth spatial candidate prediction motion vector to the candidate prediction motion vector list and assign it to index 4.
  • if the candidate prediction motion vector at position 252E is unavailable or is the same as one of the candidate prediction motion vectors at positions 252A, 252B, 252C, and 252D, the video decoder may not include it in the candidate prediction motion vector list. So after the fifth spatial candidate prediction motion vector is considered (910), the list may include five candidate prediction motion vectors (the first four spatial candidate prediction motion vectors considered at block 902 and the fifth spatial candidate prediction motion vector considered at block 910) or may include four candidate prediction motion vectors (the first four spatial candidate prediction motion vectors considered at block 902).
  • if the candidate prediction motion vector list includes five candidate prediction motion vectors (912, Yes), the video decoder finishes generating the candidate prediction motion vector list. If the candidate prediction motion vector list includes four candidate prediction motion vectors (912, No), the video decoder adds artificially generated candidate prediction motion vectors (914) until the list includes five candidate prediction motion vectors (916, Yes).
  • if the list contains fewer than four spatial candidate prediction motion vectors (904, No), the video decoder may consider the fifth spatial candidate prediction motion vector (918).
  • the fifth spatial candidate prediction motion vector may, for example, correspond to the candidate prediction motion vector position 252E. If the candidate prediction motion vector at position 252E is available and different from the candidate prediction motion vectors already included in the candidate prediction motion vector list, the video decoder may add the fifth spatial candidate prediction motion vector to the candidate prediction motion vector list, and the fifth spatial candidate prediction motion vector is assigned to the next available index.
  • if the candidate prediction motion vector at position 252E is unavailable or is the same as one of the candidate prediction motion vectors already included in the candidate prediction motion vector list, the video decoder may not include it in the candidate prediction motion vector list.
  • the video decoder may then consider the temporal candidate prediction motion vector (920). If the temporal candidate prediction motion vector is available and different from the candidate prediction motion vectors already included in the candidate prediction motion vector list, the video decoder may add the temporal candidate prediction motion vector to the candidate prediction motion vector list, and the temporal candidate prediction motion vector is assigned to the next available index. If the temporal candidate prediction motion vector is unavailable or is the same as one of the candidate prediction motion vectors already included in the candidate prediction motion vector list, the video decoder may not include the temporal candidate prediction motion vector in the candidate prediction motion vector list.
  • if the candidate prediction motion vector list includes five candidate prediction motion vectors (922, Yes), the video decoder finishes generating the candidate prediction motion vector list. If the candidate prediction motion vector list includes fewer than five candidate prediction motion vectors (922, No), the video decoder adds artificially generated candidate prediction motion vectors (914) until the list includes five candidate prediction motion vectors (916, Yes).
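As a rough illustration of the list-building order of FIG. 9, the sketch below assembles a five-entry list from the four spatial candidates, the temporal candidate, the fifth spatial candidate, and artificial fill candidates. The motion-information representation (plain tuples), the equality-based duplicate check, and the helper names are assumptions made for this example; it is not the codec implementation.

```python
MAX_CANDS = 5

def build_candidate_list(spatial_4, temporal, spatial_5th, artificial):
    """Assemble a candidate prediction motion vector list following FIG. 9.

    spatial_4   -- candidates for positions 252A-252D (None when unavailable)
    temporal    -- temporal candidate or None
    spatial_5th -- candidate for position 252E or None
    artificial  -- iterable of artificially generated fill candidates
    """
    cands = []

    def try_add(c):
        if c is not None and c not in cands and len(cands) < MAX_CANDS:
            cands.append(c)

    for c in spatial_4:            # block 902: positions 252A, 252B, 252C, 252D
        try_add(c)
    if len(cands) == 4:            # block 904, Yes: consider the temporal candidate
        try_add(temporal)          # block 906
    if len(cands) < MAX_CANDS:     # blocks 910 / 918 / 920
        try_add(spatial_5th)
        try_add(temporal)          # no effect if it was already added
    for c in artificial:           # block 914: fill until five candidates
        if len(cands) >= MAX_CANDS:
            break
        try_add(c)
    return cands
```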
  • an additional merge candidate prediction motion vector may be artificially generated after the spatial candidate prediction motion vectors and the temporal candidate prediction motion vector, so as to fix the size of the merge candidate prediction motion vector list to a specified number of merge candidate prediction motion vectors (for example, five, as in the preceding possible implementation of FIG. 9).
  • the additional merge candidate prediction motion vectors may include, for example, a combined bi-predictive merge candidate prediction motion vector (candidate prediction motion vector 1), a scaled bi-predictive merge candidate prediction motion vector (candidate prediction motion vector 2), and a zero-vector merge/AMVP candidate prediction motion vector (candidate prediction motion vector 3).
  • FIG. 10 is an exemplary schematic diagram of adding a combined candidate motion vector to a merge mode candidate prediction motion vector list in an embodiment of the present application.
  • the combined bi-directional predictive merge candidate prediction motion vector may be generated by combining the original merge candidate prediction motion vector.
  • two candidate prediction motion vectors (which have mvL0 and refIdxL0 or mvL1 and refIdxL1) among the original candidate prediction motion vectors may be used to generate a bidirectional predictive merge candidate prediction motion vector.
  • two candidate prediction motion vectors are included in the original merge candidate prediction motion vector list.
  • the prediction type of one candidate prediction motion vector is List 0 unidirectional prediction
  • the prediction type of the other candidate prediction motion vector is List 1 unidirectional prediction.
  • mvL0_A and ref0 are picked from list 0
  • mvL1_B and ref0 are picked from list 1
  • a bidirectional predictive merge candidate prediction motion vector (which has mvL0_A and ref0 in list 0 and mvL1_B and ref0 in list 1) can be generated and checked to determine whether it is different from the candidate prediction motion vectors already included in the candidate prediction motion vector list. If it is different, the video decoder may include the bidirectional predictive merge candidate prediction motion vector in the candidate prediction motion vector list.
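A compact sketch of this combined bi-predictive candidate follows. The dictionary layout of a candidate is an assumption for the example; the point is simply that the list 0 part of one uni-directional candidate is paired with the list 1 part of another and pruned against the existing list.

```python
def add_combined_bipred_candidate(cand_list0_uni, cand_list1_uni, merge_list):
    """Combine a list-0 uni-directional candidate with a list-1 uni-directional
    candidate into one bi-predictive merge candidate, as in FIG. 10."""
    new_cand = {
        "mvL0": cand_list0_uni["mvL0"], "refIdxL0": cand_list0_uni["refIdxL0"],  # e.g. mvL0_A, ref0
        "mvL1": cand_list1_uni["mvL1"], "refIdxL1": cand_list1_uni["refIdxL1"],  # e.g. mvL1_B, ref0
    }
    if new_cand not in merge_list:   # include it only if it is not a duplicate
        merge_list.append(new_cand)
    return merge_list
```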
  • FIG. 11 is an exemplary schematic diagram of adding a scaled candidate motion vector to a merge mode candidate prediction motion vector list in an embodiment of the present application.
  • the scaled bi-directional predictive merge candidate prediction motion vector may be generated by scaling the original merge candidate prediction motion vector.
  • a candidate prediction motion vector (which may have mvLX and refIdxLX) from the original candidate prediction motion vector may be used to generate a bidirectional predictive merge candidate prediction motion vector.
  • two candidate prediction motion vectors are included in the original merge candidate prediction motion vector list.
  • the prediction type of one candidate prediction motion vector is List 0 unidirectional prediction
  • the prediction type of the other candidate prediction motion vector is List 1 unidirectional prediction.
  • mvL0_A and ref0 may be picked from list 0, and ref0 may be copied to the reference index ref0′ of list 1. Then, mvL0′_A may be calculated by scaling mvL0_A with ref0 and ref0′. The scaling may depend on the POC (picture order count) distance.
  • a bi-directional predictive merge candidate prediction motion vector (which has mvL0_A and ref0 in list 0 and mvL0′_A and ref0′ in list 1) can be generated and checked to determine whether it is a duplicate. If it is not a duplicate, it can be added to the merge candidate prediction motion vector list.
  • FIG. 12 is an exemplary schematic diagram of adding a zero motion vector to a merge mode candidate prediction motion vector list in an embodiment of the present application.
  • the zero vector merge candidate prediction motion vector may be generated by combining the zero vector with a reference index that can be referred to. If the zero vector candidate prediction motion vector is not duplicated, it can be added to the merge candidate prediction motion vector list. For each generated merge candidate prediction motion vector, the motion information may be compared with the motion information of the previous candidate prediction motion vector in the list.
  • the pruning operation may include comparing one or more new candidate prediction motion vectors with the candidate prediction motion vectors already in the candidate prediction motion vector list and not adding any new candidate prediction motion vector that is a duplicate of a candidate prediction motion vector already in the list.
  • the pruning operation may include adding one or more new candidate prediction motion vectors to a list of candidate prediction motion vectors and removing duplicate candidate prediction motion vectors from the list later.
  • the first preset algorithm and the second preset algorithm in this application may include one or more of the techniques described above.
  • Inter-picture prediction uses the temporal correlation between pictures to obtain motion-compensated prediction (MCP) for image sample blocks.
  • the video picture is divided into rectangular blocks. Assuming that a block moves uniformly and the moving object is larger than one block, for each block, a corresponding block in a previously decoded picture can be found as a prediction value.
  • the displacement of the current block relative to the corresponding block in the previously decoded picture is indicated by a motion vector (Δx, Δy), where Δx specifies the horizontal displacement and Δy the vertical displacement relative to the position of the current block.
  • the motion vector (Δx, Δy) may have fractional sample precision to more accurately capture the movement of the underlying object.
  • when the motion vector has fractional sample precision, interpolation is applied to the reference picture to obtain the prediction signal.
  • the previously decoded picture is called a reference picture and is indicated by a reference index Δt into a reference picture list.
  • these translational motion model parameters, namely the motion vector and the reference index, are further referred to as motion data.
  • Modern video coding standards allow two types of inter-picture prediction, namely unidirectional prediction and bidirectional prediction.
  • bidirectional prediction uses two sets of motion data (Δx0, Δy0, Δt0 and Δx1, Δy1, Δt1) to generate two MCPs (possibly from different pictures), which are then combined to obtain the final MCP.
  • the reference pictures that can be used in bidirectional prediction are stored in two separate lists, List 0 and List 1.
  • to limit the memory bandwidth in slices that allow bidirectional prediction, the HEVC standard restricts PUs with 4×8 and 8×4 luma prediction blocks to unidirectional prediction.
  • Motion data is obtained at the encoder using a motion estimation process. Motion estimation is not specified in the video standard, so different encoders can use different tradeoffs of complexity and quality in their implementation.
  • the motion data of one block is related to neighboring blocks.
  • motion data is not directly encoded in the code stream, but predictively encoded based on neighboring motion data.
  • in HEVC, two concepts are used for this: advanced motion vector prediction (AMVP) and inter-predicted block merging.
  • inter-predicted block merging obtains all motion data of a block from neighboring blocks, thereby replacing the direct and skip modes of H.264/AVC.
  • in HEVC, the motion vector is encoded as a difference from a so-called motion vector predictor (MVP), separately for the horizontal (x) and vertical (y) components.
  • the calculation of the two motion vector difference (MVD) components is shown in equations (1.1) and (1.2).
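Equations (1.1) and (1.2) are not reproduced in this extract; they are simply the component-wise differences between the motion vector and its predictor, which can be reconstructed as follows (the numbering follows the text):

```latex
\mathrm{MVD}_x = \mathrm{MV}_x - \mathrm{MVP}_x \qquad (1.1)
\mathrm{MVD}_y = \mathrm{MV}_y - \mathrm{MVP}_y \qquad (1.2)
```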
  • the motion vector of the current block is usually related to the motion vectors of neighboring blocks in the current picture or in an earlier coded picture. This is because neighboring blocks may correspond to the same moving object with similar motion, and the motion of the object is unlikely to change suddenly over time. Therefore, using a motion vector in a neighboring block as a prediction value reduces the magnitude of a signaled motion vector difference.
  • the MVP is usually obtained from already decoded motion vectors of spatially neighboring blocks or of temporally neighboring blocks in the co-located picture. In some cases, the zero motion vector can also be used as the MVP. In H.264/AVC, this is done by taking the component-wise median of three spatially adjacent motion vectors. With this approach, the predictor does not need to be signaled.
  • the temporal MVP from co-located pictures is only considered in the so-called temporal direct mode of H.264/AVC.
  • the H.264/AVC direct mode is also used to obtain motion data other than motion vectors.
  • in HEVC, the method of implicitly obtaining the MVP is replaced by a technique known as motion vector competition, which explicitly signals which MVP from an MVP list is used for motion vector derivation.
  • the variable-size quad-tree coding block structure in HEVC can result in one block having several neighboring blocks with motion vectors that are potential MVP candidates.
  • the initial design of advanced motion vector prediction (AMVP) included five MVPs from three different categories of predictors: three motion vectors from spatial neighbors, the median of the three spatial predictors, and a scaled motion vector from a co-located, temporally neighboring block.
  • the list of predicted values is modified by reordering to place the most probable motion prediction values in the first position and by removing redundant candidates to ensure minimal signaling overhead.
  • the final design of the AMVP candidate list construction includes the following MVP candidates: a. a maximum of two spatial candidate MVPs obtained from five spatially neighboring blocks; b. one temporal candidate MVP obtained from two temporally co-located blocks when both spatial candidate MVPs are unavailable or identical; c. a zero motion vector when the spatial candidates, the temporal candidate, or both are unavailable.
  • two spatial MVP candidates A and B are obtained from five spatially neighboring blocks.
  • the positions of spatial candidate blocks are the same.
  • for candidate A, the motion data of the two blocks A0 and A1 in the lower-left corner is considered in a two-pass approach.
  • in the first pass, it is checked whether any candidate block contains a reference index equal to the reference index of the current block.
  • the first motion vector found is taken as candidate A.
  • when the reference picture of the candidate block differs from that of the current block, the motion vector needs to be scaled according to the temporal distance between the candidate reference picture and the current reference picture.
  • Equation (1.3) shows how to scale the candidate motion vector mvcand according to the scaling factor.
  • the ScaleFactor is calculated based on the time distance between the current picture and the reference picture of the candidate block td and the time distance between the current picture and the reference picture of the current block tb.
  • the temporal distance is expressed as a difference between picture order count (POC) values, which define the display order of the pictures.
  • the scaling operation is basically the same as the scheme used for the temporal direct mode in H.264/AVC. This decomposition allows the ScaleFactor to be pre-calculated at the slice level, since it depends only on the reference picture list structure signaled in the slice header. It should be noted that MV scaling is performed only when the current reference picture and the candidate reference picture are both short-term reference pictures.
  • for the temporal candidate, the parameter td is defined as the POC difference between the co-located picture and the reference picture of the co-located candidate block.
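Equation (1.3) itself is not reproduced in this extract. As an illustration of this POC-distance scaling, the sketch below follows the fixed-point form used in HEVC, with tb and td as defined above; it is included only as an example, not as the normative derivation.

```python
def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def scale_mv(mv_cand, tb, td):
    """Scale a candidate motion vector by the ratio of POC distances tb/td.

    mv_cand -- (x, y) candidate motion vector
    tb      -- POC distance between the current picture and the current reference picture
    td      -- POC distance associated with the candidate (assumed positive and non-zero here)
    """
    tx = (16384 + (abs(td) >> 1)) // td
    scale_factor = clip3(-4096, 4095, (tb * tx + 32) >> 6)

    def scale_comp(v):
        s = scale_factor * v
        sign = -1 if s < 0 else 1
        return clip3(-32768, 32767, sign * ((abs(s) + 127) >> 8))

    return (scale_comp(mv_cand[0]), scale_comp(mv_cand[1]))

# Example: candidate MV (16, -8) with tb = 1 and td = 2 is roughly halved.
print(scale_mv((16, -8), 1, 2))   # -> (8, -4)
```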
  • for candidate B, the candidates B0 to B2 are checked sequentially in the same way as A0 and A1 were checked in the first pass.
  • the second pass is performed only when the blocks A0 and A1 do not contain any motion information, that is, when they are unavailable or coded using intra-picture prediction.
  • if a non-scaled candidate B is found in this way, candidate A is set equal to that candidate, and candidate B is set equal to a second, non-scaled or scaled variant of candidate B.
  • the second pass searches the unscaled and scaled MVs obtained from the candidates B0 to B2. Overall, this design allows A0 and A1 to be processed independently of B0, B1, and B2.
  • regarding the temporal motion vector predictor (TMVP): HEVC provides, for each picture, the possibility of indicating which reference picture is considered the co-located picture. This is done by signaling the co-located reference picture list and the reference picture index in the slice header and by requiring that these syntax elements in all slices of a picture specify the same reference picture.
  • since temporal MVP candidates introduce additional dependencies, their use may need to be disabled for error-robustness reasons.
  • in H.264/AVC, the temporal direct mode can be disabled for bi-predictive slices in the slice header (direct_spatial_mv_pred_flag).
  • the HEVC syntax extends this signaling by allowing TMVP to be disabled at the sequence level or at the picture level (sps/slice_temporal_mvp_enabled_flag).
  • inter_pred_idc signals whether the reference list 0, 1 or both are used.
  • for each used reference picture list, the corresponding reference picture (Δt) is signaled by an index into the reference picture list, ref_idx_l0/1, and the MV (Δx, Δy) is represented by an index into the MVP candidate list, mvp_l0/1_flag, together with its MVD.
  • a newly introduced flag mvd_l1_zero_flag in the slice header indicates whether the MVD of the second reference picture list is equal to zero and is therefore not signaled in the code stream.
  • the AMVP list contains only the motion vectors of one reference list, and the merge candidate contains all motion data, including whether to use one or two reference picture lists and information about the reference index and motion vector of each list.
  • the merge candidate list is constructed based on the following candidates: a. Up to four spatial merge candidates obtained from five spatially neighboring blocks; b. One temporal merge candidate obtained from two temporally co-located blocks; c. Additional merge candidates containing combined bi-prediction candidates and zero motion vector candidates.
  • the first candidates in the merge candidate list are the spatial neighbors. By checking A1, B1, B0, A0, and B2 in that order, a maximum of four candidates can be inserted into the merge list.
  • redundancy checks can be divided into two categories serving two different purposes: a. avoiding candidates with redundant motion data in the list; b. preventing the merging of two partitions that could be expressed by other means, which would create redundant syntax.
  • when N is the number of spatial merge candidates, a complete redundancy check would consist of N·(N−1)/2 motion data comparisons.
  • in the case of five potential spatial merge candidates, ten motion data comparisons would be required to ensure that all candidates in the merge list have different motion data.
  • in the development of HEVC, the check for redundant motion data was reduced to a subset, so that the comparison logic is significantly reduced while the coding efficiency is maintained.
  • no more than two comparisons are performed for each candidate, resulting in a total of five comparisons. Given the order of ⁇ A1, B1, B0, A0, B2 ⁇ , B0 checks only B1, A0 checks only A1, and B2 checks only A1 and B1.
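The reduced comparison pattern can be sketched as follows; the motion-data representation and the helper names are assumptions for illustration, but the compared pairs mirror the ones listed above.

```python
# Which earlier positions each spatial candidate is compared against
# (order of consideration: A1, B1, B0, A0, B2).
PRUNE_PAIRS = {
    "A1": [],
    "B1": ["A1"],
    "B0": ["B1"],
    "A0": ["A1"],
    "B2": ["A1", "B1"],
}

def spatial_merge_candidates(motion_at):
    """motion_at maps a position name to its motion data, or None if unavailable.
    Returns up to four spatial merge candidates after the reduced redundancy check."""
    chosen = {}
    for pos in ["A1", "B1", "B0", "A0", "B2"]:
        md = motion_at.get(pos)
        if md is None:
            continue
        if any(md == chosen.get(p) for p in PRUNE_PAIRS[pos]):
            continue                      # duplicate of a compared neighbor: skip it
        chosen[pos] = md
        if len(chosen) == 4:              # at most four spatial merge candidates
            break
    return list(chosen.values())
```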
  • for the partition redundancy check, consider the case in which the bottom PU and the top PU of a 2N×N partition would be merged by selecting candidate B1. This would result in one CU having two PUs with the same motion data, which could equally be signaled as a 2N×2N CU. Overall, this check applies to all second PUs of the rectangular and asymmetric partitions 2N×N, 2N×nU, 2N×nD, N×2N, nR×2N, and nL×2N. It should be noted that for the spatial merge candidates, only the redundancy check is performed, and the motion data is copied from the candidate blocks as it is. Therefore, no motion vector scaling is needed here.
  • the motion vector of the temporal merge candidate is obtained in the same way as TMVP. Since the merge candidate includes all motion data and TMVP is only one motion vector, the acquisition of the entire motion data depends only on the type of slice.
  • for bi-predictive slices, a TMVP is obtained for each reference picture list.
  • depending on the availability of the TMVP for each list, the prediction type is set to bi-directional prediction or to the list for which a TMVP is available. All associated reference picture indexes are set equal to zero. Therefore, for uni-directionally predicted slices, only the TMVP of list 0 is obtained together with a reference picture index equal to zero.
  • the length of the merge candidate list is fixed. After the spatial and temporal merge candidates have been added, it may happen that the list does not yet have the fixed length. To compensate for the coding efficiency loss that would occur with non-length-adaptive list index signaling, additional candidates are generated. Depending on the type of slice, up to two types of candidates are used to completely populate the list: a. combined bi-directional prediction candidates; b. zero motion vector candidates.
  • for the combined bi-directional prediction candidates, another candidate can be generated from two existing candidates. This is done by copying Δx0, Δy0, Δt0 from one candidate, for example the first candidate, and Δx1, Δy1, Δt1 from another candidate, for example the second candidate.
  • Different combinations are predefined and given in Table 1.1.
  • zero motion vector candidates are calculated to make the list complete. All zero motion vector candidates have one zero displacement motion vector for one-way prediction slices and two zero displacement motion vectors for two-way prediction slices.
  • the reference index is set equal to zero and incremented by one for each additional candidate until the maximum number of reference indexes is reached. If this is the case, and there are other candidates missing, these candidates are created using a reference index equal to zero. For all other candidates, no redundancy check is performed, as the results show that omitting these checks does not cause a loss of coding efficiency.
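The zero-motion-vector filling described above can be sketched as follows; the candidate layout is an illustrative assumption.

```python
def fill_with_zero_mv(merge_list, max_cands, num_ref_idx, bi_predictive):
    """Append zero-motion-vector candidates until the merge list has max_cands entries."""
    n_missing = max_cands - len(merge_list)
    for i in range(n_missing):
        # the reference index counts up until the number of reference indexes is
        # reached; any further candidates reuse reference index zero
        ref_idx = i if i < num_ref_idx else 0
        cand = {"mvL0": (0, 0), "refIdxL0": ref_idx}
        if bi_predictive:
            cand.update({"mvL1": (0, 0), "refIdxL1": ref_idx})
        merge_list.append(cand)          # no redundancy check for these candidates
    return merge_list
```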
  • merge_flag indicates that the block merge is used to obtain motion data.
  • merge_idx further determines the candidates in the merge list that provide all the motion data required by the MCP.
  • the number of candidates in the merge list is also signaled in the slice header. Since the default value is five, it is expressed as the difference from five (five_minus_max_num_merge_cand). In this way, the use of five candidates is signaled with the short codeword for 0, while the use of only one candidate is signaled with the longer codeword for 4.
  • regarding the impact on the merge candidate list construction process: the overall process remains the same, but it terminates once the list contains the maximum number of merge candidates.
  • in the initial design, the maximum value for merge index coding was given by the number of spatial and temporal candidates available in the list.
  • the index can be efficiently encoded as a flag when, for example, only two candidates are available.
  • however, in order to parse the merge index, the entire merge candidate list would have to be constructed to know the actual number of candidates. Assuming neighboring blocks are unavailable due to transmission errors, it would no longer be possible to parse the merge index.
  • the key application of the block merge concept in HEVC is the combination with the skip mode.
  • the skip mode is used to indicate blocks for which the motion data is inferred rather than explicitly signaled and for which the prediction residual is zero, that is, no transform coefficients are transmitted.
  • the skip_flag is signaled at the beginning of each CU in an inter-picture prediction slice, and it implies the following: a. the CU contains only one PU (2N×2N partition type); b. the merge mode is used to obtain the motion data (merge_flag is equal to 1); c. no residual data is present in the code stream.
  • a parallel merge estimation level was introduced in HEVC, indicating a region in which merge candidate lists can be derived independently by checking whether a candidate block is located in that merge estimation region (MER). A candidate block that is in the same MER is not included in the merge candidate list. Therefore, its motion data does not need to be available at the time of list construction.
  • when this level is, for example, 32, all prediction units in a 32×32 region can build the merge candidate list in parallel, because all merge candidates that are in the same 32×32 MER are not inserted into the list. All potential merge candidates for the first PU0 are available because they are outside the first 32×32 MER.
  • the merge candidate list for PU 2-6 cannot contain motion data from these PUs.
  • the merged list of PU5 consists of only temporal candidates (if available) and zero MV candidates.
  • the parallel merge estimation level is adaptive, and is signaled as log2_parallel_merge_level_minus2 in the picture parameter set.
  • each CU can have at most one set of motion parameters for each prediction direction.
  • two sub-CU-level motion vector prediction methods are considered by dividing a large CU into sub-CUs and obtaining motion information of all the sub-CUs of the large CU.
  • the alternative temporal motion vector prediction (ATMVP) method allows each CU to extract multiple sets of motion information from multiple blocks smaller than the current CU in the co-located reference picture.
  • in the spatial-temporal motion vector prediction (STMVP) method, the motion vectors of the sub-CUs are derived recursively by using the temporal motion vector predictor and the spatially neighboring motion vectors.
  • in the ATMVP method, temporal motion vector prediction (TMVP) is modified by extracting multiple sets of motion information (including motion vectors and reference indexes) from blocks smaller than the current CU.
  • a sub-CU is a square N×N block (N is set to 4 by default).
  • ATMVP predicts the motion vector of a sub-CU within a CU in two steps.
  • the first step is to use a so-called time vector to identify the corresponding block in the reference picture.
  • the reference picture is called a motion source picture.
  • the second step is to divide the current CU into sub-CUs, and obtain the motion vector and reference index of each sub-CU from the block corresponding to each sub-CU.
  • the reference picture and the corresponding block are determined by the motion information of the spatially neighboring blocks of the current CU.
  • the first merge candidate in the merge candidate list of the current CU is used.
  • the first available motion vector and its associated reference index are set to be the temporal vector and the index of the motion source picture. In this way, in ATMVP, the corresponding block may be identified more accurately than in TMVP, where the corresponding block (sometimes called a co-located block) is always in the lower-right or center position relative to the current CU.
  • in the second step, the corresponding block of each sub-CU is identified by the temporal vector in the motion source picture, by adding the temporal vector to the coordinates of the current CU. For each sub-CU, the motion information of its corresponding block (the smallest motion grid covering the center sample) is used to derive the motion information of the sub-CU. After the motion information of the corresponding N×N block has been identified, it is converted into the motion vector and reference index of the current sub-CU in the same way as in the TMVP of HEVC, where motion scaling and other procedures apply.
  • the decoder checks whether the low-delay condition is satisfied (that is, the POCs of all reference pictures of the current picture are smaller than the POC of the current picture) and may then use the motion vector MVx (the motion vector corresponding to reference picture list X) to predict the motion vector MVy of each sub-CU (where X is equal to 0 or 1 and Y is equal to 1−X).
  • the motion vectors of the sub-CUs are recursively obtained in the raster scan order.
  • the motion derivation of sub-CU A starts by identifying its two spatial neighbors.
  • the first neighbor is the N×N block above sub-CU A (block c). If this block c is unavailable or intra-coded, the other N×N blocks above sub-CU A are checked (from left to right, starting at block c).
  • the second neighbor is the block to the left of sub-CU A (block b). If block b is unavailable or intra-coded, the other blocks to the left of sub-CU A are checked (from top to bottom, starting at block b). For each list, the motion information obtained from the neighboring blocks is scaled to the first reference frame of that list.
  • the temporal motion vector predictor (TMVP) of the sub-block A is obtained by following the same process as the TMVP specified in HEVC.
  • the motion information of the co-located block at position D is extracted and scaled accordingly.
  • all available motion vectors (up to 3) of each reference list are averaged separately.
  • the average motion vector is assigned as the motion vector of the current sub-CU.
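The per-sub-CU combination described above, two spatial neighbors plus a TMVP averaged per reference list, can be sketched roughly as follows; the data layout is an assumption, and the neighbors are assumed to have already been scaled to the first reference frame.

```python
def stmvp_for_sub_cu(mv_above, mv_left, mv_tmvp):
    """Average the available motion vectors (up to three) for one sub-CU.

    Each argument is an (x, y) motion vector already scaled to the first
    reference frame of the list under consideration, or None if unavailable.
    """
    available = [mv for mv in (mv_above, mv_left, mv_tmvp) if mv is not None]
    if not available:
        return None
    avg_x = sum(mv[0] for mv in available) // len(available)
    avg_y = sum(mv[1] for mv in available) // len(available)
    return (avg_x, avg_y)

# Example: the above neighbor and the TMVP are available, the left one is not.
print(stmvp_for_sub_cu((4, 2), None, (8, -2)))   # -> (6, 0)
```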
  • the sub-CU mode is enabled as a further merge candidate, and no additional syntax elements are required to signal the mode.
  • Two additional merge candidates are added to the merge candidate list of each CU to represent the ATMVP mode and the STMVP mode. If the sequence parameter set indicates that ATMVP and STMVP are enabled, a maximum of seven merge candidates are used.
  • the encoding logic of the other merge candidates is the same as that of the merge candidates in HM, which means that for each CU in a P or B slice, two more RD checks are required for two other merge candidates.
  • the affine motion field of a block is described by two control point motion vectors.
  • the motion vector field (MVF) of a block is described by the following equation:
  • sub-block-based affine transformation prediction is applied.
  • the sub-block size M×N is derived as in equation (1.7), where MvPre is the motion vector fractional precision (for example, 1/16), and (v2x, v2y) is the motion vector of the lower-left control point, calculated according to equation (1.6).
  • M and N should be adjusted downwards as necessary to make them divisors of w and h, respectively.
  • the motion vector of the center sample of each sub-block is calculated according to equation (1.6) and rounded to a fractional accuracy of 1/16.
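Equations (1.6) and (1.7) are not reproduced in this extract. The sketch below uses the common four-parameter affine motion-vector-field form, with control-point motion vectors v0 at the top-left and v1 at the top-right corner of a block of width w, to compute per-sub-block motion vectors at their center samples; it is an illustration under those assumptions, not the normative derivation.

```python
def affine_subblock_mvs(v0, v1, w, h, sub_w=4, sub_h=4):
    """Four-parameter affine motion vector field sampled at sub-block centers.

    v0, v1 -- control-point MVs at the top-left and top-right corners, (x, y)
    w, h   -- block width and height; sub_w, sub_h -- sub-block size
    """
    mvs = {}
    for by in range(0, h, sub_h):
        for bx in range(0, w, sub_w):
            x = bx + sub_w / 2.0          # center sample of the sub-block
            y = by + sub_h / 2.0
            vx = (v1[0] - v0[0]) / w * x - (v1[1] - v0[1]) / w * y + v0[0]
            vy = (v1[1] - v0[1]) / w * x + (v1[0] - v0[0]) / w * y + v0[1]
            mvs[(bx, by)] = (round(vx * 16) / 16.0,   # keep 1/16-sample precision
                             round(vy * 16) / 16.0)
    return mvs

# Example: a 16x16 block whose right edge moves one sample further than its left edge.
print(affine_subblock_mvs((0.0, 0.0), (1.0, 0.0), 16, 16)[(12, 8)])   # -> (0.875, 0.625)
```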
  • AF_INTER mode can be applied.
  • the affine flag in the CU hierarchy is signaled in the code stream to indicate whether to use the AF_INTER mode.
  • the motion vector from the neighboring block is scaled according to the reference list and the relationship between the POC referenced by the neighboring block, the POC referenced by the current CU and the POC of the current CU.
  • the method of selecting v1 from the adjacent blocks D and E is similar. If the number of candidates in the list is less than 2, the list is padded by duplicating motion vector pairs composed of each of the AMVP candidates. When the number of candidates is greater than 2, the candidates are first sorted according to the consistency of the neighboring motion vectors (the similarity of the two motion vectors in a candidate pair), and only the first two candidates are retained.
  • the RD cost check is used to determine which motion vector is selected for the candidate as the control point motion vector prediction (CPMVP) of the current CU. And an index indicating the position of the CPMVP in the candidate list is signaled in the code stream. The difference between CPMV and CPMVP is signaled in the code stream.
  • when a CU is coded in AF_MERGE mode, the first block coded in affine mode is obtained from the valid neighboring reconstructed blocks.
  • the selection order of the candidate blocks is from left, top, top right, bottom left to top left. If the adjacent lower-left block A is coded in affine mode, the motion vectors v2, v3, and v4 of the upper-left, upper-right, and lower-left corners of the CU containing block A are obtained, and the motion vector v0 of the upper-left corner of the current CU is calculated from v2, v3, and v4. Next, the motion vector v1 of the upper-right corner of the current CU is calculated.
  • the affine flag is signaled in the code stream.
  • the pattern matched motion vector derivation (PMMVD) mode is based on Frame-Rate Up Conversion (FRUC) technology. In this mode, the motion information of a block is not signaled but is derived at the decoder.
  • when the CU's merge flag is true, its FRUC flag is signaled. When the FRUC flag is false, the merge index is signaled and the normal merge mode is used. When the FRUC flag is true, another FRUC mode flag is signaled to indicate which method (bilateral matching or template matching) will be used to derive the motion information of the block.
  • the decision on whether to use the FRUC merge mode for the CU is based on the RD cost selection made for normal merge candidates. These two matching modes (bilateral matching and template matching) are checked against the CU by using RD cost selection. The one with the lowest cost is further compared with other CU modes. If the FRUC matching mode is the most efficient mode, the CU's FRUC flag is set to true and the relevant matching mode is used.
  • CU-level motion search is performed first, and then sub-CU-level motion refinement is performed.
  • the initial motion vector of the entire CU is obtained based on bilateral matching or template matching.
  • a MV candidate list is generated, and a candidate that minimizes the matching cost is selected as a starting point for further CU level refinement.
  • a local search based on bilateral matching or template matching around the starting point is performed, and the MV that minimizes the matching cost is taken as the MV of the entire CU.
  • the obtained CU motion vector is used as a starting point to further refine the motion information at the sub-CU level.
  • for a W×H CU, the following derivation process is performed.
  • in the first stage, the MV of the entire W×H CU is derived.
  • in the second stage, the CU is further divided into M×M sub-CUs.
  • D is the predefined splitting depth, which is set to 3 by default in JEM. The MV of each sub-CU is then derived.
  • the bilateral matching is used to obtain the motion information of the current CU by finding the closest match between two blocks along the motion trajectory of the current CU in two different reference pictures.
  • the motion vectors MV0 and MV1 pointing to two reference blocks should be proportional to the time distance between the current picture and the two reference pictures, that is, TD0 and TD1.
  • in the special case where the temporal distances from the current picture to the two reference pictures are the same, the bilateral matching becomes a mirror-based bidirectional MV.
  • in the template matching merge mode, the encoder can choose among unidirectional prediction from list0, unidirectional prediction from list1, or bidirectional prediction for a CU. The selection is based on the template matching cost, as follows:
  • costBi <= factor * min(cost0, cost1)
  • cost0 is the SAD matched by the list0 template
  • cost1 is the SAD matched by the list1 template
  • costBi is the SAD matched by the bidirectional prediction template.
  • the value of the factor is equal to 1.25, which means that the selection process is biased towards bidirectional prediction.
  • the inter prediction direction selection is only applicable to the CU-level template matching process.
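A small sketch of this selection rule follows. What happens when the inequality does not hold is an assumption completed from the costs defined above: the cheaper of the two uni-directional template matches is chosen.

```python
def choose_inter_direction(cost0, cost1, cost_bi, factor=1.25):
    """Choose the inter prediction direction from template matching costs.

    cost0   -- SAD of the list0 template match
    cost1   -- SAD of the list1 template match
    cost_bi -- SAD of the bidirectional-prediction template match
    """
    if cost_bi <= factor * min(cost0, cost1):
        return "bi-prediction"                 # factor = 1.25 biases towards bi-prediction
    return "uni-prediction, list0" if cost0 <= cost1 else "uni-prediction, list1"

print(choose_inter_direction(cost0=100, cost1=90, cost_bi=110))   # -> 'bi-prediction'
```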
  • template matching is used to derive the motion information of the current CU by finding the closest match between a template in the current picture (the top and/or left neighboring blocks of the current CU) and a block in the reference picture (of the same size as the template).
  • template matching is also applicable to the AMVP mode: a new candidate is derived with the template matching method. If the newly derived candidate is different from the first existing AMVP candidate, it is inserted at the very beginning of the AMVP candidate list, and the list size is then set to two (which means that the second existing AMVP candidate is removed). When applied to the AMVP mode, only the CU-level search is applied.
  • the MV candidate set at the CU level includes: a. the original AMVP candidates if the current CU is in AMVP mode; b. all merge candidates; c. several MVs in the interpolated MV field; d. the top and left neighboring motion vectors.
  • the interpolated MV field mentioned above is generated for the whole picture, before the picture is encoded, based on unilateral ME.
  • the motion field can then be used later as CU-level or sub-CU-level MV candidates.
  • the motion field of each reference picture in both reference lists is traversed at the 4×4 block level. For each 4×4 block, if the motion associated with the block passes through a 4×4 block in the current picture and that block has not been assigned any interpolated motion, the motion of the reference block is scaled to the current picture (in the same way as the MV scaling of TMVP in HEVC), and the scaled motion is assigned to the block in the current frame. If no scaled MV is assigned to a 4×4 block, the motion of that block is marked as unavailable in the interpolated motion field.
  • each valid MV of the merge candidate is used as an input to generate a MV pair, assuming bilateral matching.
  • one valid MV for the merge candidate is (MVa, refa) at reference list A.
  • the reference picture refb of its paired bilateral MV is found in another reference list B, so that refa and refb are on different sides of the current picture in time. If such refb is not available in reference list B, refb is determined to be a different reference from refa, and its time distance to the current picture is the minimum value in list B.
  • MVb is obtained by scaling MVa based on the temporal distances between the current picture and refa and between the current picture and refb.
  • MVs from the interpolated MV field are also added to the CU-level candidate list. More specifically, interpolation MVs at positions (0,0), (W / 2,0), (0, H / 2), and (W / 2, H / 2) of the current CU are added.
  • the original AMVP candidate is also added to the CU-level MV candidate set.
  • the MV candidates set at the sub-CU level include: a. MVs determined from the CU level search; b. Top, left, top left, and top right adjacent MVs; c. Scaled versions of co-located MVs from reference pictures; d. Up to 4 ATMVP candidates; e. Up to 4 STMVP candidates.
  • a scaled MV from the reference picture is obtained as follows. Iterate through all reference pictures in both lists. The MV at the co-located position of the sub-CU in the reference picture is scaled to a reference of the starting CU-level MV.
  • ATMVP and STMVP candidates are limited to the first four.
  • Motion vectors can be refined by different methods combining different inter prediction modes.
  • MV refinement is a pattern-based MV search with criteria for bilateral matching costs or template matching costs.
  • two search patterns are supported for MV refinement at the CU level and the sub-CU level: unrestricted center-biased diamond search (UCBDS) and adaptive cross search.
  • the MV is first searched directly at quarter luma sample MV accuracy, followed by one-eighth luma sample MV refinement.
  • the search range for MV refinement for CU and sub-CU steps is set equal to 8 luminance samples.
  • in the bidirectional prediction operation, to predict one block region, two prediction blocks, formed using the MV of list0 and the MV of list1 respectively, are combined to form a single prediction signal.
  • the two motion vectors of the bidirectional prediction are further refined through a bilateral template matching process. Bilateral template matching is applied in the decoder to perform a distortion-based search between a bilateral template and the reconstructed samples in the reference pictures, in order to obtain refined MVs without transmitting additional motion information.
  • a bilateral template is generated from the initial MV0 of list0 and the MV1 of list1 as a weighted combination (ie, average) of two prediction blocks.
  • the template matching operation consists of calculating a cost metric between the template generated and the sample area (around the initial prediction block) in the reference picture. For each of the two reference pictures, the MV that produces the smallest template cost is considered as an updated MV of the list to replace the original MV.
  • each list searches for nine MV candidates.
  • the nine MV candidates include the original MV and eight surrounding MVs that are offset from the original MV by one luma sample in the horizontal direction, the vertical direction, or both.
  • two new MVs, MV0 ′ and MV1 ′ are used to generate the final bidirectional prediction results.
  • the sum of absolute differences (SAD) is used as a cost metric.
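A rough sketch of the nine-candidate refinement around one initial MV follows. The block-fetch function is left abstract; the one-sample offsets and the SAD metric come from the description above, and everything else is an assumption for illustration.

```python
def refine_mv_against_template(mv_init, bilateral_template, fetch_block):
    """Keep the MV, among the original and its eight one-sample neighbors, whose
    prediction block has the smallest SAD against the bilateral template.

    fetch_block(mv) -- returns the prediction block (2-D list of samples)
                       at motion vector mv from the reference picture.
    """
    def sad(block_a, block_b):
        return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
                              for a, b in zip(row_a, row_b))

    best_mv, best_cost = mv_init, None
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            mv = (mv_init[0] + dx, mv_init[1] + dy)   # one-luma-sample offsets
            cost = sad(fetch_block(mv), bilateral_template)
            if best_cost is None or cost < best_cost:
                best_mv, best_cost = mv, cost
    return best_mv
```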
  • for TMVP, the motion vectors, reference indexes, and coding modes of the co-located reference picture need to be stored.
  • HEVC uses motion data storage reduction (MDSR) to reduce the size of the motion data buffer and the associated memory access bandwidth by sub-sampling the motion data in the reference picture.
  • whereas H.264/AVC stores this information on a 4×4 block basis, HEVC uses 16×16 blocks: in the case of sub-sampling a 4×4 grid, the information of the upper-left 4×4 block is stored. Due to this sub-sampling, MDSR affects the quality of the temporal prediction.
  • the motion vector accuracy is one quarter of a pixel (for 4:2:0 video, one quarter of a luma sample and one eighth of a chroma sample).
  • the accuracy of internal motion vector storage and merge candidates is improved to 1/16 pixels.
  • for CUs coded in the normal AMVP mode, integer-pixel or quarter-pixel motion is used.
  • in locally adaptive motion vector resolution (LAMVR), the motion vector difference (MVD) can be coded in units of quarter luma samples, integer luma samples, or four luma samples.
  • the MVD resolution is controlled at the coding unit (CU) level, and the MVD resolution flag is conditionally signaled for each CU with at least one non-zero MVD component.
  • a first flag is signaled to indicate whether quarter luma sample MV precision is used in the CU.
  • when the first flag (equal to 1) indicates that quarter luma sample MV precision is not used, another flag is signaled to indicate whether integer luma sample MV precision or four luma sample MV precision is used.
  • when the first MVD resolution flag of a CU is zero, or is not coded for the CU (meaning that all MVDs in the CU are zero), quarter luma sample MV resolution is used for the CU.
  • when a CU uses integer luma sample MV precision or four luma sample MV precision, the MVPs in the AMVP candidate list of the CU are rounded to the corresponding precision.
  • at the encoder, a CU-level RD check is used to determine which MVD resolution is used for the CU; that is, the CU-level RD check is performed three times, once for each MVD resolution.
  • when a motion vector points to a fractional sample position, motion-compensated interpolation is required.
  • a 4-tap separable DCT-based interpolation filter is used, as shown in Table 1.3.
  • the output of the interpolation filter is kept to 14-bit accuracy before averaging the two prediction signals, regardless of the source bit depth.
  • the actual averaging process is implicit in the bit depth reduction process, as shown below:
  • predSamples[x, y] = (predSamplesL0[x, y] + predSamplesL1[x, y] + offset) >> shift    (1.9)
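The averaging with bit-depth reduction of equation (1.9) can be made concrete as follows; the 14-bit intermediate and the 8-bit output chosen for the example are assumptions used only to give the shift and offset specific values.

```python
def bipred_average(pred_l0, pred_l1, intermediate_bits=14, out_bits=8):
    """Combine two intermediate prediction signals into one output signal.

    pred_l0, pred_l1 -- 2-D lists of intermediate-precision samples
    """
    shift = intermediate_bits - out_bits + 1     # e.g. 14 - 8 + 1 = 7
    offset = 1 << (shift - 1)                    # rounding offset
    return [[(a + b + offset) >> shift for a, b in zip(row0, row1)]
            for row0, row1 in zip(pred_l0, pred_l1)]

# Example with 1x2 "blocks" of 14-bit intermediate samples.
print(bipred_average([[8192, 4096]], [[8192, 4096]]))   # -> [[128, 64]]
```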
  • for both bilateral matching and template matching, bilinear interpolation is used instead of the conventional 8-tap HEVC interpolation.
  • the matching cost C of the bilateral matching at the sub-CU level search is calculated as follows:
  • w is a weighting factor empirically set to 4
  • MV and MV^s indicate the current MV and the starting MV, respectively.
  • SAD is still used as the matching cost of template matching at the sub-CU level search.
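The cost expression referenced above is not reproduced in this extract; from the definitions of w, MV, and MV^s just given, it can be reconstructed in the following form:

```latex
C = \mathrm{SAD} + w \cdot \left( \left| \mathrm{MV}_x - \mathrm{MV}^{s}_{x} \right| + \left| \mathrm{MV}_y - \mathrm{MV}^{s}_{y} \right| \right)
```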
  • the MV is derived by using only luma samples. The resulting motion is then used for both luma and chroma in the MC inter prediction. After the MV is determined, the final MC is performed using an 8-tap interpolation filter for luma and a 4-tap interpolation filter for chroma.
  • overlapped block motion compensation (OBMC) is performed for all motion compensation (MC) block boundaries, except for the right and bottom boundaries of the current CU. In addition, it is applied to both the luma and chroma components.
  • the MC block corresponds to a coding block.
  • OBMC is performed at the sub-block level for all MC block boundaries, where the sub-block size is set equal to 4 ⁇ 4.
  • OBMC When OBMC is applied to the current sub-block, in addition to the current motion vector, if the motion vectors of four connected neighboring sub-blocks are available and different from the current motion vector, the motion of the four connected neighboring sub-blocks is also used Vector to get the predicted block of the current sub-block. These multiple prediction blocks based on multiple motion vectors are combined to generate a final prediction signal for the current sub-block.
  • a prediction block based on motion vectors of neighboring sub-blocks is labeled as PN, where N represents an index of adjacent upper, lower, left, and right sub-blocks, and a prediction block based on motion vectors of the current sub-block is labeled as PC.
  • N represents an index of adjacent upper, lower, left, and right sub-blocks
  • PC prediction block based on motion vectors of the current sub-block
  • A weighting factor of {1/4, 1/8, 1/16, 1/32} is used for the PN, and a weighting factor of {3/4, 7/8, 15/16, 31/32} is used for the PC.
  • The exception is small MC blocks (i.e., when the height or width of the coding block is equal to 4, or the CU is coded using a sub-CU mode); for such blocks, only two rows/columns of PN are added to the PC.
  • A weighting factor of {1/4, 1/8} is used for the PN
  • A weighting factor of {3/4, 7/8} is used for the PC.
  • samples in the same row (column) of the PN are added to the PC with the same weighting factor.
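The per-row weighting of PN against PC described above can be sketched as follows; the code assumes 4×4 sub-blocks and blends along the top boundary only (the left boundary is handled symmetrically per column), which is an illustrative simplification.

```python
# Sketch of OBMC-style blending of PN (prediction using a neighbour's MV) into
# PC (prediction using the current MV) for the rows closest to the top boundary.

W_PN = [1/4, 1/8, 1/16, 1/32]    # weights for PN, per row from the boundary
W_PC = [3/4, 7/8, 15/16, 31/32]  # complementary weights for PC

def obmc_blend_top(pc, pn, rows=4):
    """pc, pn: 2-D lists of the same size; blends the first `rows` rows of pc."""
    for r in range(min(rows, len(pc))):
        for c in range(len(pc[r])):
            pc[r][c] = int(W_PC[r] * pc[r][c] + W_PN[r] * pn[r][c] + 0.5)
    return pc

pc = [[100] * 4 for _ in range(4)]
pn = [[80] * 4 for _ in range(4)]
print(obmc_blend_top(pc, pn))    # rows close to the boundary move most toward PN
```

For the small MC blocks mentioned above, only two rows/columns would be blended, using the {1/4, 1/8} and {3/4, 7/8} weights.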
  • a CU level flag is signaled to indicate whether OBMC is applied to the current CU.
  • OBMC is applied by default.
  • the prediction signals formed by the OBMC using the motion information of the top neighboring block and the left neighboring block are used to compensate the top and left edges of the original signal of the current CU, and then normal motion estimation processing is applied.
  • LIC Local Illumination Compensation
  • The parameters a and b are derived by the least-squares method using the neighboring samples of the current CU and their corresponding reference samples.
  • Subsampled (2:1 subsampling) neighboring samples of the CU and the corresponding samples in the reference picture (identified by the motion information of the current CU or sub-CU) are used.
  • the IC parameters are obtained and applied to each prediction direction separately.
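To make the least-squares derivation of a and b concrete, here is a small sketch; the closed-form linear regression and the 2:1 subsampling follow the description above, while the variable names and sample values are purely illustrative.

```python
# Sketch of LIC parameter derivation: P' = a * P + b, where (a, b) are fitted by
# least squares between neighbours of the current CU and of its reference block.

def derive_lic_params(cur_neighbours, ref_neighbours):
    """Both arguments are equally long lists of reconstructed sample values."""
    n = len(cur_neighbours)
    sum_x = sum(ref_neighbours)
    sum_y = sum(cur_neighbours)
    sum_xx = sum(x * x for x in ref_neighbours)
    sum_xy = sum(x * y for x, y in zip(ref_neighbours, cur_neighbours))
    denom = n * sum_xx - sum_x * sum_x
    if denom == 0:
        return 1.0, 0.0                            # fall back to the identity model
    a = (n * sum_xy - sum_x * sum_y) / denom
    b = (sum_y - a * sum_x) / n
    return a, b

# 2:1 subsampling of the neighbouring samples before fitting:
cur = [52, 60, 68, 76][::2] + [54, 62, 70, 78][::2]
ref = [50, 58, 66, 74][::2] + [52, 60, 68, 76][::2]
print(derive_lic_params(cur, ref))                 # a = 1.0, b = 2.0 for this data
```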
  • When a CU is coded in merge mode, the LIC flag is copied from neighboring blocks in a manner similar to the motion information copying in merge mode; otherwise, the LIC flag is signaled for the CU to indicate whether LIC applies.
  • When LIC is enabled for a picture, an additional CU-level RD check is needed to determine whether LIC is applied to a CU.
  • MR-SAD mean-removed sum of absolute difference
  • MR-SATD mean-removed sum of absolute Hadamard-transformed differences
  • Bi-directional optical flow (BIO) is a sample-wise motion refinement performed on top of block-wise motion compensation for bi-directional prediction. The sample-level motion refinement does not use signaling.
  • the motion vector field (vx, vy) is given by equation (1.13)
  • ⁇ 0 and ⁇ 1 represent distances to a reference frame.
  • The motion vector field (vx, vy) is determined by minimizing the difference Δ between the values at points A and B (the intersections of the motion trajectory with the reference frame planes).
  • the model uses only the first linear term of the local Taylor expansion of ⁇ :
  • d is the bit depth of the video sample.
  • When BIO is used, the motion field may be refined for each sample, but to reduce computational complexity a block-based BIO design can be used, in which motion refinement is calculated on a 4×4 block basis.
  • In block-based BIO, the values sn in equation (1.19) of all samples in a 4×4 block are aggregated, and the aggregated sn values are then used to derive the BIO motion vector offset of the 4×4 block.
  • The BIO MV refinement may be unreliable due to noise or irregular motion. Therefore, in BIO, the magnitude of the MV refinement is clipped to the threshold thBIO.
  • The threshold is determined based on whether the reference pictures of the current picture are all from one direction. If all reference pictures of the current picture are from one direction, the threshold is set to 12 × 2^(14-d); otherwise, it is set to 12 × 2^(13-d).
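A small sketch of the clipping just described; d is the sample bit depth, and the caller decides which threshold variant applies according to the reference picture directions.

```python
# Sketch of clipping the BIO motion refinement (vx, vy) to +/- thBIO.

def bio_threshold(d, all_refs_from_one_direction):
    """d: bit depth of the video samples."""
    return 12 * (1 << (14 - d)) if all_refs_from_one_direction else 12 * (1 << (13 - d))

def clip_bio_refinement(vx, vy, d, all_refs_from_one_direction):
    th = bio_threshold(d, all_refs_from_one_direction)
    clip = lambda v: max(-th, min(th, v))
    return clip(vx), clip(vy)

# With 10-bit samples and all references on one side, thBIO = 12 * 2**4 = 192.
print(clip_bio_refinement(900, -50, 10, True))     # (192, -50)
```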
  • the gradient of the BIO is calculated simultaneously with the motion compensation interpolation.
  • The input to this 2D separable FIR filter is the same reference frame samples as used for the motion compensation process, together with the fractional position (fracX, fracY) given by the fractional part of the block motion vector.
  • fracX, fracY fractional position
  • For the horizontal gradient, the signal is first interpolated vertically using BIOfilterS corresponding to the fractional position fracY with de-scaling shift d-8, and then the gradient filter BIOfilterG is applied in the horizontal direction corresponding to the fractional position fracX with de-scaling shift 18-d.
  • For the vertical gradient, the gradient filter BIOfilterG is first applied vertically corresponding to the fractional position fracY with de-scaling shift d-8, and then signal displacement is performed using BIOfilterS in the horizontal direction corresponding to the fractional position fracX with de-scaling shift 18-d.
  • the lengths of the interpolation filters for gradient calculation BIOfilterG and signal displacement BIOfilterF are short (6 taps).
  • Table 1.4 shows the filters used for gradient calculation of different fractional positions of block motion vectors in BIO.
  • Table 1.5 shows the interpolation filters used for prediction signal generation in BIO.
  • Table 1.4 Filters for gradient calculation in BIO (BIOfilterG), by fractional pixel position: 0: {8, -39, -3, 46, -17, 5}; 1/16: {8, -32, -13, 50, -18, 5}; 1/8: {7, -27, -20, 54, -19, 5}; 3/16: {6, -21, -29, 57, -18, 5}; 1/4: {4, -17, -36, 60, -15, 4}; 5/16: {3, -9, -44, 61, -15, 4}; 3/8: {1, -4, -48, 61, -13, 3}; 7/16: {0, 1, -54, 60, -9, 2}; 1/2: {-1, 4, -57, 57, -4, 1}
  • Table 1.5 Interpolation filters for prediction signal generation in BIO, by fractional pixel position: 0: {0, 0, 64, 0, 0, 0}; 1/16: {1, -3, 64, 4, -2, 0}; 1/8: {1, -6, 62, 9, -3, 1}; 3/16: {2, -8, 60, 14, -5, 1}; 1/4: {2, -9, 57, 19, -7, 2}; 5/16: {3, -10, 53, 24, -8, 2}; 3/8: {3, -11, 50, 29, -9, 2}; 7/16: {3, -11, 44, 35, -10, 3}; 1/2: {3, -10, 35, 44, -11, 3}
  • BIO is applied to all bi-directional prediction blocks when two predictions come from different reference pictures.
  • BIO is disabled.
  • OBMC is applied for the block after the normal MC process.
  • BIO is not applied during the OBMC process. This means that BIO is applied in the MC process of a block only when its own MV is used, and is not applied in the MC process when the MV of a neighboring block is used during the OBMC process.
  • HEVC provides weighted prediction (WP) tools.
  • WP weighted prediction
  • the principle of WP is to replace the inter prediction signal P with a linear weighted prediction signal P '(with weight w and offset o):
  • the encoder selects the appropriate weights and offsets and transmits them within the code stream.
  • the L0 and L1 suffixes define List0 and List1 of the reference picture list, respectively.
  • the bit depth is maintained to 14-bit accuracy before the prediction signal is averaged.
  • the following formula is applicable to explicit signaling of weighted prediction parameters related to the luminance channel.
  • The corresponding formula is applicable to the case of chroma channels and unidirectional prediction.
  • log2WD = luma_log2_weight_denom + 14 - bitDepth
  • w0 = LumaWeightL0[refIdxL0]
  • w1 = LumaWeightL1[refIdxL1]
  • o0 = luma_offset_l0[refIdxL0] * highPrecisionScaleFactor
  • o1 = luma_offset_l1[refIdxL1] * highPrecisionScaleFactor
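To make the roles of log2WD, w0/w1 and o0/o1 concrete, the following Python sketch applies an HEVC-style explicit weighted bi-prediction to the luma channel. It is a simplified rendering for illustration: the intermediate predictions are assumed to be at 14-bit accuracy, and the rounding and clipping follow the usual pattern rather than reproducing the exact specification text.

```python
# Sketch of explicit weighted bi-prediction for luma, in the spirit of HEVC WP.

def weighted_bi_pred(predL0, predL1, w0, w1, o0, o1, log2WD, bit_depth):
    """predL0/predL1: intermediate predictions (14-bit accuracy, see above)."""
    max_val = (1 << bit_depth) - 1
    rounding = (o0 + o1 + 1) << log2WD
    out = []
    for a, b in zip(predL0, predL1):
        v = (a * w0 + b * w1 + rounding) >> (log2WD + 1)
        out.append(min(max(v, 0), max_val))
    return out

# Equal weights and zero offsets reduce to plain averaging (log2WD = 6 for 8-bit):
print(weighted_bi_pred([8000, 9000], [8200, 9200], 1, 1, 0, 0, 6, 8))   # [127, 142]
```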
  • Boundary prediction filters are an intra coding method that further adjusts the leftmost column or topmost row of predicted pixels.
  • HEVC high efficiency video coding
  • the leftmost column or the top row of the prediction sample is further adjusted, respectively.
  • This method was further extended to several diagonal intra modes, and boundary samples of up to four columns or rows are further adjusted using a two-tap filter (for intra modes 2 and 34) or a three-tap filter (for intra modes 3 to 6 and 30 to 33).
  • reference frames are divided into two groups, forward and backward, which are placed in two reference picture lists, which are generally named list0 and list1.
  • The inter prediction direction indicates whether the current block uses forward prediction, backward prediction, or bidirectional prediction, and the corresponding reference frame list (list0, list1, or both list0 and list1) is selected and used according to the prediction direction.
  • the reference frame is indicated by the reference frame index.
  • the position of the reference block in the reference frame of the prediction block of the current block relative to the current block in the current frame is indicated by the motion vector.
  • the prediction block obtained from the reference frames in list0, list1, or list0 and list1 is used to generate the final prediction block.
  • When the prediction direction is unidirectional, the prediction block obtained from the reference frame in list0 or list1 is used directly.
  • When the prediction direction is bidirectional, the prediction blocks obtained from the reference frames in list0 and list1 are weighted and averaged to synthesize the final prediction block.
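A minimal sketch of forming the final prediction block according to the prediction direction; simple rounding-averaged bi-prediction is used here for the bidirectional case, ignoring the weighted-prediction variants discussed above.

```python
# Sketch: final prediction block from list0/list1 predictions by direction.

def final_prediction(direction, pred_list0=None, pred_list1=None):
    if direction == "forward":      # uses list0 only
        return pred_list0
    if direction == "backward":     # uses list1 only
        return pred_list1
    if direction == "bi":           # average of the two prediction blocks
        return [(a + b + 1) >> 1 for a, b in zip(pred_list0, pred_list1)]
    raise ValueError(direction)

print(final_prediction("bi", [100, 102, 104], [96, 98, 100]))   # [98, 100, 102]
```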
  • This application proposes a method for performing spatial filtering on an inter-coded prediction block, which is applied to inter-prediction, and the processing at the encoding end and the decoding end is the same.
  • Intra-prediction coding A coding method that uses the neighboring pixel values to predict the current pixel value and then encodes the prediction error.
  • Coded picture A coded representation of a picture that contains all the coding tree units of the picture.
  • Motion vector A two-dimensional vector used for inter prediction, which provides an offset from coordinates in a decoded picture to coordinates in a reference picture.
  • Prediction block A rectangular M×N sample block on which the same prediction is applied.
  • Prediction process Use the predicted value to provide an estimate of the data element (eg, sample value or motion vector) that is currently being decoded.
  • data element eg, sample value or motion vector
  • Predicted value A specified value or a combination of previously decoded data elements (eg, sample values or motion vectors) used in subsequent data element decoding processes.
  • Reference frame A picture or frame used as a short-term reference picture or a long-term reference picture.
  • the reference frame contains samples that can be used in the decoding order for inter prediction in the decoding process of subsequent pictures.
  • Inter prediction A predicted image of the current block is generated from pixels in a reference frame of the current block, where the position of the pixels used for prediction in the reference frame is indicated by a motion vector.
  • Bidirectional prediction (B) slice A slice that can be decoded using intra prediction or inter prediction to predict the sample value of each block with up to two motion vectors and reference indexes.
  • CTU coding tree unit.
  • An image is composed of multiple CTUs.
  • A CTU usually corresponds to a square image area and contains the luma pixels and chroma pixels in this image area (or it may contain only luma pixels, or only chroma pixels); the CTU also contains syntax elements, which indicate how to divide the CTU into at least one coding unit (coding unit, CU) and how to decode each coding unit to obtain a reconstructed image.
  • A coding unit corresponds to an A×B rectangular area in the image and includes A×B luma pixels and/or the corresponding chroma pixels, where A is the width of the rectangle, B is the height of the rectangle, and A and B can be the same or different.
  • the values of A and B are usually integer powers of 2, such as 128, 64, 32, 16, 8, and 4.
  • a coding unit includes a predicted image and a residual image, and the predicted image and the residual image are added to obtain a reconstructed image of the coding unit.
  • the predicted image is generated by intra prediction or inter prediction, and the residual image is generated by inverse quantization and inverse transform processing of the transform coefficients.
  • VTM New codec reference software developed by the JVET organization.
  • Merge coding An inter coding method in which motion vectors are not directly transmitted in the code stream.
  • The current block can select the corresponding merge candidate from the merge candidate list according to the merge index, and either use the motion information of the merge candidate directly as the motion information of the current block, or scale the motion information of the merge candidate and use the scaled motion information as the motion information of the current block.
  • FIG. 13 is a schematic flowchart of an embodiment of the present application, and relates to a decoding method for predicting motion information.
  • the present application proposes a method for performing spatial filtering on an inter-coded prediction block. After generating a prediction pixel, the neighboring pixels are used to filter the prediction pixel.
  • At least one image block using inter prediction is decoded to obtain a reconstructed image of the image block.
  • the method includes:
  • A merge motion information candidate list is generated.
  • The method includes adding spatial candidates and temporal candidates of the current block to the merge motion information candidate list of the current block.
  • the method is the same as the method in HEVC.
  • The spatial merge candidates include A0, A1, B0, B1, and B2, and the temporal merge candidates include T0 and T1.
  • The temporal merge candidates may also include candidates provided by the adaptive temporal motion vector prediction (ATMVP) technique.
  • ATMVP adaptive temporal motion vector prediction
  • The present invention does not limit the process of generating the merge motion information candidate list. The process may be performed using a method from HEVC or VTM, or another method of generating a merge motion information candidate list. If the current block is in inter MVP mode, a motion vector prediction candidate list is generated.
  • The initial motion information is obtained. If the current block is in merge/skip mode, the motion information of the current block is determined according to the merge index carried in the code stream. If the current block is in inter MVP mode, the motion information of the current block is determined according to the inter prediction direction, reference frame index, motion vector predictor index, and motion vector residual value transmitted in the code stream.
  • This step may be performed using a method from HEVC or VTM; other methods of generating a motion vector prediction candidate list may also be used without limitation.
  • The code stream is parsed to obtain the update discrimination flag information of the image block to be processed, and it is determined that the update discrimination flag information indicates that the prediction block of the image block to be processed is to be updated.
  • The preset update discrimination flag information of the image block to be processed is obtained, and it is determined that the update discrimination flag information indicates that the prediction block of the image block to be processed is to be updated.
  • The flag for the inter prediction pixel filtering method is not transmitted in the code stream.
  • an inter prediction spatial filtering flag is determined (that is, an update discrimination flag is determined), and if true, filtering is performed on the prediction block.
  • the code stream is parsed to obtain a prediction mode of the image block to be processed; and the prediction mode is determined to be a merge mode or a skip mode.
  • The inter prediction pixel filtering method is performed only on inter-coded blocks that are coded in merge/skip mode.
  • The reference frame direction, the reference frame index, and the motion vector are used to obtain the prediction block from the reference frame.
  • the reference frame direction is forward prediction, which means that the current coding unit selects a reference image from the forward reference image set to obtain a reference block.
  • the reference frame direction is backward prediction, which means that the current coding unit selects a reference image from the backward reference image set to obtain a reference block.
  • the reference frame direction is bidirectional prediction, which means that one reference image is selected from each of the forward and backward reference image sets to obtain a reference block. When the bidirectional prediction method is used, there are two reference blocks in the current coding unit, and each reference block requires a motion vector and a reference frame index for indication. Then, the pixel value of the pixel in the predicted block of the current block is determined according to the pixel value of the pixel in the reference block.
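For illustration, a toy sketch of fetching one reference block as part of this step, restricted to integer-pixel motion vectors; fractional positions would use the interpolation filters discussed earlier, and picture-boundary handling is omitted.

```python
# Toy motion compensation: copy a width x height block from a reference picture
# at the current block position offset by an integer-pixel motion vector.

def motion_compensate(ref_picture, x0, y0, mv_x, mv_y, width, height):
    """ref_picture: 2-D list of samples; returns the prediction block."""
    return [[ref_picture[y0 + mv_y + j][x0 + mv_x + i] for i in range(width)]
            for j in range(height)]

ref = [[x + 10 * y for x in range(16)] for y in range(16)]
print(motion_compensate(ref, 4, 4, 2, -1, 2, 2))   # [[36, 37], [46, 47]]
```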
  • This step may be performed using a method from HEVC or VTM, or other methods without limitation.
  • S1306 Perform weighted calculation on the reconstructed values of the one or more reference pixels and the predicted values of the target pixels in the image block to be processed to update the predicted values of the target pixels.
  • the reference pixel point and the target pixel point have a preset spatial relationship in space.
  • The predicted pixels are spatially filtered using the reconstructed pixels that spatially neighbor the current CU: the neighboring reconstructed pixels and the predicted pixels in the prediction block are weighted to obtain new predicted pixel values.
  • The one or more reference pixels include a reconstructed pixel that has the same abscissa as the target pixel and a preset ordinate difference from it, or a reconstructed pixel that has the same ordinate as the target pixel and a preset abscissa difference from it.
  • the predicted value of the target pixel is updated according to the following formula:
  • The coordinates of the target pixel are (xP, yP), the coordinates of the top-left pixel in the image block to be processed are (xN, yN), predP(xP, yP) is the predicted value of the target pixel before the update, predQ(xP, yP) is the updated predicted value of the target pixel, recon(xN-M1, yP) and recon(xP, yN-M2) are the reconstructed values of the reference pixels located at coordinate positions (xN-M1, yP) and (xP, yN-M2), w1, w2, w3, w4, w5, and w6 are preset constants, and M1 and M2 are preset positive integers.
  • The coordinates of the current pixel are (xP, yP), the current predicted pixel value is predP(xP, yP), the coordinates of the pixel in the upper left corner of the current CU are (xN, yN), and the pixel value of the reconstructed pixel in the current frame is recon(x, y).
  • predQ(xP, yP) = (w1 * predP(xP, yP) + w2 * recon(xN-M1, yP)) / (w1 + w2)
  • w1 and w2 are preset weighting coefficients
  • M1 is a preset offset value
  • the value is a positive integer
  • predQ(xP, yP) = (w1 * predP(xP, yP) + w2 * recon(xP, yN-M2)) / (w1 + w2)
  • w1 and w2 are preset weighting coefficients
  • M2 is a preset offset value
  • the value is a positive integer
  • predQ(xP, yP) = (w1 * predP(xP, yP) + w2 * recon(xN-M1, yP) + w3 * recon(xP, yN-M2)) / (w1 + w2 + w3)
  • w1, w2, and w3 are preset weighting coefficients
  • M1 and M2 are preset offset values
  • the values are positive integers.
  • M1 and M2 are positive integers, and the values are 1, 2, 3, 4, and so on.
  • The weighting coefficient group (w1, w2) or (w1, w2, w3) can take any combination of values such that w1 + w2 or w1 + w2 + w3 is equal to a power of 2, such as (6,2), (5,3), (4,4) or (6,1,1), (5,2,1), and so on, as illustrated in the sketch below.
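The three weighted-average cases above can be sketched as one helper. The weights (6, 2) and (6, 1, 1) are just two of the power-of-two combinations listed; recon is any function returning reconstructed sample values, and integer division stands in for the normalisation.

```python
# Sketch of updating an inter-predicted value with left/top reconstructed
# neighbours, following the three weighted-average cases above.

def update_pred(predP, recon, xP, yP, xN, yN, M1=1, M2=1,
                use_left=True, use_top=True):
    """predP: predicted value at (xP, yP); recon(x, y): reconstructed sample."""
    if use_left and use_top:
        w1, w2, w3 = 6, 1, 1
        return (w1 * predP + w2 * recon(xN - M1, yP)
                + w3 * recon(xP, yN - M2)) // (w1 + w2 + w3)
    if use_left:
        w1, w2 = 6, 2
        return (w1 * predP + w2 * recon(xN - M1, yP)) // (w1 + w2)
    if use_top:
        w1, w2 = 6, 2
        return (w1 * predP + w2 * recon(xP, yN - M2)) // (w1 + w2)
    return predP

# A flat reconstructed neighbourhood of 80 pulls a predicted value of 120 down:
recon = lambda x, y: 80
print(update_pred(120, recon, xP=3, yP=2, xN=0, yN=0))   # (6*120 + 80 + 80) // 8 = 110
```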
  • the predicted value of the target pixel is updated according to the following formula:
  • The coordinates of the target pixel are (xP, yP), the coordinates of the top-left pixel in the image block to be processed are (xN, yN), predP(xP, yP) is the predicted value of the target pixel before the update, predQ(xP, yP) is the updated predicted value of the target pixel, recon(xN-M1, yP), recon(xN-M2, yP), recon(xP, yN-M3), and recon(xP, yN-M4) are the reconstructed values of the reference pixels located at coordinate positions (xN-M1, yP), (xN-M2, yP), (xP, yN-M3), and (xP, yN-M4), w1 through w11 are preset constants, and M1, M2, M3, and M4 are preset positive integers.
  • The coordinates of the current pixel are (xP, yP), the current predicted pixel value is predP(xP, yP), the coordinates of the pixel in the upper left corner of the current CU are (xN, yN), and the pixel value of the reconstructed pixel in the current frame is recon(x, y).
  • w1, w2, and w3 are preset weighting coefficients
  • M1 and M2 are preset offset values
  • the values are positive integers.
  • predQ(xP, yP) = (w1 * predP(xP, yP) + w2 * recon(xP, yN-M3) + w3 * recon(xP, yN-M4)) / (w1 + w2 + w3)
  • w1, w2, and w3 are preset weighting coefficients
  • M3 and M4 are preset offset values
  • the values are positive integers.
  • predQ(xP, yP) = (w1 * predP(xP, yP) + w2 * recon(xN-M1, yP) + w3 * recon(xN-M2, yP) + w4 * recon(xP, yN-M3) + w5 * recon(xP, yN-M4)) / (w1 + w2 + w3 + w4 + w5)
  • w1, w2, w3, w4, and w5 are preset weighting coefficients
  • M1, M2, M3, and M4 are preset offset values. The values are positive integers.
  • M1, M2, M3, and M4 are positive integers, and the values are 1, 2, 3, 4, and so on.
  • The weighting coefficient group (w1, w2, w3) or (w1, w2, w3, w4, w5) can take any combination of values such that w1 + w2 + w3 or w1 + w2 + w3 + w4 + w5 is equal to a power of 2, such as (6,1,1), (5,2,1) or (12,1,1,1,1), (10,2,2,1,1), (8,2,2,2,2), (8,3,3,1,1), and so on.
  • The one or more reference pixels include one or more of the following pixels: a reconstructed pixel that has the same abscissa as the target pixel and adjoins the top edge of the image block to be processed; or a reconstructed pixel that has the same ordinate as the target pixel and adjoins the left edge of the image block to be processed; or the reconstructed pixel at the top-right corner of the image block to be processed; or the reconstructed pixel at the bottom-left corner of the image block to be processed; or the reconstructed pixel at the top-left corner of the image block to be processed.
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = (w1 * predP(xP, yP) + w2 * predP1(xP, yP) + ((w1 + w2) / 2)) / (w1 + w2)
  • p(xP, -1), p(-1, nTbH), p(-1, yP), and p(nTbW, -1) are the reconstructed values of the reference pixels located at coordinate positions (xP, -1), (-1, nTbH), (-1, yP), and (nTbW, -1), w1 and w2 are preset constants, and nTbW and nTbH are the width and height of the image block to be processed.
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = (w1 * predP(xP, yP) + w2 * predV(xP, yP) + w3 * predH(xP, yP) + ((w1 + w2 + w3) / 2)) / (w1 + w2 + w3)
  • predV(xP, yP) = ((nTbH-1-yP) * p(xP, -1) + (yP+1) * p(-1, nTbH) + nTbH/2) >> Log2(nTbH)
  • predH(xP, yP) = ((nTbW-1-xP) * p(-1, yP) + (xP+1) * p(nTbW, -1) + nTbW/2) >> Log2(nTbW)
  • the coordinates of the target pixel point are (xP, yP)
  • the coordinates of the upper left pixel point in the image block to be processed are (0, 0)
  • predP(xP, yP) is the predicted value of the target pixel before the update, and predQ(xP, yP) is the updated predicted value of the target pixel
  • p(xP, -1), p(-1, nTbH), p(-1, yP), and p(nTbW, -1) are the reconstructed values of the reference pixels located at coordinate positions (xP, -1), (-1, nTbH), (-1, yP), and (nTbW, -1), w1, w2, and w3 are preset constants, and nTbW and nTbH are the width and height of the image block to be processed.
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = (((w1 * predP(xP, yP)) << (Log2(nTbW) + Log2(nTbH) + 1)) + w2 * predV(xP, yP) + w3 * predH(xP, yP) + (((w1 + w2 + w3) / 2) << (Log2(nTbW) + Log2(nTbH) + 1))) / ((w1 + w2 + w3) << (Log2(nTbW) + Log2(nTbH) + 1))
  • predV(xP, yP) = ((nTbH-1-yP) * p(xP, -1) + (yP+1) * p(-1, nTbH)) << Log2(nTbW)
  • predH(xP, yP) = ((nTbW-1-xP) * p(-1, yP) + (xP+1) * p(nTbW, -1)) << Log2(nTbH). The coordinates of the target pixel are (xP, yP), and the coordinates of the top-left pixel in the image block to be processed are (0, 0).
  • predP (xP, yP) is the predicted value of the target pixel point before update
  • predQ (xP, yP) is the updated predicted value of the target pixel
  • w1 and w2 are preset constants, and nTbW and nTbH are the width and height of the image block to be processed.
  • the sum of w1 and w2 is an n-th power of 2, where n is a non-negative integer.
  • a planar mode (PLANAR) in intra prediction can be used to obtain a second predicted pixel value from the adjacent pixels in the spatial domain.
  • the coordinates of the current pixel are (xP, yP)
  • the current predicted pixel value is predP (xP, yP)
  • the pixel value of the reconstructed pixel in the current frame is p (x, y)
  • the second predicted pixel value is predP1(xP, yP)
  • the new predicted pixel value predQ (xP, yP) is equal to
  • predQ(xP, yP) = (w1 * predP(xP, yP) + w2 * predP1(xP, yP)) / (w1 + w2)
  • w1 and w2 are preset weighting coefficients, and the values are positive integers.
  • The weighting coefficient group (w1, w2) may take any combination of values such that w1 + w2 is equal to a power of 2, such as (6,2), (5,3), (4,4), and so on.
  • predP1 (xP, yP) is obtained using the method in VTM as follows:
  • predV(x, y) = ((nTbH-1-y) * p(x, -1) + (y+1) * p(-1, nTbH)) << Log2(nTbW)
  • predH(x, y) = ((nTbW-1-x) * p(-1, y) + (x+1) * p(nTbW, -1)) << Log2(nTbH)
  • predP1(x, y) = (predV(x, y) + predH(x, y) + nTbW * nTbH) >> (Log2(nTbW) + Log2(nTbH) + 1)
  • nTbW and nTbH are the width and height of the current CU
  • p (x, y) represents neighboring pixels
  • x and y are coordinates.
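A sketch of deriving the second prediction predP1 from the neighbouring reconstructed samples with the predV/predH combination given above, and then blending it with the inter prediction predP; (w1, w2) = (4, 4) is one of the example weight pairs, p(x, y) is a stand-in for the reconstructed-neighbour accessor, and nTbW/nTbH are assumed to be powers of 2.

```python
# Sketch: PLANAR-style second prediction from neighbouring reconstructed samples,
# blended with the inter prediction predP of the current pixel.

def log2(v):
    return v.bit_length() - 1            # v is assumed to be a power of 2

def planar_second_pred(p, x, y, nTbW, nTbH):
    predV = ((nTbH - 1 - y) * p(x, -1) + (y + 1) * p(-1, nTbH)) << log2(nTbW)
    predH = ((nTbW - 1 - x) * p(-1, y) + (x + 1) * p(nTbW, -1)) << log2(nTbH)
    return (predV + predH + nTbW * nTbH) >> (log2(nTbW) + log2(nTbH) + 1)

def blend(predP, predP1, w1=4, w2=4):
    return (w1 * predP + w2 * predP1 + (w1 + w2) // 2) // (w1 + w2)

# An 8x8 block with a flat neighbourhood of 100 and an inter prediction of 120:
p = lambda x, y: 100
predP1 = planar_second_pred(p, 3, 2, 8, 8)
print(predP1, blend(120, predP1))        # 100 110
```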
  • JVET-K1001 VVC specification draft
  • The planar mode (PLANAR) algorithm used to generate the second predicted pixel value predP1(xP, yP) is not limited to the algorithm in VTM; the PLANAR algorithm in HEVC or H.264 can also be used.
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = clip1Cmp((refL(xP, yP) * wL(xP) + refT(xP, yP) * wT(yP) - p(-1, -1) * wTL(xP, yP) + (64 - wL(xP) - wT(yP) + wTL(xP, yP)) * predP(xP, yP) + 32) >> 6)
  • Position-dependent intra prediction combination processing from intra prediction can be used to process inter-predicted blocks. Assuming the coordinates of the current pixel are (xP, yP), the current predicted pixel value is predP(xP, yP), and the pixel value of the reconstructed pixel in the current frame is p(x, y), then the new predicted pixel value is predQ(xP, yP).
  • predQ (xP, yP) is obtained using the DC mode of the intra prediction joint processing technology in VTM, as follows:
  • predQ(x, y) = clip1Cmp((refL(x, y) * wL(x) + refT(x, y) * wT(y) - p(-1, -1) * wTL(x, y) + (64 - wL(x) - wT(y) + wTL(x, y)) * predP(x, y) + 32) >> 6), where refL(x, y) = p(-1, y), refT(x, y) = p(x, -1), wT(y) = 32 >> ((y << 1) >> nScale), wL(x) = 32 >> ((x << 1) >> nScale), and wTL(x, y) = ((wL(x) >> 4) + (wT(y) >> 4))
  • nScale = ((Log2(nTbW) + Log2(nTbH) - 2) >> 2)
  • nTbW and nTbH are the width and height of the current CU
  • p (x, y) represents neighboring pixels
  • x and y are coordinates.
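A sketch of the position-dependent combination above in its DC-mode form (including the wTL term); p(x, y) stands for the reconstructed-neighbour accessor, clipping is to the 8-bit range in the example, and block sizes are assumed to be powers of 2.

```python
# Sketch of position-dependent combination of an inter prediction with the
# top/left reconstructed neighbours (DC-mode form, including the wTL term).

def pdpc_like_update(predP, p, x, y, nTbW, nTbH, bit_depth=8):
    log2w, log2h = nTbW.bit_length() - 1, nTbH.bit_length() - 1
    nScale = (log2w + log2h - 2) >> 2
    refL, refT = p(-1, y), p(x, -1)
    wT = 32 >> ((y << 1) >> nScale)
    wL = 32 >> ((x << 1) >> nScale)
    wTL = (wL >> 4) + (wT >> 4)
    v = (refL * wL + refT * wT - p(-1, -1) * wTL
         + (64 - wL - wT + wTL) * predP + 32) >> 6
    return max(0, min((1 << bit_depth) - 1, v))      # clip1Cmp

# Near the top-left corner the prediction is pulled strongly toward the neighbours:
p = lambda x, y: 100
print(pdpc_like_update(120, p, 0, 0, 8, 8))          # 101
```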
  • JVET-K1001 VVC specification draft
  • The intra prediction combination processing used is not limited to the algorithm in VTM; the algorithm in JEM can also be used.
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = clip1Cmp((refL(xP, yP) * wL(xP) + refT(xP, yP) * wT(yP) + (64 - wL(xP) - wT(yP)) * predP(xP, yP) + 32) >> 6)
  • Position-dependent intra prediction combination processing from intra prediction can be used to process inter-predicted blocks. Assuming the coordinates of the current pixel are (xP, yP), the current predicted pixel value is predP(xP, yP), and the pixel value of the reconstructed pixel in the current frame is p(x, y), then the new predicted pixel value is predQ(xP, yP).
  • predQ (xP, yP) is obtained using the PLANAR mode of the intra prediction joint processing technology in VTM, as follows:
  • predQ(x, y) = clip1Cmp((refL(x, y) * wL(x) + refT(x, y) * wT(y) + (64 - wL(x) - wT(y)) * predP(x, y) + 32) >> 6), where refL(x, y) = p(-1, y), refT(x, y) = p(x, -1), wT(y) = 32 >> ((y << 1) >> nScale), and wL(x) = 32 >> ((x << 1) >> nScale)
  • nScale = ((Log2(nTbW) + Log2(nTbH) - 2) >> 2)
  • nTbW and nTbH are the width and height of the current CU
  • p (x, y) represents neighboring pixels
  • x and y are coordinates.
  • JVET-K1001 VVC specification draft
  • The intra prediction combination processing used is not limited to the algorithm in VTM; the algorithm in JEM can also be used.
  • When the reference pixel is located above the image block to be processed, weighted calculation is performed on the reconstructed value of the reference pixel and the reconstructed values of its left and right neighboring pixels.
  • When the reference pixel is located to the left of the image block to be processed, weighted calculation is performed on the reconstructed value of the reference pixel and the reconstructed values of its upper and lower neighboring pixels; the result of the weighted calculation is used to update the reconstructed value of the reference pixel.
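A sketch of the reference-sample smoothing just described. The patent states only that the reference pixel is weighted with its two neighbours; the symmetric (1, 2, 1)/4 kernel used here is an assumed example.

```python
# Sketch: smooth a reference pixel with its two neighbours before it is used
# in the weighted update of the prediction.

def smooth_reference_row(recon_row):
    """recon_row: reconstructed samples of the row above (or column left of) the block."""
    out = list(recon_row)
    for i in range(1, len(recon_row) - 1):
        out[i] = (recon_row[i - 1] + 2 * recon_row[i] + recon_row[i + 1] + 2) >> 2
    return out

print(smooth_reference_row([90, 120, 90, 120, 90]))   # [90, 105, 105, 105, 90]
```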
  • the boundary filtering technology in intra prediction can be used to filter inter-predicted pixels, and the boundary filtering technology can be performed with reference to the method of HEVC.
  • Embodiments of the present application may further include, before or after S1306, continuing inter prediction using inter prediction techniques other than this method according to the motion information and the code stream information (S1307).
  • the present invention does not limit the inter-frame coding used by the encoder and decoder during inter-frame prediction except for this method.
  • The techniques in HEVC or VTM can be used, including but not limited to the bi-directional optical flow method, decoder-side motion vector refinement, local illumination compensation (LIC), generalized bi-prediction (GBI), overlapped block motion compensation (OBMC), and decoder-side motion vector derivation (DMVD).
  • LIC local illumination compensation
  • GBI generalized bi-prediction
  • OBMC overlapped block motion compensation
  • DMVD decoder-side motion vector derivation
  • The method may be performed as in HEVC or VTM; other methods of generating a motion vector prediction candidate list may also be used without limitation.
  • Before the motion compensation is performed on the image block to be processed based on the motion information, the method further includes: initially updating the motion information using a first preset algorithm; correspondingly, performing motion compensation on the image block to be processed based on the motion information includes: performing motion compensation on the image block to be processed based on the initially updated motion information.
  • The method further includes: pre-updating the prediction block using a second preset algorithm; correspondingly, performing the weighted calculation on the reconstructed values of the one or more reference pixels and the predicted value of the target pixel in the image block to be processed includes: performing weighted calculation on the reconstructed values of the one or more reference pixels and the pre-updated predicted value of the target pixel in the image block to be processed.
  • The reconstructed values of one or more reference pixels and the predicted value of the target pixel in the image block to be processed are weighted to update the predicted value of the target pixel.
  • the method further includes: updating the predicted value of the target pixel point by using a second preset algorithm.
  • it may further include: adding the final inter prediction image and the residual image to obtain a reconstructed image of the current block.
  • the residual information and the predicted image are added to obtain a reconstructed image of the current block; if the current block has no residual, the predicted image is a reconstructed image of the current block.
  • the above process may use the same method as HEVC or VTM, and may also use other motion compensation and image reconstruction methods without limitation.
  • The technical effect of the embodiments of the present application is to improve coding compression efficiency, providing a PSNR/BD-rate gain of 0.5%.
  • spatial filtering is performed on the inter-predicted pixels, which improves coding efficiency.
  • FIG. 15 is a schematic block diagram of an embodiment of the present application, and relates to a decoding apparatus 1500 for predicting motion information, including:
  • An analysis module 1501 is configured to analyze a code stream to obtain motion information of an image block to be processed; a compensation module 1502 is configured to perform motion compensation on the image block to be processed based on the motion information to obtain the image block to be processed A prediction block; a calculation module 1503, configured to perform weighted calculation on the reconstructed value of one or more reference pixel points and the prediction value of the target pixel point in the image block to be processed to update the prediction of the target pixel point Value, wherein the reference pixel point and the target pixel point have a preset spatial domain position relationship.
  • The one or more reference pixels include a reconstructed pixel that has the same abscissa as the target pixel and a preset ordinate difference from it, or a reconstructed pixel that has the same ordinate as the target pixel and a preset abscissa difference from it.
  • the predicted value of the target pixel is updated according to the following formula:
  • The coordinates of the target pixel are (xP, yP), the coordinates of the top-left pixel in the image block to be processed are (xN, yN), predP(xP, yP) is the predicted value of the target pixel before the update, predQ(xP, yP) is the updated predicted value of the target pixel, recon(xN-M1, yP) and recon(xP, yN-M2) are the reconstructed values of the reference pixels located at coordinate positions (xN-M1, yP) and (xP, yN-M2), w1, w2, w3, w4, w5, and w6 are preset constants, and M1 and M2 are preset positive integers.
  • the predicted value of the target pixel is updated according to the following formula:
  • The coordinates of the target pixel are (xP, yP), the coordinates of the top-left pixel in the image block to be processed are (xN, yN), predP(xP, yP) is the predicted value of the target pixel before the update, predQ(xP, yP) is the updated predicted value of the target pixel, recon(xN-M1, yP), recon(xN-M2, yP), recon(xP, yN-M3), and recon(xP, yN-M4) are the reconstructed values of the reference pixels located at coordinate positions (xN-M1, yP), (xN-M2, yP), (xP, yN-M3), and (xP, yN-M4), w1 through w11 are preset constants, and M1, M2, M3, and M4 are preset positive integers.
  • the one or more reference pixel points include one or more of the following pixel points: having the same abscissa with the target pixel point and the upper edge of the image block to be processed Adjacent reconstructed pixels; or reconstructed pixels having the same ordinate as the target pixel and adjacent to the left edge of the image block to be processed; or the image to be processed A reconstructed pixel point in the upper right corner of the block; or a reconstructed pixel point in the lower left corner of the image block to be processed; or a reconstructed pixel point in the upper left corner of the image block to be processed.
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = (w1 * predP(xP, yP) + w2 * predP1(xP, yP) + ((w1 + w2) / 2)) / (w1 + w2)
  • predH(xP, yP) = ((nTbW-1-xP) * p(-1, yP) + (xP+1) * p(nTbW, -1)) << Log2(nTbH). The coordinates of the target pixel are (xP, yP), and the coordinates of the top-left pixel in the image block to be processed are (0, 0).
  • predP (xP, yP) is the predicted value of the target pixel point before update
  • predQ (xP, yP) is the updated predicted value of the target pixel
  • w1 and w2 are preset constants, and nTbW and nTbH are the width and height of the image block to be processed.
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = (w1 * predP(xP, yP) + w2 * predV(xP, yP) + w3 * predH(xP, yP) + ((w1 + w2 + w3) / 2)) / (w1 + w2 + w3)
  • predV(xP, yP) = ((nTbH-1-yP) * p(xP, -1) + (yP+1) * p(-1, nTbH) + nTbH/2) >> Log2(nTbH)
  • predH(xP, yP) = ((nTbW-1-xP) * p(-1, yP) + (xP+1) * p(nTbW, -1) + nTbW/2) >> Log2(nTbW)
  • the coordinates of the target pixel point are (xP, yP)
  • the coordinates of the upper left pixel point in the image block to be processed are (0, 0)
  • predP(xP, yP) is the predicted value of the target pixel before the update, and predQ(xP, yP) is the updated predicted value of the target pixel
  • p(xP, -1), p(-1, nTbH), p(-1, yP), and p(nTbW, -1) are the reconstructed values of the reference pixels located at coordinate positions (xP, -1), (-1, nTbH), (-1, yP), and (nTbW, -1), w1, w2, and w3 are preset constants, and nTbW and nTbH are the width and height of the image block to be processed.
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = (((w1 * predP(xP, yP)) << (Log2(nTbW) + Log2(nTbH) + 1)) + w2 * predV(xP, yP) + w3 * predH(xP, yP) + (((w1 + w2 + w3) / 2) << (Log2(nTbW) + Log2(nTbH) + 1))) / ((w1 + w2 + w3) << (Log2(nTbW) + Log2(nTbH) + 1))
  • predV(xP, yP) = ((nTbH-1-yP) * p(xP, -1) + (yP+1) * p(-1, nTbH)) << Log2(nTbW)
  • predH(xP, yP) = ((nTbW-1-xP) * p(-1, yP) + (xP+1) * p(nTbW, -1)) << Log2(nTbH). The coordinates of the target pixel are (xP, yP), and the coordinates of the top-left pixel in the image block to be processed are (0, 0).
  • predP (xP, yP) is the predicted value of the target pixel point before update
  • predQ (xP, yP) is the updated predicted value of the target pixel
  • w1 and w2 are preset constants, and nTbW and nTbH are the width and height of the image block to be processed.
  • the sum of w1 and w2 is an n-th power of 2, where n is a non-negative integer.
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = clip1Cmp((refL(xP, yP) * wL(xP) + refT(xP, yP) * wT(yP) - p(-1, -1) * wTL(xP, yP) + (64 - wL(xP) - wT(yP) + wTL(xP, yP)) * predP(xP, yP) + 32) >> 6)
  • the predicted value of the target pixel is updated according to the following formula:
  • predQ(xP, yP) = clip1Cmp((refL(xP, yP) * wL(xP) + refT(xP, yP) * wT(yP) + (64 - wL(xP) - wT(yP)) * predP(xP, yP) + 32) >> 6)
  • the calculation module 1503 is further configured to: when the reference pixel point is located above the image block to be processed, a reconstruction value of the reference pixel point and the reference pixel The reconstructed values of the left and right adjacent pixel points are weighted; when the reference pixel point is located to the left of the image block to be processed, the reconstructed value of the reference pixel point and the The reconstructed values of the upper and lower neighboring pixels are weighted; the results of the weighted calculation are used to update the reconstructed values of the reference pixels.
  • the calculation module 1503 is further configured to: initially update the motion information by using a first preset algorithm; and correspondingly, the compensation module 1502 is specifically configured to: based on the initial update The motion information is used to perform motion compensation on the image block to be processed.
  • The calculation module 1503 is further configured to pre-update the prediction block by using a second preset algorithm; correspondingly, the calculation module 1503 is specifically configured to: perform weighted calculation on the reconstructed values of the one or more reference pixels and the pre-updated predicted value of the target pixel in the image block to be processed.
  • the calculation module 1503 is further configured to update the predicted value of the target pixel point by using a second preset algorithm.
  • the analysis module 1501 is further configured to: parse the code stream to obtain a prediction mode of the image block to be processed; determine that the prediction mode is a merge mode or skip Mode.
  • The analysis module 1501 is further configured to: parse the code stream to obtain the update discrimination flag information of the image block to be processed; and determine that the update discrimination flag information indicates to update the prediction block of the image block to be processed.
  • The calculation module 1503 is further configured to: obtain the preset update discrimination flag information of the image block to be processed; and determine that the update discrimination flag information indicates to update the prediction block of the image block to be processed.
  • FIG. 16 is a schematic structural block diagram of a decoding device 1600 for predicting motion information in an embodiment of the present application. Specifically, it includes a processor 1601 and a memory 1602 coupled to the processor; the processor 1601 is configured to perform the method of the embodiment shown in FIG. 13 and its various feasible implementations.
  • the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code, and executed by a hardware-based processing unit.
  • The computer-readable medium may include a computer-readable storage medium, which corresponds to a tangible medium such as a data storage medium, or a communication medium, which includes any medium that facilitates transfer of a computer program from one place to another, for example, according to a communication protocol.
  • computer-readable media may illustratively correspond to (1) non-transitory, tangible computer-readable storage media, or (2) a communication medium such as a signal or carrier wave.
  • a data storage medium may be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and / or data structures used to implement the techniques described in this application.
  • the computer program product may include a computer-readable medium.
  • The computer-readable storage medium may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage devices, magnetic disk storage devices or other magnetic storage devices, flash memory, or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
  • coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave
  • DSL digital subscriber line
  • coaxial cables, fiber optic cables, twisted pairs, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of media.
  • computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transitory, tangible storage media.
  • Magnetic disks and optical discs include compact discs (CDs), laser discs, optical discs, digital versatile discs (DVDs), flexible disks, and Blu-ray discs, where disks usually reproduce data magnetically, while discs reproduce data optically. Combinations of the above should also be included within the scope of computer-readable media.
  • DSPs digital signal processors
  • ASICs application specific integrated circuits
  • FPGAs field programmable gate arrays
  • processors may refer to any of the aforementioned structures or any other structure suitable for implementing the techniques described herein.
  • functionality described herein may be provided within dedicated hardware and / or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques may be fully implemented in one or more circuits or logic elements.
  • The techniques of this application can be implemented in a wide variety of devices or apparatuses, including wireless handsets, integrated circuits (ICs), or collections of ICs (eg, chipset).
  • ICs integrated circuits
  • collections of ICs eg, chipset
  • Various components, modules, or units are described in this application to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily need to be implemented by different hardware units. More specifically, as described above, various units may be combined in a codec hardware unit or by interoperable hardware units (including one or more processors as described above) combined with appropriate software and / or firmware To provide.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)

Abstract

An embodiment of this application provides an inter prediction method, including: parsing a code stream to obtain motion information of an image block to be processed; performing motion compensation on the image block to be processed based on the motion information to obtain a prediction block of the image block to be processed; and performing weighted calculation on the reconstructed values of one or more reference pixels and the predicted value of a target pixel in the image block to be processed, to update the predicted value of the target pixel, where the reference pixels and the target pixel have a preset spatial position relationship.

Description

一种视频编解码的方法与装置
本申请要求于2018年09月21日提交中国国家知识产权局、申请号为201811109950.2、申请名称为“一种视频编解码的方法与装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及视频编解码技术领域,尤其涉及一种帧间预测的方法和装置。
背景技术
数字视频技术可广泛应用于各种装置中,包括数字电视、数字直播系统、无线广播系统、个人数字助理(PDA)、笔记本电脑、平板计算机、电子书阅读器、数字相机、数字记录装置、数字媒体播放器、视频游戏装置、视频游戏控制台、蜂窝或卫星无线电电话、视频电话会议装置、视频流式发射装置及其类似者。数字视频装置实施视频解码技术,例如在由MPEG-2、MPEG-4、ITU-T H.263、ITU-T H.264/MPEG-4第10部分先进视频解码(AVC)、ITU-T H.265(也被称作高效率视频解码(HEVC))所定义的标准及这些标准的扩展中所描述的视频解码技术。数字视频装置可通过实施这些视频解码技术来更有效地发送、接收、编码、解码和/或存储数字视频信息。
视频压缩技术执行空间(图像内)预测和/或时间(图像间)预测,以减少或移除视频序列中固有的冗余信息。对于基于块的视频解码,可将视频块分割成视频块,视频块还可被称作树块、编码单元/解码单元(coding unit,CU)或编码节点/解码节点。使用位于同一图像中的相邻块中的参考样本的空间预测来编码图像的帧内解码(I)条带中的视频块。图像的帧间解码(P或B)条带中的视频块可使用位于同一图像中的相邻块中的参考样本的空间预测或位于其它参考图像中的参考样本的时间预测。图像可被称作帧,且参考图像可被称作参考帧。
发明内容
本申请的第一方面提供了一种帧间预测方法,包括:解析码流,以获得待处理图像块的运动信息;基于所述运动信息对所述待处理图像块进行运动补偿,以获得所述待处理图像块的预测块;将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算,以更新所述目标像素点的预测值,其中,所述参考像素点与所述目标像素点具有预设的空域位置关系。
在第一方面的一种可行的实施方式中,所述一个或多个参考像素点包括与所述目标像素点具有相同横坐标且具有预设纵坐标差的已重构像素点,或者,与所述目标像素点具有相同纵坐标且具有预设横坐标差的已重构像素点。
在第一方面的一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
Figure PCTCN2018109233-appb-000001
其中,所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(xN,yN),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,recon(xN-M1,yP),recon(xP,yN-M2)分别为位于坐标位置(xN-M1,yP),(xP,yN-M2)的所述参考像素点的重构值,w1,w2,w3,w4,w5,w6为预设常数,M1,M2为预设正整数。
在第一方面的一种可行的实施方式中,R为2的n次方,其中,n为非负整数,R=w1+w2,或,R=w3+w4,或,R=w5+w6+w7。
在第一方面的一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
Figure PCTCN2018109233-appb-000002
其中,所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(xN,yN),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,recon(xN-M1,yP),recon(xN-M2,yP),recon(xP,yN-M3),recon(xP,yN-M4)分别为位于坐标位置(xN-M1,yP),(xN-M2,yP),(xP,yN-M3),(xP,yN-M4)的所述参考像素点的重构值,w1,w2,w3,w4,w5,w6,w7,w8,w9,w10,w11为预设常数,M1,M2,M3,M4为预设正整数。
在第一方面的一种可行的实施方式中,S为2的n次方,其中,n为非负整数,S=w1+w2+w3,或,S=w4+w5+w6,或,S=w7+w8+w9+w10+w11。
在第一方面的一种可行的实施方式中,所述一个或多个参考像素点包括以下像素点中的一个或多个:与所述目标像素点具有相同横坐标且与所述待处理图像块的上边缘相邻接的已重构像素点;或者,与所述目标像素点具有相同纵坐标且与所述待处理图像块的左边缘相邻接的已重构像素点;或者,所述待处理图像块的右上角的已重构像素点;或 者,所述待处理图像块的左下角的已重构像素点;或者,所述待处理图像块的左上角的已重构像素点。
在第一方面的一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:predQ(xP,yP)=(w1*predP(xP,yP)+w2*predP1(xP,yP)+((w1+w2)/2))/(w1+w2)
其中,predP1(xP,yP)=(predV(xP,yP)+predH(xP,yP)+nTbW*nTbH)>>(Log2(nTbW)+Log2(nTbH)+1),predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH))<<Log2(nTbW),
predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1))<<Log2(nTbH),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度。
在第一方面的一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=(w1*predP(xP,yP)
                 +w2*predV(xP,yP)
                 +w3*predH(xP,yP)+((w1+w2+w3)/2))/(w1+w2+w3)
其中,
predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH)+nTbH/2)>>Log2(nTbH),
predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1)+nTbW/2)>>Log2(nTbW),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2,w3为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度。
在第一方面的一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=(((w1*predP(xP,yP))<<(Log2(nTbW)+Log2(nTbH)+1))
                  +w2*predV(xP,yP)
                  +w3*predH(xP,yP)
                  +(((w1+w2+w3)/2)<<(Log2(nTbW)+Log2(nTbH)+1)))
                  /(((w1+w2+w3)<<(Log2(nTbW)+Log2(nTbH)+1)))
其中,
predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH))<<Log2(nTbW),
predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1))<<Log2(nTbH),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度。
在第一方面的一种可行的实施方式中,w1和w2的和为2的n次方,其中,n为非负 整数。
在第一方面的一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=clip1Cmp((refL(xP,yP)*wL(xP)+refT(xP,yP)*wT(yP)-p(-1,-1)*wTL(xP,yP)+(64-wL(xP)-wT(yP)+wTL(xP,yP))*predP(xP,yP)+32)>>6),
其中,refL(xP,yP)=p(-1,yP),refT(xP,yP)=p(xP,-1),wT(yP)=32>>((yP<<1)>>nScale),wL(xP)=32>>((xP<<1)>>nScale),wTL(xP,yP)=((wL(xP)>>4)+(wT(yP)>>4)),nScale=((Log2(nTbW)+Log2(nTbH)-2)>>2),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,yP),p(-1,-1)分别为位于坐标位置(xP,-1),(-1,yP),(-1,-1)的所述参考像素点的重构值,nTbW和nTbH为所述待处理图像块的宽度和高度,clip1Cmp为钳位操作。
在第一方面的一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=clip1Cmp((refL(xP,yP)*wL(xP)+refT(xP,yP)*wT(yP)+(64-wL(xP)-wT(yP))*predP(xP,yP)+32)>>6),
其中,refL(xP,yP)=p(-1,yP),refT(xP,yP)=p(xP,-1),wT(yP)=32>>((yP<<1)>>nScale),wL(xP)=32>>((xP<<1)>>nScale),nScale=((Log2(nTbW)+Log2(nTbH)-2)>>2),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,yP)分别为位于坐标位置(xP,-1),(-1,yP)的所述参考像素点的重构值,nTbW和nTbH为所述待处理图像块的宽度和高度,clip1Cmp为钳位操作。
在第一方面的一种可行的实施方式中,在所述将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算之前,包括:当所述参考像素点位于所述待处理图像块的上方时,对所述参考像素点的重构值和所述参考像素点的左右相邻像素点的重构值进行加权计算;当所述参考像素点位于所述待处理图像块的左方时,对所述参考像素点的重构值和所述参考像素点的上下相邻像素点的重构值进行加权计算;采用所述加权计算的结果更新所述参考像素点的重构值。
在第一方面的一种可行的实施方式中,在所述基于所述运动信息对所述待处理图像块进行运动补偿之前,还包括:通过第一预设算法对所述运动信息进行初始更新;对应的,所述基于所述运动信息对所述待处理图像块进行运动补偿,包括:基于所述初始更新后的运动信息对所述待处理图像块进行运动补偿。
在第一方面的一种可行的实施方式中,在所述获得所述待处理图像块的预测块之后,还包括:通过第二预设算法对所述预测块进行预更新;对应的,所述将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算,包括:将所述一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预更新后的预测值进行加权计算。
在第一方面的一种可行的实施方式中,在所述将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算,以更新所述目标像素点的预测值之后,还包括:通过第二预设算法对所述目标像素点的预测值进行更新。
在第一方面的一种可行的实施方式中,在所述将一个或多个参考像素点的重构值和所 述待处理图像块中的目标像素点的预测值进行加权计算之前,还包括:解析所述码流,以获得所述待处理图像块的预测模式;确定所述预测模式为融合模式(merge)或跳过模式(skip)。
在第一方面的一种可行的实施方式中,在所述将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算之前,还包括:解析所述码流,以获得所述待处理图像块的更新判别标识信息;确定所述更新判别标识信息指示更新所述待处理图像块的预测块。
在第一方面的一种可行的实施方式中,在所述将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算之前,还包括:获取所述待处理图像块的预设的更新判别标识信息;确定所述更新判别标识信息指示更新所述待处理图像块的预测块。
本申请的第二方面提供了一种帧间预测装置,包括:解析模块,用于解析码流,以获得待处理图像块的运动信息;补偿模块,用于基于所述运动信息对所述待处理图像块进行运动补偿,以获得所述待处理图像块的预测块;计算模块,用于将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算,以更新所述目标像素点的预测值,其中,所述参考像素点与所述目标像素点具有预设的空域位置关系。
在第二方面的一种可行的实施方式中,所述一个或多个参考像素点包括与所述目标像素点具有相同横坐标且具有预设纵坐标差的已重构像素点,或者,与所述目标像素点具有相同纵坐标且具有预设横坐标差的已重构像素点。
在第二方面的一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
Figure PCTCN2018109233-appb-000003
其中,所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(xN,yN),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,recon(xN-M1,yP),recon(xP,yN-M2)分别为位于坐标位置(xN-M1,yP),(xP,yN-M2)的所述参考像素点的重构值,w1,w2,w3,w4,w5,w6为预设常数,M1,M2为预设正整数。
在第二方面的一种可行的实施方式中,R为2的n次方,其中,n为非负整数,R=w1+w2,或,R=w3+w4,或,R=w5+w6+w7。
在第二方面的一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
Figure PCTCN2018109233-appb-000004
其中,所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(xN,yN),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,recon(xN-M1,yP),recon(xN-M2,yP),recon(xP,yN-M3),recon(xP,yN-M4)分别为位于坐标位置(xN-M1,yP),(xN-M2,yP),(xP,yN-M3),(xP,yN-M4)的所述参考像素点的重构值,w1,w2,w3,w4,w5,w6,w7,w8,w9,w10,w11为预设常数,M1,M2,M3,M4为预设正整数。
在第二方面的一种可行的实施方式中,S为2的n次方,其中,n为非负整数,S=w1+w2+w3,或,S=w4+w5+w6,或,S=w7+w8+w9+w10+w11。
在第二方面的一种可行的实施方式中,所述一个或多个参考像素点包括以下像素点中的一个或多个:与所述目标像素点具有相同横坐标且与所述待处理图像块的上边缘相邻接的已重构像素点;或者,与所述目标像素点具有相同纵坐标且与所述待处理图像块的左边缘相邻接的已重构像素点;或者,所述待处理图像块的右上角的已重构像素点;或者,所述待处理图像块的左下角的已重构像素点;或者,所述待处理图像块的左上角的已重构像素点。
在第二方面的一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:predQ(xP,yP)=(w1*predP(xP,yP)+w2*predP1(xP,yP)+((w1+w2)/2))/(w1+w2)
其中,predP1(xP,yP)=(predV(xP,yP)+predH(xP,yP)+nTbW*nTbH)>>(Log2(nTbW)+Log2(nTbH)+1),predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH))<<Log2(nTbW),
predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1))<<Log2(nTbH),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度。
在第二方面的一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=(w1*predP(xP,yP)
                  +w2*predV(xP,yP)
                  +w3*predH(xP,yP)+((w1+w2+w3)/2))/(w1+w2+w3)
其中,
predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH)+nTbH/2)>>Log2(nTbH),
predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1)+nTbW/2)>>Log2(nTbW),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2,w3为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度。
在第二方面的一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=(((w1*predP(xP,yP))<<(Log2(nTbW)+Log2(nTbH)+1))
                   +w2*predV(xP,yP)
                   +w3*predH(xP,yP)
                   +(((w1+w2+w3)/2)<<(Log2(nTbW)+Log2(nTbH)+1)))
                   /(((w1+w2+w3)<<(Log2(nTbW)+Log2(nTbH)+1)))
其中,
predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH))<<Log2(nTbW),
predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1))<<Log2(nTbH),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度。
在第二方面的一种可行的实施方式中,w1和w2的和为2的n次方,其中,n为非负整数。
在第二方面的一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=clip1Cmp((refL(xP,yP)*wL(xP)+refT(xP,yP)*wT(yP)-p(-1,-1)*wTL(xP,yP)+(64-wL(xP)-wT(yP)+wTL(xP,yP))*predP(xP,yP)+32)>>6),
其中,refL(xP,yP)=p(-1,yP),refT(xP,yP)=p(xP,-1),wT(yP)=32>>((yP<<1)>>nScale),wL(xP)=32>>((xP<<1)>>nScale),wTL(xP,yP)=((wL(xP)>>4)+(wT(yP)>>4)),nScale=((Log2(nTbW)+Log2(nTbH)-2)>>2),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,yP),p(-1,-1)分别为位于坐标位置(xP,-1),(-1,yP),(-1,-1)的所述参考像素点的重构值,nTbW和nTbH为所述待处理图像块的宽度和高度,clip1Cmp为钳位操作。
在第二方面的一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=clip1Cmp((refL(xP,yP)*wL(xP)+refT(xP,yP)*wT(yP)+(64-wL(xP)-w  T(yP))*predP(xP,yP)+32)>>6),
其中,refL(xP,yP)=p(-1,yP),refT(xP,yP)=p(xP,-1),wT(yP)=32>>((yP<<1)>>nScale),wL(xP)=32>>((xP<<1)>>nScale),nScale=((Log2(nTbW)+Log2(nTbH)-2)>>2),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,yP)分别为位于坐标位置(xP,-1),(-1,yP)的所述参考像素点的重构值,nTbW和nTbH为所述待处理图像块的宽度和高度,clip1Cmp为钳位操作。
在第二方面的一种可行的实施方式中,所述计算模块还用于:当所述参考像素点位于所述待处理图像块的上方时,对所述参考像素点的重构值和所述参考像素点的左右相邻像素点的重构值进行加权计算;当所述参考像素点位于所述待处理图像块的左方时,对所述参考像素点的重构值和所述参考像素点的上下相邻像素点的重构值进行加权计算;采用所述加权计算的结果更新所述参考像素点的重构值。
在第二方面的一种可行的实施方式中,所述计算模块还用于:通过第一预设算法对所述运动信息进行初始更新;对应的,所述补偿模块具体用于:基于所述初始更新后的运动信息对所述待处理图像块进行运动补偿。
在第二方面的一种可行的实施方式中,所述计算模块还用于:通过第二预设算法对所述预测块进行预更新;对应的,所述计算模块具体用于:将所述一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预更新后的预测值进行加权计算。
在第二方面的一种可行的实施方式中,所述计算模块还用于:通过第二预设算法对所述目标像素点的预测值进行更新。
在第二方面的一种可行的实施方式中,所述解析模块还用于:解析所述码流,以获得所述待处理图像块的预测模式;确定所述预测模式为融合模式(merge)或跳过模式(skip)。
在第二方面的一种可行的实施方式中,所述解析模块还用于:解析所述码流,以获得所述待处理图像块的更新判别标识信息;确定所述更新判别标识信息指示更新所述待处理图像块的预测块。
在第二方面的一种可行的实施方式中,所述计算模块还用于:获取所述待处理图像块的预设的更新判别标识信息;确定所述更新判别标识信息指示更新所述待处理图像块的预测块。
在本申请的第三方面,提供了提供了一种运动信息的预测设备,包括:处理器和耦合于所述处理器的存储器;所述处理器用于执行上述第一方面所述的方法。
在本申请的第四方面,提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当所述指令在计算机上运行时,使得计算机执行上述第一方面所述的方法。
在本申请的第五方面,提供了一种包含指令的计算机程序产品,当所述指令在计算机上运行时,使得计算机执行上述第一方面所述的方法。
应理解,本申请的第二至五方面与本申请的第一方面的技术方案一致,各方面及对应的可实施的设计方式所取得的有益效果相似,不再赘述。
附图说明
为了更清楚地说明本申请实施例或背景技术中的技术方案,下面将对本申请实施例 或背景技术中所需要使用的附图进行说明。
图1为示例性的可通过配置以用于本申请实施例的一种视频编码及解码的系统框图;
图2为示例性的可通过配置以用于本申请实施例的一种视频编码器的系统框图;
图3为示例性的可通过配置以用于本申请实施例的一种视频解码器的系统框图;
图4为示例性的可通过配置以用于本申请实施例的一种帧间预测模块的框图;
图5为示例性的一种合并预测模式的实施流程图;
图6为示例性的一种高级运动矢量预测模式的实施流程图;
图7为示例性的可通过配置以用于本申请实施例的一种由视频解码器执行的运动补偿的实施流程图;
图8为示例性的一种编码单元及与其关联的相邻位置图像块的示意图;
图9为示例性的一种构建候选预测运动矢量列表的实施流程图;
图10为示例性的一种将经过组合的候选运动矢量添加到合并模式候选预测运动矢量列表的实施示意图;
图11为示例性的一种将经过缩放的候选运动矢量添加到合并模式候选预测运动矢量列表的实施示意图;
图12为示例性的一种将零运动矢量添加到合并模式候选预测运动矢量列表的实施示意图;
图13为本申请实施例中帧间预测方法的一个示意性流程图;
图14为示例性的一种候选运动矢量来源的示意图;
图15为本申请实施例中帧间预测装置的一个示意性框图;
图16为本申请实施例中帧间预测设备的一个示意性框图;
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。
图1为本申请实施例中所描述的一种实例的视频译码系统1的框图。如本文所使用,术语“视频译码器”一般是指视频编码器和视频解码器两者。在本申请中,术语“视频译码”或“译码”可一般地指代视频编码或视频解码。视频译码系统1的视频编码器100和视频解码器200用于根据本申请提出的多种新的帧间预测模式中的任一种所描述的各种方法实例来预测当前经译码图像块或其子块的运动信息,例如运动矢量,使得预测出的运动矢量最大程度上接近使用运动估算方法得到的运动矢量,从而编码时无需传送运动矢量差值,从而进一步的改善编解码性能。
如图1中所示,视频译码系统1包含源装置10和目的地装置20。源装置10产生经编码视频数据。因此,源装置10可被称为视频编码装置。目的地装置20可对由源装置10所产生的经编码的视频数据进行解码。因此,目的地装置20可被称为视频解码装置。源装置10、目的地装置20或两个的各种实施方案可包含一或多个处理器以及耦合到所述一或多个处理器的存储器。所述存储器可包含但不限于RAM、ROM、EEPROM、快闪存储器或可用于以可由计算机存取的指令或数据结构的形式存储所要的程序代码的任何其它媒体,如本文所描述。
源装置10和目的地装置20可以包括各种装置,包含桌上型计算机、移动计算装置、 笔记型(例如,膝上型)计算机、平板计算机、机顶盒、例如所谓的“智能”电话等电话手持机、电视机、相机、显示装置、数字媒体播放器、视频游戏控制台、车载计算机或其类似者。
目的地装置20可经由链路30从源装置10接收经编码视频数据。链路30可包括能够将经编码视频数据从源装置10移动到目的地装置20的一或多个媒体或装置。在一个实例中,链路30可包括使得源装置10能够实时将经编码视频数据直接发射到目的地装置20的一或多个通信媒体。在此实例中,源装置10可根据通信标准(例如无线通信协议)来调制经编码视频数据,且可将经调制的视频数据发射到目的地装置20。所述一或多个通信媒体可包含无线和/或有线通信媒体,例如射频(RF)频谱或一或多个物理传输线。所述一或多个通信媒体可形成基于分组的网络的一部分,基于分组的网络例如为局域网、广域网或全球网络(例如,因特网)。所述一或多个通信媒体可包含路由器、交换器、基站或促进从源装置10到目的地装置20的通信的其它设备。
在另一实例中,可将经编码数据从输出接口140输出到存储装置40。类似地,可通过输入接口240从存储装置40存取经编码数据。存储装置40可包含多种分布式或本地存取的数据存储媒体中的任一者,例如硬盘驱动器、蓝光光盘、DVD、CD-ROM、快闪存储器、易失性或非易失性存储器,或用于存储经编码视频数据的任何其它合适的数字存储媒体。
在另一实例中,存储装置40可对应于文件服务器或可保持由源装置10产生的经编码视频的另一中间存储装置。目的地装置20可经由流式传输或下载从存储装置40存取所存储的视频数据。文件服务器可为任何类型的能够存储经编码的视频数据并且将经编码的视频数据发射到目的地装置20的服务器。实例文件服务器包含网络服务器(例如,用于网站)、FTP服务器、网络附接式存储(NAS)装置或本地磁盘驱动器。目的地装置20可通过任何标准数据连接(包含因特网连接)来存取经编码视频数据。这可包含无线信道(例如,Wi-Fi连接)、有线连接(例如,DSL、电缆调制解调器等),或适合于存取存储在文件服务器上的经编码视频数据的两者的组合。经编码视频数据从存储装置40的传输可为流式传输、下载传输或两者的组合。
本申请的运动矢量预测技术可应用于视频编解码以支持多种多媒体应用,例如空中电视广播、有线电视发射、卫星电视发射、串流视频发射(例如,经由因特网)、用于存储于数据存储媒体上的视频数据的编码、存储在数据存储媒体上的视频数据的解码,或其它应用。在一些实例中,视频译码系统1可用于支持单向或双向视频传输以支持例如视频流式传输、视频回放、视频广播和/或视频电话等应用。
图1中所说明的视频译码系统1仅为实例,并且本申请的技术可适用于未必包含编码装置与解码装置之间的任何数据通信的视频译码设置(例如,视频编码或视频解码)。在其它实例中,数据从本地存储器检索、在网络上流式传输等等。视频编码装置可对数据进行编码并且将数据存储到存储器,和/或视频解码装置可从存储器检索数据并且对数据进行解码。在许多实例中,由并不彼此通信而是仅编码数据到存储器和/或从存储器检索数据且解码数据的装置执行编码和解码。
在图1的实例中,源装置10包含视频源120、视频编码器100和输出接口140。在一些实例中,输出接口140可包含调节器/解调器(调制解调器)和/或发射器。视频源120可包括视频捕获装置(例如,摄像机)、含有先前捕获的视频数据的视频存档、用以从视频内容提供者接收视频数据的视频馈入接口,和/或用于产生视频数据的计算机图形系统,或视频数据的此些来源的组合。
视频编码器100可对来自视频源120的视频数据进行编码。在一些实例中,源装置10经由输出接口140将经编码视频数据直接发射到目的地装置20。在其它实例中,经编码视频数据还可存储到存储装置40上,供目的地装置20以后存取来用于解码和/或播放。
在图1的实例中,目的地装置20包含输入接口240、视频解码器200和显示装置220。在一些实例中,输入接口240包含接收器和/或调制解调器。输入接口240可经由链路30和/或从存储装置40接收经编码视频数据。显示装置220可与目的地装置20集成或可在目的地装置20外部。一般来说,显示装置220显示经解码视频数据。显示装置220可包括多种显示装置,例如,液晶显示器(LCD)、等离子显示器、有机发光二极管(OLED)显示器或其它类型的显示装置。
尽管图1中未图示,但在一些方面,视频编码器100和视频解码器200可各自与音频编码器和解码器集成,且可包含适当的多路复用器-多路分用器单元或其它硬件和软件,以处置共同数据流或单独数据流中的音频和视频两者的编码。在一些实例中,如果适用的话,那么MUX-DEMUX单元可符合ITU H.223多路复用器协议,或例如用户数据报协议(UDP)等其它协议。
视频编码器100和视频解码器200各自可实施为例如以下各项的多种电路中的任一者:一或多个微处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现场可编程门阵列(FPGA)、离散逻辑、硬件或其任何组合。如果部分地以软件来实施本申请,那么装置可将用于软件的指令存储在合适的非易失性计算机可读存储媒体中,且可使用一或多个处理器在硬件中执行所述指令从而实施本申请技术。前述内容(包含硬件、软件、硬件与软件的组合等)中的任一者可被视为一或多个处理器。视频编码器100和视频解码器200中的每一者可包含在一或多个编码器或解码器中,所述编码器或解码器中的任一者可集成为相应装置中的组合编码器/解码器(编码解码器)的一部分。
本申请可大体上将视频编码器100称为将某些信息“发信号通知”或“发射”到例如视频解码器200的另一装置。术语“发信号通知”或“发射”可大体上指代用以对经压缩视频数据进行解码的语法元素和/或其它数据的传送。此传送可实时或几乎实时地发生。替代地,此通信可经过一段时间后发生,例如可在编码时在经编码码流中将语法元素存储到计算机可读存储媒体时发生,解码装置接着可在所述语法元素存储到此媒体之后的任何时间检索所述语法元素。
JCT-VC开发了H.265(HEVC)标准。HEVC标准化基于称作HEVC测试模型(HM)的视频解码装置的演进模型。H.265的最新标准文档可从http://www.itu.int/rec/T-REC-H.265获得,最新版本的标准文档为H.265(12/16),该标准文档以全文引用的方式并入本文中。HM假设视频解码装置相对于ITU-T H.264/AVC的现有算法具有若干额外能力。例如,H.264提供9种帧内预测编码模式,而HM可提供多达35种帧内预测编码模式。
JVET致力于开发H.266标准。H.266标准化的过程基于称作H.266测试模型的视频解码装置的演进模型。H.266的算法描述可从http://phenix.int-evry.fr/jvet获得,其中最新的算法描述包含于JVET-F1001-v2中,该算法描述文档以全文引用的方式并入本文中。同时,可从https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/获得JEM测试模型的参考软件,同样以全文引用的方式并入本文中。
一般来说,HM的工作模型描述可将视频帧或图像划分成包含亮度及色度样本两者的树块或最大编码单元(largest coding unit,LCU)的序列,LCU也被称为CTU。树块具有与H.264标准的宏块类似的目的。条带包含按解码次序的数个连续树块。可将视频帧或图 像分割成一个或多个条带。可根据四叉树将每一树块分裂成编码单元。例如,可将作为四叉树的根节点的树块分裂成四个子节点,且每一子节点可又为母节点且被分裂成另外四个子节点。作为四叉树的叶节点的最终不可分裂的子节点包括解码节点,例如,经解码视频块。与经解码码流相关联的语法数据可定义树块可分裂的最大次数,且也可定义解码节点的最小大小。
编码单元包含解码节点及预测块(prediction unit,PU)以及与解码节点相关联的变换单元(transformunit,TU)。CU的大小对应于解码节点的大小且形状必须为正方形。CU的大小的范围可为8×8像素直到最大64×64像素或更大的树块的大小。每一CU可含有一个或多个PU及一个或多个TU。例如,与CU相关联的语法数据可描述将CU分割成一个或多个PU的情形。分割模式在CU是被跳过或经直接模式编码、帧内预测模式编码或帧间预测模式编码的情形之间可为不同的。PU可经分割成形状为非正方形。例如,与CU相关联的语法数据也可描述根据四叉树将CU分割成一个或多个TU的情形。TU的形状可为正方形或非正方形。
HEVC标准允许根据TU进行变换,TU对于不同CU来说可为不同的。TU通常基于针对经分割LCU定义的给定CU内的PU的大小而设定大小,但情况可能并非总是如此。TU的大小通常与PU相同或小于PU。在一些可行的实施方式中,可使用称作“残余四叉树”(residual quadtree,RQT)的四叉树结构将对应于CU的残余样本再分成较小单元。RQT的叶节点可被称作TU。可变换与TU相关联的像素差值以产生变换系数,变换系数可被量化。
一般来说,PU包含与预测过程有关的数据。例如,在PU经帧内模式编码时,PU可包含描述PU的帧内预测模式的数据。作为另一可行的实施方式,在PU经帧间模式编码时,PU可包含界定PU的运动矢量的数据。例如,界定PU的运动矢量的数据可描述运动矢量的水平分量、运动矢量的垂直分量、运动矢量的分辨率(例如,四分之一像素精确度或八分之一像素精确度)、运动矢量所指向的参考图像,和/或运动矢量的参考图像列表(例如,列表0、列表1或列表C)。
一般来说,TU使用变换及量化过程。具有一个或多个PU的给定CU也可包含一个或多个TU。在预测之后,视频编码器100可计算对应于PU的残余值。残余值包括像素差值,像素差值可变换成变换系数、经量化且使用TU扫描以产生串行化变换系数以用于熵解码。本申请通常使用术语“视频块”来指CU的解码节点。在一些特定应用中,本申请也可使用术语“视频块”来指包含解码节点以及PU及TU的树块,例如,LCU或CU。
视频序列通常包含一系列视频帧或图像。图像群组(group of picture,GOP)示例性地包括一系列、一个或多个视频图像。GOP可在GOP的头信息中、图像中的一者或多者的头信息中或在别处包含语法数据,语法数据描述包含于GOP中的图像的数目。图像的每一条带可包含描述相应图像的编码模式的条带语法数据。视频编码器100通常对个别视频条带内的视频块进行操作以便编码视频数据。视频块可对应于CU内的解码节点。视频块可具有固定或变化的大小,且可根据指定解码标准而在大小上不同。
作为一种可行的实施方式,HM支持各种PU大小的预测。假定特定CU的大小为2N×2N,HM支持2N×2N或N×N的PU大小的帧内预测,及2N×2N、2N×N、N×2N或N×N的对称PU大小的帧间预测。HM也支持2N×nU、2N×nD、nL×2N及nR×2N的PU大小的帧间预测的不对称分割。在不对称分割中,CU的一方向未分割,而另一方向分割成25%及75%。对应于25%区段的CU的部分由“n”后跟着“上(Up)”、“下(Down)”、“左(Left)”或“右(Right)”的指示来指示。因此,例如,“2N×nU”指水平分割的2N×2NCU,其中2N×0.5NPU在上 部且2N×1.5NPU在底部。
在本申请中,“N×N”与“N乘N”可互换使用以指依照垂直维度及水平维度的视频块的像素尺寸,例如,16×16像素或16乘16像素。一般来说,16×16块将在垂直方向上具有16个像素(y=16),且在水平方向上具有16个像素(x=16)。同样地,N×N块一般在垂直方向上具有N个像素,且在水平方向上具有N个像素,其中N表示非负整数值。可将块中的像素排列成行及列。此外,块未必需要在水平方向上与在垂直方向上具有相同数目个像素。例如,块可包括N×M个像素,其中M未必等于N。
在使用CU的PU的帧内预测性或帧间预测性解码之后,视频编码器100可计算CU的TU的残余数据。PU可包括空间域(也称作像素域)中的像素数据,且TU可包括在将变换(例如,离散余弦变换(discrete cosine transform,DCT)、整数变换、小波变换或概念上类似的变换)应用于残余视频数据之后变换域中的系数。残余数据可对应于未经编码图像的像素与对应于PU的预测值之间的像素差。视频编码器100可形成包含CU的残余数据的TU,且接着变换TU以产生CU的变换系数。
在任何变换以产生变换系数之后,视频编码器100可执行变换系数的量化。量化示例性地指对系数进行量化以可能减少用以表示系数的数据的量从而提供进一步压缩的过程。量化过程可减少与系数中的一些或全部相关联的位深度。例如,可在量化期间将n位值降值舍位到m位值,其中n大于m。
JEM模型对视频图像的编码结构进行了进一步的改进,具体的,被称为“四叉树结合二叉树”(QTBT)的块编码结构被引入进来。QTBT结构摒弃了HEVC中的CU,PU,TU等概念,支持更灵活的CU划分形状,一个CU可以正方形,也可以是长方形。一个CTU首先进行四叉树划分,该四叉树的叶节点进一步进行二叉树划分。同时,在二叉树划分中存在两种划分模式,对称水平分割和对称竖直分割。二叉树的叶节点被称为CU,JEM的CU在预测和变换的过程中都不可以被进一步划分,也就是说JEM的CU,PU,TU具有相同的块大小。在现阶段的JEM中,CTU的最大尺寸为256×256亮度像素。
在一些可行的实施方式中,视频编码器100可利用预定义扫描次序来扫描经量化变换系数以产生可经熵编码的串行化向量。在其它可行的实施方式中,视频编码器100可执行自适应性扫描。在扫描经量化变换系数以形成一维向量之后,视频编码器100可根据上下文自适应性可变长度解码(CAVLC)、上下文自适应性二进制算术解码(CABAC)、基于语法的上下文自适应性二进制算术解码(SBAC)、概率区间分割熵(PIPE)解码或其他熵解码方法来熵解码一维向量。视频编码器100也可熵编码与经编码视频数据相关联的语法元素以供视频解码器200用于解码视频数据。
为了执行CABAC,视频编码器100可将上下文模型内的上下文指派给待传输的符号。上下文可与符号的相邻值是否为非零有关。为了执行CAVLC,视频编码器100可选择待传输的符号的可变长度码。可变长度解码(VLC)中的码字可经构建以使得相对较短码对应于可能性较大的符号,而较长码对应于可能性较小的符号。以这个方式,VLC的使用可相对于针对待传输的每一符号使用相等长度码字达成节省码率的目的。基于指派给符号的上下文可以确定CABAC中的概率。
在本申请实施例中,视频编码器可执行帧间预测以减少图像之间的时间冗余。如前文所描述,根据不同视频压缩编解码标准的规定,CU可具有一个或多个预测单元PU。换句话说,多个PU可属于CU,或者PU和CU的尺寸相同。在本文中当CU和PU尺寸相同时,CU的分割模式为不分割,或者即为分割为一个PU,且统一使用PU进行表述。当 视频编码器执行帧间预测时,视频编码器可用信号通知视频解码器用于PU的运动信息。示例性的,PU的运动信息可以包括:参考图像索引、运动矢量和预测方向标识。运动矢量可指示PU的图像块(也称视频块、像素块、像素集合等)与PU的参考块之间的位移。PU的参考块可为类似于PU的图像块的参考图像的一部分。参考块可定位于由参考图像索引和预测方向标识指示的参考图像中。
为了减少表示PU的运动信息所需要的编码比特的数目,视频编码器可根据合并预测模式或高级运动矢量预测模式过程产生用于PU中的每一者的候选预测运动矢量(Motion Vector,MV)列表。用于PU的候选预测运动矢量列表中的每一候选预测运动矢量可指示运动信息。由候选预测运动矢量列表中的一些候选预测运动矢量指示的运动信息可基于其它PU的运动信息。如果候选预测运动矢量指示指定空间候选预测运动矢量位置或时间候选预测运动矢量位置中的一者的运动信息,则本申请可将所述候选预测运动矢量称作“原始”候选预测运动矢量。举例来说,对于合并模式,在本文中也称为合并预测模式,可存在五个原始空间候选预测运动矢量位置和一个原始时间候选预测运动矢量位置。在一些实例中,视频编码器可通过组合来自不同原始候选预测运动矢量的部分运动矢量、修改原始候选预测运动矢量或仅插入零运动矢量作为候选预测运动矢量来产生额外候选预测运动矢量。这些额外候选预测运动矢量不被视为原始候选预测运动矢量且在本申请中可称作人工产生的候选预测运动矢量。
本申请的技术一般涉及用于在视频编码器处产生候选预测运动矢量列表的技术和用于在视频解码器处产生相同候选预测运动矢量列表的技术。视频编码器和视频解码器可通过实施用于构建候选预测运动矢量列表的相同技术来产生相同候选预测运动矢量列表。举例来说,视频编码器和视频解码器两者可构建具有相同数目的候选预测运动矢量(例如,五个候选预测运动矢量)的列表。视频编码器和解码器可首先考虑空间候选预测运动矢量(例如,同一图像中的相邻块),接着考虑时间候选预测运动矢量(例如,不同图像中的候选预测运动矢量),且最后可考虑人工产生的候选预测运动矢量直到将所要数目的候选预测运动矢量添加到列表为止。根据本申请的技术,可在候选预测运动矢量列表构建期间针对某些类型的候选预测运动矢量利用修剪操作以便从候选预测运动矢量列表移除重复,而对于其它类型的候选预测运动矢量,可能不使用修剪以便减小解码器复杂性。举例来说,对于空间候选预测运动矢量集合和对于时间候选预测运动矢量,可执行修剪操作以从候选预测运动矢量的列表排除具有重复运动信息的候选预测运动矢量。然而,当将人工产生的候选预测运动矢量添加到候选预测运动矢量的列表时,可在不对人工产生的候选预测运动矢量执行修剪操作的情况下添加人工产生的候选预测运动矢量。
在产生用于CU的PU的候选预测运动矢量列表之后,视频编码器可从候选预测运动矢量列表选择候选预测运动矢量且在码流中输出候选预测运动矢量索引。选定候选预测运动矢量可为具有产生最紧密地匹配正被解码的目标PU的预测子的运动矢量的候选预测运动矢量。候选预测运动矢量索引可指示在候选预测运动矢量列表中选定候选预测运动矢量的位置。视频编码器还可基于由PU的运动信息指示的参考块产生用于PU的预测性图像块。可基于由选定候选预测运动矢量指示的运动信息确定PU的运动信息。举例来说,在合并模式中,PU的运动信息可与由选定候选预测运动矢量指示的运动信息相同。在AMVP模式中,PU的运动信息可基于PU的运动矢量差和由选定候选预测运动矢量指示的运动信息确定。视频编码器可基于CU的PU的预测性图像块和用于CU的原始图像块产生用于CU的一或多个残余图像块。视频编码器可接着编码一或多个残余图像块且在码流 中输出一或多个残余图像块。
码流可包括识别PU的候选预测运动矢量列表中的选定候选预测运动矢量的数据。视频解码器可基于由PU的候选预测运动矢量列表中的选定候选预测运动矢量指示的运动信息确定PU的运动信息。视频解码器可基于PU的运动信息识别用于PU的一或多个参考块。在识别PU的一或多个参考块之后,视频解码器可基于PU的一或多个参考块产生用于PU的预测性图像块。视频解码器可基于用于CU的PU的预测性图像块和用于CU的一或多个残余图像块来重构用于CU的图像块。
为了易于解释,本申请可将位置或图像块描述为与CU或PU具有各种空间关系。此描述可解释为是指位置或图像块和与CU或PU相关联的图像块具有各种空间关系。此外,本申请可将视频解码器当前在解码的PU称作当前PU,也称为当前待处理图像块。本申请可将视频解码器当前在解码的CU称作当前CU。本申请可将视频解码器当前在解码的图像称作当前图像。应理解,本申请同时适用于PU和CU具有相同尺寸,或者PU即为CU的情况,统一使用PU来表示。
如前文简短地描述,视频编码器100可使用帧间预测以产生用于CU的PU的预测性图像块和运动信息。在许多例子中,给定PU的运动信息可能与一或多个附近PU(即,其图像块在空间上或时间上在给定PU的图像块附近的PU)的运动信息相同或类似。因为附近PU经常具有类似运动信息,所以视频编码器100可参考附近PU的运动信息来编码给定PU的运动信息。参考附近PU的运动信息来编码给定PU的运动信息可减少码流中指示给定PU的运动信息所需要的编码比特的数目。
视频编码器100可以各种方式参考附近PU的运动信息来编码给定PU的运动信息。举例来说,视频编码器100可指示给定PU的运动信息与附近PU的运动信息相同。本申请可使用合并模式来指代指示给定PU的运动信息与附近PU的运动信息相同或可从附近PU的运动信息导出。在另一可行的实施方式中,视频编码器100可计算用于给定PU的运动矢量差(Motion Vector Difference,MVD)。MVD指示给定PU的运动矢量与附近PU的运动矢量之间的差。视频编码器100可将MVD而非给定PU的运动矢量包括于给定PU的运动信息中。在码流中表示MVD比表示给定PU的运动矢量所需要的编码比特少。本申请可使用高级运动矢量预测模式指代通过使用MVD和识别候选者运动矢量的索引值来用信号通知解码端给定PU的运动信息。
为了使用合并模式或AMVP模式来用信号通知解码端给定PU的运动信息,视频编码器100可产生用于给定PU的候选预测运动矢量列表。候选预测运动矢量列表可包括一或多个候选预测运动矢量。用于给定PU的候选预测运动矢量列表中的候选预测运动矢量中的每一者可指定运动信息。由每一候选预测运动矢量指示的运动信息可包括运动矢量、参考图像索引和预测方向标识。候选预测运动矢量列表中的候选预测运动矢量可包括“原始”候选预测运动矢量,其中每一者指示不同于给定PU的PU内的指定候选预测运动矢量位置中的一者的运动信息。
在产生用于PU的候选预测运动矢量列表之后,视频编码器100可从用于PU的候选预测运动矢量列表选择候选预测运动矢量中的一者。举例来说,视频编码器可比较每一候选预测运动矢量与正被解码的PU且可选择具有所要码率-失真代价的候选预测运动矢量。视频编码器100可输出用于PU的候选预测运动矢量索引。候选预测运动矢量索引可识别选定候选预测运动矢量在候选预测运动矢量列表中的位置。
此外,视频编码器100可基于由PU的运动信息指示的参考块产生用于PU的预测性 图像块。可基于由用于PU的候选预测运动矢量列表中的选定候选预测运动矢量指示的运动信息确定PU的运动信息。举例来说,在合并模式中,PU的运动信息可与由选定候选预测运动矢量指示的运动信息相同。在AMVP模式中,可基于用于PU的运动矢量差和由选定候选预测运动矢量指示的运动信息确定PU的运动信息。视频编码器100可如前文所描述处理用于PU的预测性图像块。
当视频解码器200接收到码流时,视频解码器200可产生用于CU的PU中的每一者的候选预测运动矢量列表。由视频解码器200针对PU产生的候选预测运动矢量列表可与由视频编码器100针对PU产生的候选预测运动矢量列表相同。从码流中解析得到的语法元素可指示在PU的候选预测运动矢量列表中选定候选预测运动矢量的位置。在产生用于PU的候选预测运动矢量列表之后,视频解码器200可基于由PU的运动信息指示的一或多个参考块产生用于PU的预测性图像块。视频解码器200可基于由用于PU的候选预测运动矢量列表中的选定候选预测运动矢量指示的运动信息确定PU的运动信息。视频解码器200可基于用于PU的预测性图像块和用于CU的残余图像块重构用于CU的图像块。
应理解,在一种可行的实施方式中,在解码端,候选预测运动矢量列表的构建与从码流中解析选定候选预测运动矢量在候选预测运动矢量列表中的位置是相互独立,可以任意先后或者并行进行的。
在另一种可行的实施方式中,在解码端,首先从码流中解析选定候选预测运动矢量在候选预测运动矢量列表中的位置,根据解析出来的位置构建候选预测运动矢量列表,在该实施方式中,不需要构建全部的候选预测运动矢量列表,只需要构建到该解析出来的位置处的候选预测运动矢量列表,即能够确定该位置出的候选预测运动矢量即可。举例来说,当解析码流得出选定的候选预测运动矢量为候选预测运动矢量列表中索引为3的候选预测运动矢量时,仅需要构建从索引为0到索引为3的候选预测运动矢量列表,即可确定索引为3的候选预测运动矢量,可以达到减小复杂度,提高解码效率的技术效果。
图2为本申请实施例中所描述的一种实例的视频编码器100的框图。视频编码器100用于将视频输出到后处理实体41。后处理实体41表示可处理来自视频编码器100的经编码视频数据的视频实体的实例,例如媒体感知网络元件(MANE)或拼接/编辑装置。在一些情况下,后处理实体41可为网络实体的实例。在一些视频编码系统中,后处理实体41和视频编码器100可为单独装置的若干部分,而在其它情况下,相对于后处理实体41所描述的功能性可由包括视频编码器100的相同装置执行。在某一实例中,后处理实体41是图1的存储装置40的实例。
在图2的实例中,视频编码器100包括预测处理单元108、滤波器单元106、经解码图像缓冲器(DPB)107、求和器112、变换器101、量化器102和熵编码器103。预测处理单元108包括帧间预测器110和帧内预测器109。为了图像块重构,视频编码器100还包含反量化器104、反变换器105和求和器111。滤波器单元106既定表示一或多个环路滤波器,例如去块滤波器、自适应环路滤波器(ALF)和样本自适应偏移(SAO)滤波器。尽管在图2A中将滤波器单元106示出为环路内滤波器,但在其它实现方式下,可将滤波器单元106实施为环路后滤波器。在一种示例下,视频编码器100还可以包括视频数据存储器、分割单元(图中未示意)。
视频数据存储器可存储待由视频编码器100的组件编码的视频数据。可从视频源120获得存储在视频数据存储器中的视频数据。DPB 107可为参考图像存储器,其存储用于由 视频编码器100在帧内、帧间译码模式中对视频数据进行编码的参考视频数据。视频数据存储器和DPB 107可由多种存储器装置中的任一者形成,例如包含同步DRAM(SDRAM)的动态随机存取存储器(DRAM)、磁阻式RAM(MRAM)、电阻式RAM(RRAM),或其它类型的存储器装置。视频数据存储器和DPB 107可由同一存储器装置或单独存储器装置提供。在各种实例中,视频数据存储器可与视频编码器100的其它组件一起在芯片上,或相对于那些组件在芯片外。
如图2所示,视频编码器100接收视频数据,并将所述视频数据存储在视频数据存储器中。分割单元将所述视频数据分割成若干图像块,而且这些图像块可以被进一步分割为更小的块,例如基于四叉树结构或者二叉树结构的图像块分割。此分割还可包含分割成条带(slice)、片(tile)或其它较大单元。视频编码器100通常说明编码待编码的视频条带内的图像块的组件。所述条带可分成多个图像块(并且可能分成被称作片的图像块集合)。预测处理单元108可选择用于当前图像块的多个可能的译码模式中的一者,例如多个帧内译码模式中的一者或多个帧间译码模式中的一者。预测处理单元108可将所得经帧内、帧间译码的块提供给求和器112以产生残差块,且提供给求和器111以重构用作参考图像的经编码块。
预测处理单元108内的帧内预测器109可相对于与待编码当前块在相同帧或条带中的一或多个相邻块执行当前图像块的帧内预测性编码,以去除空间冗余。预测处理单元108内的帧间预测器110可相对于一或多个参考图像中的一或多个预测块执行当前图像块的帧间预测性编码以去除时间冗余。
具体的,帧间预测器110可用于确定用于编码当前图像块的帧间预测模式。举例来说,帧间预测器110可使用码率-失真分析来计算候选帧间预测模式集合中的各种帧间预测模式的码率-失真值,并从中选择具有最佳码率-失真特性的帧间预测模式。码率失真分析通常确定经编码块与经编码以产生所述经编码块的原始的未经编码块之间的失真(或误差)的量,以及用于产生经编码块的位码率(也就是说,位数目)。例如,帧间预测器110可确定候选帧间预测模式集合中编码所述当前图像块的码率失真代价最小的帧间预测模式为用于对当前图像块进行帧间预测的帧间预测模式。
帧间预测器110用于基于确定的帧间预测模式,预测当前图像块中一个或多个子块的运动信息(例如运动矢量),并利用当前图像块中一个或多个子块的运动信息(例如运动矢量)获取或产生当前图像块的预测块。帧间预测器110可在参考图像列表中的一者中定位所述运动向量指向的预测块。帧间预测器110还可产生与图像块和视频条带相关联的语法元素以供视频解码器200在对视频条带的图像块解码时使用。又或者,一种示例下,帧间预测器110利用每个子块的运动信息执行运动补偿过程,以生成每个子块的预测块,从而得到当前图像块的预测块;应当理解的是,这里的帧间预测器110执行运动估计和运动补偿过程。
具体的,在为当前图像块选择帧间预测模式之后,帧间预测器110可将指示当前图像块的所选帧间预测模式的信息提供到熵编码器103,以便于熵编码器103编码指示所选帧间预测模式的信息。
帧内预测器109可对当前图像块执行帧内预测。明确地说,帧内预测器109可确定用来编码当前块的帧内预测模式。举例来说,帧内预测器109可使用码率-失真分析来计算各种待测试的帧内预测模式的码率-失真值,并从待测试模式当中选择具有最佳码率-失真特性的帧内预测模式。在任何情况下,在为图像块选择帧内预测模式之后,帧内预测 器109可将指示当前图像块的所选帧内预测模式的信息提供到熵编码器103,以便熵编码器103编码指示所选帧内预测模式的信息。
在预测处理单元108经由帧间预测、帧内预测产生当前图像块的预测块之后,视频编码器100通过从待编码的当前图像块减去所述预测块来形成残差图像块。求和器112表示执行此减法运算的一或多个组件。所述残差块中的残差视频数据可包含在一或多个TU中,并应用于变换器101。变换器101使用例如离散余弦变换(DCT)或概念上类似的变换等变换将残差视频数据变换成残差变换系数。变换器101可将残差视频数据从像素值域转换到变换域,例如频域。
变换器101可将所得变换系数发送到量化器102。量化器102量化所述变换系数以进一步减小位码率。在一些实例中,量化器102可接着执行对包含经量化的变换系数的矩阵的扫描。或者,熵编码器103可执行扫描。
在量化之后,熵编码器103对经量化变换系数进行熵编码。举例来说,熵编码器103可执行上下文自适应可变长度编码(CAVLC)、上下文自适应二进制算术编码(CABAC)、基于语法的上下文自适应二进制算术编码(SBAC)、概率区间分割熵(PIPE)编码或另一熵编码方法或技术。在由熵编码器103熵编码之后,可将经编码码流发射到视频解码器200,或经存档以供稍后发射或由视频解码器200检索。熵编码器103还可对待编码的当前图像块的语法元素进行熵编码。
反量化器104和反变化器105分别应用逆量化和逆变换以在像素域中重构所述残差块,例如以供稍后用作参考图像的参考块。求和器111将经重构的残差块添加到由帧间预测器110或帧内预测器109产生的预测块,以产生经重构图像块。滤波器单元106可以适用于经重构图像块以减小失真,诸如方块效应(block artifacts)。然后,该经重构图像块作为参考块存储在经解码图像缓冲器107中,可由帧间预测器110用作参考块以对后续视频帧或图像中的块进行帧间预测。
应当理解的是,视频编码器100的其它的结构变化可用于编码视频流。例如,对于某些图像块或者图像帧,视频编码器100可以直接地量化残差信号而不需要经变换器101处理,相应地也不需要经反变换器105处理;或者,对于某些图像块或者图像帧,视频编码器100没有产生残差数据,相应地不需要经变换器101、量化器102、反量化器104和反变换器105处理;或者,视频编码器100可以将经重构图像块作为参考块直接地进行存储而不需要经滤波器单元106处理;或者,视频编码器100中量化器102和反量化器104可以合并在一起。
图3为本申请实施例中所描述的一种实例的视频解码器200的框图。在图3的实例中,视频解码器200包括熵解码器203、预测处理单元208、反量化器204、反变换器205、求和器211、滤波器单元206以及经解码图像缓冲器207。预测处理单元208可以包括帧间预测器210和帧内预测器209。在一些实例中,视频解码器200可执行大体上与相对于来自图2的视频编码器100描述的编码过程互逆的解码过程。
在解码过程中,视频解码器200从视频编码器100接收表示经编码视频条带的图像块和相关联的语法元素的经编码视频码流。视频解码器200可从网络实体42接收视频数据,可选的,还可以将所述视频数据存储在视频数据存储器(图中未示意)中。视频数据存储器可存储待由视频解码器200的组件解码的视频数据,例如经编码视频码流。存储在视频数据存储器中的视频数据,例如可从存储装置40、从相机等本地视频源、经由视频数据的有线或无线网络通信或者通过存取物理数据存储媒体而获得。视频数据存储器可作 为用于存储来自经编码视频码流的经编码视频数据的经解码图像缓冲器(CPB)。因此,尽管在图3中没有示意出视频数据存储器,但视频数据存储器和DPB 207可以是同一个的存储器,也可以是单独设置的存储器。视频数据存储器和DPB 207可由多种存储器装置中的任一者形成,例如:包含同步DRAM(SDRAM)的动态随机存取存储器(DRAM)、磁阻式RAM(MRAM)、电阻式RAM(RRAM),或其它类型的存储器装置。在各种实例中,视频数据存储器可与视频解码器200的其它组件一起集成在芯片上,或相对于那些组件设置在芯片外。
网络实体42可例如为服务器、MANE、视频编辑器/剪接器,或用于实施上文所描述的技术中的一或多者的其它此装置。网络实体42可包括或可不包括视频编码器,例如视频编码器100。在网络实体42将经编码视频码流发送到视频解码器200之前,网络实体42可实施本申请中描述的技术中的部分。在一些视频解码系统中,网络实体42和视频解码器200可为单独装置的部分,而在其它情况下,相对于网络实体42描述的功能性可由包括视频解码器200的相同装置执行。在一些情况下,网络实体42可为图1的存储装置40的实例。
视频解码器200的熵解码器203对码流进行熵解码以产生经量化的系数和一些语法元素。熵解码器203将语法元素转发到预测处理单元208。视频解码器200可接收在视频条带层级和/或图像块层级处的语法元素。
当视频条带被解码为经帧内解码(I)条带时,预测处理单元208的帧内预测器209可基于发信号通知的帧内预测模式和来自当前帧或图像的先前经解码块的数据而产生当前视频条带的图像块的预测块。当视频条带被解码为经帧间解码(即,B或P)条带时,预测处理单元208的帧间预测器210可基于从熵解码器203接收到的语法元素,确定用于对当前视频条带的当前图像块进行解码的帧间预测模式,基于确定的帧间预测模式,对所述当前图像块进行解码(例如执行帧间预测)。具体的,帧间预测器210可确定是否对当前视频条带的当前图像块采用新的帧间预测模式进行预测,如果语法元素指示采用新的帧间预测模式来对当前图像块进行预测,基于新的帧间预测模式(例如通过语法元素指定的一种新的帧间预测模式或默认的一种新的帧间预测模式)预测当前视频条带的当前图像块或当前图像块的子块的运动信息,从而通过运动补偿过程使用预测出的当前图像块或当前图像块的子块的运动信息来获取或生成当前图像块或当前图像块的子块的预测块。这里的运动信息可以包括参考图像信息和运动矢量,其中参考图像信息可以包括但不限于单向/双向预测信息,参考图像列表号和参考图像列表对应的参考图像索引。对于帧间预测,可从参考图像列表中的一者内的参考图像中的一者产生预测块。视频解码器200可基于存储在DPB 207中的参考图像来建构参考图像列表,即列表0和列表1。当前图像的参考帧索引可包含于参考帧列表0和列表1中的一或多者中。在一些实例中,可以是视频编码器100发信号通知指示是否采用新的帧间预测模式来解码特定块的特定语法元素,或者,也可以是发信号通知指示是否采用新的帧间预测模式,以及指示具体采用哪一种新的帧间预测模式来解码特定块的特定语法元素。应当理解的是,这里的帧间预测器210执行运动补偿过程。
反量化器204将在码流中提供且由熵解码器203解码的经量化变换系数逆量化,即去量化。逆量化过程可包括:使用由视频编码器100针对视频条带中的每个图像块计算的量化参数来确定应施加的量化程度以及同样地确定应施加的逆量化程度。反变换器205将逆变换应用于变换系数,例如逆DCT、逆整数变换或概念上类似的逆变换过程,以便产生像素域中的残差块。
在帧间预测器210产生用于当前图像块或当前图像块的子块的预测块之后,视频解码器200通过将来自反变换器205的残差块与由帧间预测器210产生的对应预测块求和以得到重建的块,即经解码图像块。求和器211表示执行此求和操作的组件。在需要时,还可使用环路滤波器(在解码环路中或在解码环路之后)来使像素转变平滑或者以其它方式改进视频质量。滤波器单元206可以表示一或多个环路滤波器,例如去块滤波器、自适应环路滤波器(ALF)以及样本自适应偏移(SAO)滤波器。尽管在图2B中将滤波器单元206示出为环路内滤波器,但在其它实现方式中,可将滤波器单元206实施为环路后滤波器。在一种示例下,滤波器单元206适用于重建块以减小块失真,并且该结果作为经解码视频流输出。并且,还可以将给定帧或图像中的经解码图像块存储在经解码图像缓冲器207中,经解码图像缓冲器207存储用于后续运动补偿的参考图像。经解码图像缓冲器207可为存储器的一部分,其还可以存储经解码视频,以供稍后在显示装置(例如图1的显示装置220)上呈现,或可与此类存储器分开。
应当理解的是,视频解码器200的其它结构变化可用于解码经编码视频码流。例如,视频解码器200可以不经滤波器单元206处理而生成输出视频流;或者,对于某些图像块或者图像帧,视频解码器200的熵解码器203没有解码出经量化的系数,相应地不需要经反量化器204和反变换器205处理。
如前文所注明,本申请的技术示例性地涉及帧间解码。应理解,本申请的技术可通过本申请中所描述的视频解码器中的任一者进行,视频解码器包含(例如)如关于图1到3所展示及描述的视频编码器100及视频解码器200。即,在一种可行的实施方式中,关于图2所描述的帧间预测器110可在视频数据的块的编码期间在执行帧间预测时执行下文中所描述的特定技术。在另一可行的实施方式中,关于图3所描述的帧间预测器210可在视频数据的块的解码期间在执行帧间预测时执行下文中所描述的特定技术。因此,对一般性“视频编码器”或“视频解码器”的引用可包含视频编码器100、视频解码器200或另一视频编码或编码单元。
图4为本申请实施例中帧间预测模块的一种示意性框图。帧间预测模块121,示例性的,可以包括运动估计单元42和运动补偿单元44。在不同的视频压缩编解码标准中,PU和CU的关系各有不同。帧间预测模块121可根据多个分割模式将当前CU分割为PU。举例来说,帧间预测模块121可根据2N×2N、2N×N、N×2N和N×N分割模式将当前CU分割为PU。在其他实施例中,当前CU即为当前PU,不作限定。
帧间预测模块121可对PU中的每一者执行整数运动估计(Integer Motion Estimation,IME)且接着执行分数运动估计(Fraction Motion Estimation,FME)。当帧间预测模块121对PU执行IME时,帧间预测模块121可在一个或多个参考图像中搜索用于PU的参考块。在找到用于PU的参考块之后,帧间预测模块121可产生以整数精度指示PU与用于PU的参考块之间的空间位移的运动矢量。当帧间预测模块121对PU执行FME时,帧间预测模块121可改进通过对PU执行IME而产生的运动矢量。通过对PU执行FME而产生的运动矢量可具有子整数精度(例如,1/2像素精度、1/4像素精度等)。在产生用于PU的运动矢量之后,帧间预测模块121可使用用于PU的运动矢量以产生用于PU的预测性图像块。
在帧间预测模块121使用AMVP模式用信号通知解码端PU的运动信息的一些可行的实施方式中,帧间预测模块121可产生用于PU的候选预测运动矢量列表。候选预测运动矢量列表可包括一个或多个原始候选预测运动矢量和从原始候选预测运动矢量导出的一 个或多个额外候选预测运动矢量。在产生用于PU的候选预测运动矢量列表之后,帧间预测模块121可从候选预测运动矢量列表选择候选预测运动矢量且产生用于PU的运动矢量差(MVD)。用于PU的MVD可指示由选定候选预测运动矢量指示的运动矢量与使用IME和FME针对PU产生的运动矢量之间的差。在这些可行的实施方式中,帧间预测模块121可输出识别选定候选预测运动矢量在候选预测运动矢量列表中的位置的候选预测运动矢量索引。帧间预测模块121还可输出PU的MVD。下文详细描述图6中,本申请实施例中高级运动矢量预测(AMVP)模式的一种可行的实施方式。
除了通过对PU执行IME和FME来产生用于PU的运动信息外,帧间预测模块121还可对PU中的每一者执行合并(Merge)操作。当帧间预测模块121对PU执行合并操作时,帧间预测模块121可产生用于PU的候选预测运动矢量列表。用于PU的候选预测运动矢量列表可包括一个或多个原始候选预测运动矢量和从原始候选预测运动矢量导出的一个或多个额外候选预测运动矢量。候选预测运动矢量列表中的原始候选预测运动矢量可包括一个或多个空间候选预测运动矢量和时间候选预测运动矢量。空间候选预测运动矢量可指示当前图像中的其它PU的运动信息。时间候选预测运动矢量可基于不同于当前图像的对应的PU的运动信息。时间候选预测运动矢量还可称作时间运动矢量预测(TMVP)。
在产生候选预测运动矢量列表之后,帧间预测模块121可从候选预测运动矢量列表选择候选预测运动矢量中的一个。帧间预测模块121可接着基于由PU的运动信息指示的参考块产生用于PU的预测性图像块。在合并模式中,PU的运动信息可与由选定候选预测运动矢量指示的运动信息相同。下文描述的图5说明Merge示例性的流程图。
在基于IME和FME产生用于PU的预测性图像块和基于合并操作产生用于PU的预测性图像块之后,帧间预测模块121可选择通过FME操作产生的预测性图像块或者通过合并操作产生的预测性图像块。在一些可行的实施方式中,帧间预测模块121可基于通过FME操作产生的预测性图像块和通过合并操作产生的预测性图像块的码率-失真代价分析来选择用于PU的预测性图像块。
在帧间预测模块121已选择通过根据分割模式中的每一者分割当前CU而产生的PU的预测性图像块之后(在一些实施方式中,编码树单元CTU划分为CU后,不会再进一步划分为更小的PU,此时PU等同于CU),帧间预测模块121可选择用于当前CU的分割模式。在一些实施方式中,帧间预测模块121可基于通过根据分割模式中的每一者分割当前CU而产生的PU的选定预测性图像块的码率-失真代价分析来选择用于当前CU的分割模式。帧间预测模块121可将与属于选定分割模式的PU相关联的预测性图像块输出到残差产生模块102。帧间预测模块121可将指示属于选定分割模式的PU的运动信息的语法元素输出到熵编码模块116。
在图4的示意图中,帧间预测模块121包括IME模块180A到180N(统称为“IME模块180”)、FME模块182A到182N(统称为“FME模块182”)、合并模块184A到184N(统称为“合并模块184”)、PU模式决策模块186A到186N(统称为“PU模式决策模块186”)和CU模式决策模块188(也可以包括执行从CTU到CU的模式决策过程)。
IME模块180、FME模块182和合并模块184可对当前CU的PU执行IME操作、FME操作和合并操作。图4的示意图中将帧间预测模块121说明为包括用于CU的每一分割模式的每一PU的单独IME模块180、FME模块182和合并模块184。在其它可行的实施方式中,帧间预测模块121不包括用于CU的每一分割模式的每一PU的单独IME模块180、FME模块182和合并模块184。
如图4的示意图中所说明,IME模块180A、FME模块182A和合并模块184A可对通过根据2N×2N分割模式分割CU而产生的PU执行IME操作、FME操作和合并操作。PU模式决策模块186A可选择由IME模块180A、FME模块182A和合并模块184A产生的预测性图像块中的一者。
IME模块180B、FME模块182B和合并模块184B可对通过根据N×2N分割模式分割CU而产生的左PU执行IME操作、FME操作和合并操作。PU模式决策模块186B可选择由IME模块180B、FME模块182B和合并模块184B产生的预测性图像块中的一者。
IME模块180C、FME模块182C和合并模块184C可对通过根据N×2N分割模式分割CU而产生的右PU执行IME操作、FME操作和合并操作。PU模式决策模块186C可选择由IME模块180C、FME模块182C和合并模块184C产生的预测性图像块中的一者。
IME模块180N、FME模块182N和合并模块184可对通过根据N×N分割模式分割CU而产生的右下PU执行IME操作、FME操作和合并操作。PU模式决策模块186N可选择由IME模块180N、FME模块182N和合并模块184N产生的预测性图像块中的一者。
PU模式决策模块186可基于多个可能预测性图像块的码率-失真代价分析选择预测性图像块,且选择针对给定解码情形提供最佳码率-失真代价的预测性图像块。示例性的,对于带宽受限的应用,PU模式决策模块186可偏向选择增加压缩比的预测性图像块,而对于其它应用,PU模式决策模块186可偏向选择增加经重建视频质量的预测性图像块。在PU模式决策模块186选择用于当前CU的PU的预测性图像块之后,CU模式决策模块188选择用于当前CU的分割模式且输出属于选定分割模式的PU的预测性图像块和运动信息。
图5为本申请实施例中合并模式的一种示例性流程图。视频编码器(例如视频编码器20)可执行合并操作200。在其它可行的实施方式中,视频编码器可执行不同于合并操作200的合并操作。举例来说,在其它可行的实施方式中,视频编码器可执行合并操作,其中视频编码器执行比合并操作200多、少的步骤或与合并操作200不同的步骤。在其它可行的实施方式中,视频编码器可以不同次序或并行地执行合并操作200的步骤。编码器还可对以跳跃(skip)模式编码的PU执行合并操作200。
在视频编码器开始合并操作200之后,视频编码器可产生用于当前PU的候选预测运动矢量列表(202)。视频编码器可以各种方式产生用于当前PU的候选预测运动矢量列表。举例来说,视频编码器可根据下文关于图8到图12描述的实例技术中的一者产生用于当前PU的候选预测运动矢量列表。
如前文所述,用于当前PU的候选预测运动矢量列表可包括时间候选预测运动矢量。时间候选预测运动矢量可指示时域对应(co-located)的PU的运动信息。co-located的PU可在空间上与当前PU处于图像帧中的同一个位置,但在参考图像而非当前图像中。本申请可将包括时域对应的PU的参考图像称作相关参考图像。本申请可将相关参考图像的参考图像索引称作相关参考图像索引。如前文所描述,当前图像可与一个或多个参考图像列表(例如,列表0、列表1等)相关联。参考图像索引可通过指示在参考图像某一个参考图像列表中的位置来指示参考图像。在一些可行的实施方式中,当前图像可与组合参考图像列表相关联。
在一些视频编码器中,相关参考图像索引为涵盖与当前PU相关联的参考索引源位置的PU的参考图像索引。在这些视频编码器中,与当前PU相关联的参考索引源位置邻接于当前PU左方或邻接于当前PU上方。在本申请中,如果与PU相关联的图像块包括特定 位置,则PU可“涵盖”所述特定位置。在这些视频编码器中,如果参考索引源位置不可用,则视频编码器可使用零的参考图像索引。
然而,可存在以下例子:与当前PU相关联的参考索引源位置在当前CU内。在这些例子中,如果PU在当前CU上方或左方,则涵盖与当前PU相关联的参考索引源位置的PU可被视为可用。然而,视频编码器可需要存取当前CU的另一PU的运动信息以便确定含有co-located PU的参考图像。因此,这些视频编码器可使用属于当前CU的PU的运动信息(即,参考图像索引)以产生用于当前PU的时间候选预测运动矢量。换句话说,这些视频编码器可使用属于当前CU的PU的运动信息产生时间候选预测运动矢量。因此,视频编码器可能不能并行地产生用于当前PU和涵盖与当前PU相关联的参考索引源位置的PU的候选预测运动矢量列表。
根据本申请的技术,视频编码器可在不参考任何其它PU的参考图像索引的情况下显式地设定相关参考图像索引。此可使得视频编码器能够并行地产生用于当前PU和当前CU的其它PU的候选预测运动矢量列表。因为视频编码器显式地设定相关参考图像索引,所以相关参考图像索引不基于当前CU的任何其它PU的运动信息。在视频编码器显式地设定相关参考图像索引的一些可行的实施方式中,视频编码器可始终将相关参考图像索引设定为固定的预定义预设参考图像索引(例如0)。以此方式,视频编码器可基于由预设参考图像索引指示的参考帧中的co-located PU的运动信息产生时间候选预测运动矢量,且可将时间候选预测运动矢量包括于当前CU的候选预测运动矢量列表中。
在视频编码器显式地设定相关参考图像索引的可行的实施方式中,视频编码器可显式地在语法结构(例如图像标头、条带标头、APS或另一语法结构)中用信号通知相关参考图像索引。在此可行的实施方式中,视频编码器可用信号通知解码端用于每一LCU(即CTU)、CU、PU、TU或其它类型的子块的相关参考图像索引。举例来说,视频编码器可用信号通知:用于CU的每一PU的相关参考图像索引等于“1”。
在一些可行的实施方式中,相关参考图像索引可经隐式地而非显式地设定。在这些可行的实施方式中,视频编码器可使用由涵盖当前CU外部的位置的PU的参考图像索引指示的参考图像中的PU的运动信息产生用于当前CU的PU的候选预测运动矢量列表中的每一时间候选预测运动矢量,即使这些位置并不严格地邻近当前PU。
在产生用于当前PU的候选预测运动矢量列表之后,视频编码器可产生与候选预测运动矢量列表中的候选预测运动矢量相关联的预测性图像块(204)。视频编码器可通过基于所指示候选预测运动矢量的运动信息确定当前PU的运动信息和接着基于由当前PU的运动信息指示的一个或多个参考块产生预测性图像块来产生与候选预测运动矢量相关联的预测性图像块。视频编码器可接着从候选预测运动矢量列表选择候选预测运动矢量中的一者(206)。视频编码器可以各种方式选择候选预测运动矢量。举例来说,视频编码器可基于对与候选预测运动矢量相关联的预测性图像块的每一者的码率-失真代价分析来选择候选预测运动矢量中的一者。
在选择候选预测运动矢量之后,视频编码器可输出候选预测运动矢量索引(208)。候选预测运动矢量索引可指示在候选预测运动矢量列表中选定候选预测运动矢量的位置。在一些可行的实施方式中,候选预测运动矢量索引可表示为“merge_idx”。
图6为本申请实施例中高级运动矢量预测(AMVP)模式的一种示例性流程图。视频编码器(例如视频编码器20)可执行AMVP操作210。
在视频编码器开始AMVP操作210之后,视频编码器可产生用于当前PU的一个或多 个运动矢量(211)。视频编码器可执行整数运动估计和分数运动估计以产生用于当前PU的运动矢量。如前文所描述,当前图像可与两个参考图像列表(列表0和列表1)相关联。如果当前PU经单向预测,则视频编码器可产生用于当前PU的列表0运动矢量或列表1运动矢量。列表0运动矢量可指示当前PU的图像块与列表0中的参考图像中的参考块之间的空间位移。列表1运动矢量可指示当前PU的图像块与列表1中的参考图像中的参考块之间的空间位移。如果当前PU经双向预测,则视频编码器可产生用于当前PU的列表0运动矢量和列表1运动矢量。
在产生用于当前PU的一个或多个运动矢量之后,视频编码器可产生用于当前PU的预测性图像块(212)。视频编码器可基于由用于当前PU的一个或多个运动矢量指示的一个或多个参考块产生用于当前PU的预测性图像块。
另外,视频编码器可产生用于当前PU的候选预测运动矢量列表(213)。视频解码器可以各种方式产生用于当前PU的候选预测运动矢量列表。举例来说,视频编码器可根据下文关于图8到图12描述的可行的实施方式中的一个或多个产生用于当前PU的候选预测运动矢量列表。在一些可行的实施方式中,当视频编码器在AMVP操作210中产生候选预测运动矢量列表时,候选预测运动矢量列表可限于两个候选预测运动矢量。相比而言,当视频编码器在合并操作中产生候选预测运动矢量列表时,候选预测运动矢量列表可包括更多候选预测运动矢量(例如,五个候选预测运动矢量)。
在产生用于当前PU的候选预测运动矢量列表之后,视频编码器可产生用于候选预测运动矢量列表中的每一候选预测运动矢量的一个或多个运动矢量差(MVD)(214)。视频编码器可通过确定由候选预测运动矢量指示的运动矢量与当前PU的对应运动矢量之间的差来产生用于候选预测运动矢量的运动矢量差。
如果当前PU经单向预测,则视频编码器可产生用于每一候选预测运动矢量的单一MVD。如果当前PU经双向预测,则视频编码器可产生用于每一候选预测运动矢量的两个MVD。第一MVD可指示候选预测运动矢量的运动矢量与当前PU的列表0运动矢量之间的差。第二MVD可指示候选预测运动矢量的运动矢量与当前PU的列表1运动矢量之间的差。
视频编码器可从候选预测运动矢量列表选择候选预测运动矢量中的一个或多个(215)。视频编码器可以各种方式选择一个或多个候选预测运动矢量。举例来说,视频编码器可选择具有最小误差地匹配待编码的运动矢量的相关联运动矢量的候选预测运动矢量,此可减少表示用于候选预测运动矢量的运动矢量差所需的位数目。
在选择一个或多个候选预测运动矢量之后,视频编码器可输出用于当前PU的一个或多个参考图像索引、一个或多个候选预测运动矢量索引,和用于一个或多个选定候选预测运动矢量的一个或多个运动矢量差(216)。
在当前图像与两个参考图像列表(列表0和列表1)相关联且当前PU经单向预测的例子中,视频编码器可输出用于列表0的参考图像索引(“ref_idx_10”)或用于列表1的参考图像索引(“ref_idx_11”)。视频编码器还可输出指示用于当前PU的列表0运动矢量的选定候选预测运动矢量在候选预测运动矢量列表中的位置的候选预测运动矢量索引(“mvp_10_flag”)。或者,视频编码器可输出指示用于当前PU的列表1运动矢量的选定候选预测运动矢量在候选预测运动矢量列表中的位置的候选预测运动矢量索引(“mvp_11_flag”)。视频编码器还可输出用于当前PU的列表0运动矢量或列表1运动矢量的MVD。
在当前图像与两个参考图像列表(列表0和列表1)相关联且当前PU经双向预测的例 子中,视频编码器可输出用于列表0的参考图像索引(“ref_idx_10”)和用于列表1的参考图像索引(“ref_idx_11”)。视频编码器还可输出指示用于当前PU的列表0运动矢量的选定候选预测运动矢量在候选预测运动矢量列表中的位置的候选预测运动矢量索引(“mvp_10_flag”)。另外,视频编码器可输出指示用于当前PU的列表1运动矢量的选定候选预测运动矢量在候选预测运动矢量列表中的位置的候选预测运动矢量索引(“mvp_11_flag”)。视频编码器还可输出用于当前PU的列表0运动矢量的MVD和用于当前PU的列表1运动矢量的MVD。
图7为本申请实施例中由视频解码器(例如视频解码器30)执行的运动补偿的一种示例性流程图。
当视频解码器执行运动补偿操作220时,视频解码器可接收用于当前PU的选定候选预测运动矢量的指示(222)。举例来说,视频解码器可接收指示选定候选预测运动矢量在当前PU的候选预测运动矢量列表内的位置的候选预测运动矢量索引。
如果当前PU的运动信息是使用AMVP模式进行编码且当前PU经双向预测,则视频解码器可接收第一候选预测运动矢量索引和第二候选预测运动矢量索引。第一候选预测运动矢量索引指示用于当前PU的列表0运动矢量的选定候选预测运动矢量在候选预测运动矢量列表中的位置。第二候选预测运动矢量索引指示用于当前PU的列表1运动矢量的选定候选预测运动矢量在候选预测运动矢量列表中的位置。在一些可行的实施方式中,单一语法元素可用以识别两个候选预测运动矢量索引。
另外,视频解码器可产生用于当前PU的候选预测运动矢量列表(224)。视频解码器可以各种方式产生用于当前PU的此候选预测运动矢量列表。举例来说,视频解码器可使用下文参看图8到图12描述的技术来产生用于当前PU的候选预测运动矢量列表。当视频解码器产生用于候选预测运动矢量列表的时间候选预测运动矢量时,视频解码器可显式地或隐式地设定识别包括co-located PU的参考图像的参考图像索引,如前文关于图5所描述。
在产生用于当前PU的候选预测运动矢量列表之后,视频解码器可基于由用于当前PU的候选预测运动矢量列表中的一个或多个选定候选预测运动矢量指示的运动信息确定当前PU的运动信息(225)。举例来说,如果当前PU的运动信息是使用合并模式而编码,则当前PU的运动信息可与由选定候选预测运动矢量指示的运动信息相同。如果当前PU的运动信息是使用AMVP模式而编码,则视频解码器可使用由所述或所述选定候选预测运动矢量指示的一个或多个运动矢量和码流中指示的一个或多个MVD来重建当前PU的一个或多个运动矢量。当前PU的参考图像索引和预测方向标识可与所述一个或多个选定候选预测运动矢量的参考图像索引和预测方向标识相同。在确定当前PU的运动信息之后,视频解码器可基于由当前PU的运动信息指示的一个或多个参考块产生用于当前PU的预测性图像块(226)。
图8为本申请实施例中编码单元(CU)及与其关联的相邻位置图像块的一种示例性示意图,说明CU250和与CU250相关联的示意性的候选预测运动矢量位置252A到252E的示意图。本申请可将候选预测运动矢量位置252A到252E统称为候选预测运动矢量位置252。候选预测运动矢量位置252表示与CU250在同一图像中的空间候选预测运动矢量。候选预测运动矢量位置252A定位于CU250左方。候选预测运动矢量位置252B定位于CU250上方。候选预测运动矢量位置252C定位于CU250右上方。候选预测运动矢量位置252D定位于CU250左下方。候选预测运动矢量位置252E定位于CU250左上方。图8为 用以提供帧间预测模块121和运动补偿模块162可产生候选预测运动矢量列表的方式的示意性实施方式。下文将参考帧间预测模块121解释实施方式,但应理解运动补偿模块162可实施相同技术,且因此产生相同候选预测运动矢量列表。
图9为本申请实施例中构建候选预测运动矢量列表的一种示例性流程图。将参考包括五个候选预测运动矢量的列表描述图9的技术,但本文中所描述的技术还可与具有其它大小的列表一起使用。五个候选预测运动矢量可各自具有索引(例如,0到4)。将参考一般视频解码器描述图9的技术。一般视频解码器示例性的可以为视频编码器(例如视频编码器20)或视频解码器(例如视频解码器30)。
为了根据图9的实施方式重建候选预测运动矢量列表,视频解码器首先考虑四个空间候选预测运动矢量(902)。四个空间候选预测运动矢量可以包括候选预测运动矢量位置252A、252B、252C和252D。四个空间候选预测运动矢量对应于与当前CU(例如,CU250)在同一图像中的四个PU的运动信息。视频解码器可以特定次序考虑列表中的四个空间候选预测运动矢量。举例来说,候选预测运动矢量位置252A可被第一个考虑。如果候选预测运动矢量位置252A可用,则候选预测运动矢量位置252A可指派到索引0。如果候选预测运动矢量位置252A不可用,则视频解码器可不将候选预测运动矢量位置252A包括于候选预测运动矢量列表中。候选预测运动矢量位置可出于各种理由而不可用。举例来说,如果候选预测运动矢量位置不在当前图像内,则候选预测运动矢量位置可能不可用。在另一可行的实施方式中,如果候选预测运动矢量位置经帧内预测,则候选预测运动矢量位置可能不可用。在另一可行的实施方式中,如果候选预测运动矢量位置在与当前CU不同的条带中,则候选预测运动矢量位置可能不可用。
在考虑候选预测运动矢量位置252A之后,视频解码器可接下来考虑候选预测运动矢量位置252B。如果候选预测运动矢量位置252B可用且不同于候选预测运动矢量位置252A,则视频解码器可将候选预测运动矢量位置252B添加到候选预测运动矢量列表。在此特定上下文中,术语“相同”和“不同”指代与候选预测运动矢量位置相关联的运动信息。因此,如果两个候选预测运动矢量位置具有相同运动信息则被视为相同,且如果其具有不同运动信息则被视为不同。如果候选预测运动矢量位置252A不可用,则视频解码器可将候选预测运动矢量位置252B指派到索引0。如果候选预测运动矢量位置252A可用,则视频解码器可将候选预测运动矢量位置252指派到索引1。如果候选预测运动矢量位置252B不可用或与候选预测运动矢量位置252A相同,则视频解码器跳过候选预测运动矢量位置252B且不将其包括于候选预测运动矢量列表中。
候选预测运动矢量位置252C由视频解码器类似地考虑以供包括于列表中。如果候选预测运动矢量位置252C可用且不与候选预测运动矢量位置252B和252A相同,则视频解码器将候选预测运动矢量位置252C指派到下一可用索引。如果候选预测运动矢量位置252C不可用或并非不同于候选预测运动矢量位置252A和252B中的至少一者,则视频解码器不将候选预测运动矢量位置252C包括于候选预测运动矢量列表中。接下来,视频解码器考虑候选预测运动矢量位置252D。如果候选预测运动矢量位置252D可用且不与候选预测运动矢量位置252A、252B和252C相同,则视频解码器将候选预测运动矢量位置252D指派到下一可用索引。如果候选预测运动矢量位置252D不可用或并非不同于候选预测运动矢量位置252A、252B和252C中的至少一者,则视频解码器不将候选预测运动矢量位置252D包括于候选预测运动矢量列表中。以上实施方式大体上描述示例性地考虑候选预测运动矢量252A到252D以供包括于候选预测运动矢量列表中,但在一些实施方式中,可首先将所有候选预测运动矢量252A到252D添加到候选预测运动矢量列表,稍后从候选预测运动矢量列表移除重复。
在视频解码器考虑前四个空间候选预测运动矢量之后,候选预测运动矢量列表可能包括四个空间候选预测运动矢量或者该列表可能包括少于四个空间候选预测运动矢量。如果列表包括四个空间候选预测运动矢量(904,是),则视频解码器考虑时间候选预测运动矢量(906)。时间候选预测运动矢量可对应于不同于当前图像的图像的co-located PU的运动信息。如果时间候选预测运动矢量可用且不同于前四个空间候选预测运动矢量,则视频解码器将时间候选预测运动矢量指派到索引4。如果时间候选预测运动矢量不可用或与前四个空间候选预测运动矢量中的一者相同,则视频解码器不将所述时间候选预测运动矢量包括于候选预测运动矢量列表中。因此,在视频解码器考虑时间候选预测运动矢量(906)之后,候选预测运动矢量列表可能包括五个候选预测运动矢量(框902处考虑的前四个空间候选预测运动矢量和框904处考虑的时间候选预测运动矢量)或可能包括四个候选预测运动矢量(框902处考虑的前四个空间候选预测运动矢量)。如果候选预测运动矢量列表包括五个候选预测运动矢量(908,是),则视频解码器完成构建列表。
如果候选预测运动矢量列表包括四个候选预测运动矢量(908,否),则视频解码器可考虑第五空间候选预测运动矢量(910)。第五空间候选预测运动矢量可(例如)对应于候选预测运动矢量位置252E。如果位置252E处的候选预测运动矢量可用且不同于位置252A、252B、252C和252D处的候选预测运动矢量,则视频解码器可将第五空间候选预测运动矢量添加到候选预测运动矢量列表,第五空间候选预测运动矢量经指派到索引4。如果位置252E处的候选预测运动矢量不可用或并非不同于候选预测运动矢量位置252A、252B、252C和252D处的候选预测运动矢量,则视频解码器可不将位置252处的候选预测运动矢量包括于候选预测运动矢量列表中。因此在考虑第五空间候选预测运动矢量(910)之后,列表可能包括五个候选预测运动矢量(框902处考虑的前四个空间候选预测运动矢量和框910处考虑的第五空间候选预测运动矢量)或可能包括四个候选预测运动矢量(框902处考虑的前四个空间候选预测运动矢量)。
如果候选预测运动矢量列表包括五个候选预测运动矢量(912,是),则视频解码器完成产生候选预测运动矢量列表。如果候选预测运动矢量列表包括四个候选预测运动矢量(912,否),则视频解码器添加人工产生的候选预测运动矢量(914)直到列表包括五个候选预测运动矢量(916,是)为止。
如果在视频解码器考虑前四个空间候选预测运动矢量之后,列表包括少于四个空间候选预测运动矢量(904,否),则视频解码器可考虑第五空间候选预测运动矢量(918)。第五空间候选预测运动矢量可(例如)对应于候选预测运动矢量位置252E。如果位置252E处的候选预测运动矢量可用且不同于已包括于候选预测运动矢量列表中的候选预测运动矢量,则视频解码器可将第五空间候选预测运动矢量添加到候选预测运动矢量列表,第五空间候选预测运动矢量经指派到下一可用索引。如果位置252E处的候选预测运动矢量不可用或并非不同于已包括于候选预测运动矢量列表中的候选预测运动矢量中的一者,则视频解码器可不将位置252E处的候选预测运动矢量包括于候选预测运动矢量列表中。视频解码器可接着考虑时间候选预测运动矢量(920)。如果时间候选预测运动矢量可用且不同于已包括于候选预测运动矢量列表中的候选预测运动矢量,则视频解码器可将所述时间候选预测运动矢量添加到候选预测运动矢量列表,所述时间候选预测运动矢量经指派到下一可用索引。如果时间候选预测运动矢量不可用或并非不同于已包括于候选预测 运动矢量列表中的候选预测运动矢量中的一者,则视频解码器可不将所述时间候选预测运动矢量包括于候选预测运动矢量列表中。
如果在考虑第五空间候选预测运动矢量(框918)和时间候选预测运动矢量(框920)之后,候选预测运动矢量列表包括五个候选预测运动矢量(922,是),则视频解码器完成产生候选预测运动矢量列表。如果候选预测运动矢量列表包括少于五个候选预测运动矢量(922,否),则视频解码器添加人工产生的候选预测运动矢量(914)直到列表包括五个候选预测运动矢量(916,是)为止。
根据本申请的技术,可在空间候选预测运动矢量和时间候选预测运动矢量之后人工产生额外合并候选预测运动矢量以使合并候选预测运动矢量列表的大小固定为合并候选预测运动矢量的指定数目(例如前文图9的可行的实施方式中的五个)。额外合并候选预测运动矢量可包括示例性的经组合双向预测性合并候选预测运动矢量(候选预测运动矢量1)、经缩放双向预测性合并候选预测运动矢量(候选预测运动矢量2),和零向量Merge/AMVP候选预测运动矢量(候选预测运动矢量3)。
图10为本申请实施例中将经过组合的候选运动矢量添加到合并模式候选预测运动矢量列表的一种示例性示意图。经组合双向预测性合并候选预测运动矢量可通过组合原始合并候选预测运动矢量而产生。具体来说,原始候选预测运动矢量中的两个候选预测运动矢量(其具有mvL0和refIdxL0或mvL1和refIdxL1)可用以产生双向预测性合并候选预测运动矢量。在图10中,两个候选预测运动矢量包括于原始合并候选预测运动矢量列表中。一候选预测运动矢量的预测类型为列表0单向预测,且另一候选预测运动矢量的预测类型为列表1单向预测。在此可行的实施方式中,mvL0_A和ref0是从列表0拾取,且mvL1_B和ref0是从列表1拾取,且接着可产生双向预测性合并候选预测运动矢量(其具有列表0中的mvL0_A和ref0以及列表1中的mvL1_B和ref0)并检查其是否不同于已包括于候选预测运动矢量列表中的候选预测运动矢量。如果其不同,则视频解码器可将双向预测性合并候选预测运动矢量包括于候选预测运动矢量列表中。
图11为本申请实施例中将经过缩放的候选运动矢量添加到合并模式候选预测运动矢量列表的一种示例性示意图。经缩放双向预测性合并候选预测运动矢量可通过缩放原始合并候选预测运动矢量而产生。具体来说,来自原始候选预测运动矢量的一候选预测运动矢量(其可具有mvLX和refIdxLX)可用以产生双向预测性合并候选预测运动矢量。在图11的可行的实施方式中,两个候选预测运动矢量包括于原始合并候选预测运动矢量列表中。一候选预测运动矢量的预测类型为列表0单向预测,且另一候选预测运动矢量的预测类型为列表1单向预测。在此可行的实施方式中,mvL0_A和ref0可从列表0拾取,且ref0可复制到列表1中的参考索引ref0′。接着,可通过缩放具有ref0和ref0′的mvL0_A而计算mvL0′_A。缩放可取决于POC(Picture Order Count)距离。接着,可产生双向预测性合并候选预测运动矢量(其具有列表0中的mvL0_A和ref0以及列表1中的mvL0′_A和ref0′)并检查其是否为重复的。如果其并非重复的,则可将其添加到合并候选预测运动矢量列表。
图12为本申请实施例中将零运动矢量添加到合并模式候选预测运动矢量列表的一种示例性示意图。零向量合并候选预测运动矢量可通过组合零向量与可经参考的参考索引而产生。如果零向量候选预测运动矢量并非重复的,则可将其添加到合并候选预测运动矢量列表。对于每一产生的合并候选预测运动矢量,运动信息可与列表中的前一候选预测运动矢量的运动信息比较。
在一种可行的实施方式中,如果新产生的候选预测运动矢量不同于已包括于候选预测运动矢量列表中的候选预测运动矢量,则将所产生的候选预测运动矢量添加到合并候选预测运动矢量列表。确定候选预测运动矢量是否不同于已包括于候选预测运动矢量列表中的候选预测运动矢量的过程有时称作修剪(pruning)。通过修剪,每一新产生的候选预测运动矢量可与列表中的现有候选预测运动矢量比较。在一些可行的实施方式中,修剪操作可包括比较一个或多个新候选预测运动矢量与已在候选预测运动矢量列表中的候选预测运动矢量和不添加为已在候选预测运动矢量列表中的候选预测运动矢量的重复的新候选预测运动矢量。在另一些可行的实施方式中,修剪操作可包括将一个或多个新候选预测运动矢量添加到候选预测运动矢量列表且稍后从所述列表移除重复候选预测运动矢量。
下面介绍几种帧间预测的实施方式,本申请中的第一预设算法以及第二预设算法可以包括其中的一种或多种。
图片间预测利用图片之间的时间相关性来得到针对图像样本块的运动补偿预测(motion-compensated prediction,MCP)。
对于这种基于块的MCP,将视频图片划分成矩形块。假设一个块内均匀运动并且移动对象大于一个块,则对于每个块,可以找到先前解码图片中的对应块作为预测值。使用平移运动模型,由运动矢量(Δx,Δy)表示块在先前解码图片中的位置,其中Δx指定相对于当前块位置的水平位移,Δy指定相对于当前块位置的竖直位移。运动矢量(Δx、Δy)可具有分数样本精度以更精确地捕捉底层对象的移动。当对应的运动矢量具有分数样本精度时,对参考图片应用内插以得到预测信号。先前解码图片被称为参考图片并由对应于参考图片列表的参考索引Δt指示。这些平移运动模型参数,即运动矢量和参考索引,被进一步称为运动数据。现代视频编码标准允许两种图片间预测,即单向预测和双向预测。
在双向预测的情况下,使用两组运动数据(Δx0,Δy0,Δt0和Δx1,Δy1,Δt1)来生成两个MCP(可能来自不同图片),然后将其组合以获得最终的MCP。默认情况下,这通过求平均来完成,但是在加权预测的情况下,可以对每个MCP应用不同的权重,例如,以便补偿场景淡出。可以在双向预测中使用的参考图片存储在两个单独的列表中,即列表0和列表1。为了限制允许双向预测的片中的内存带宽,HEVC标准限制具有4×8和8×4个亮度预测块的PU仅使用单向预测。使用运动估计过程在编码器处得到运动数据。视频标准中并未指定运动估计,因此不同的编码器可以在其实施中使用不同的复杂度与质量的折衷。
一个块的运动数据与相邻块相关。为了利用这种相关性,运动数据并不直接在码流中进行编码,而是基于相邻运动数据进行预测编码。在HEVC中,为此使用了两个概念。在HEVC中通过引入称为高级运动矢量预测(advanced motion vector prediction,AMVP)的新工具而改进了运动矢量的预测编码,其中将每个运动块的最佳预测值用信号表示给解码器。另外,称为帧间预测块合并的新技术从相邻块得到块的所有运动数据,从而替代H.264/AVC中的直通和跳过模式。
高级运动矢量预测
如在先前的视频编码标准中一样,HEVC运动矢量根据水平(x)和竖直(y)分量被编码为与所谓的运动矢量预测值(motion vector predictor,MVP)的差。两种运动矢量差(MVD)分量的计算如方程式(1.1)和(1.2)所示。
MVD_X=Δx-MVP_X      (1.1)
MVD_Y=Δy-MVP_Y      (1.2)
当前块的运动矢量通常与当前图片中或者较早的编码图片中的相邻块的运动矢量相关。这是因为相邻块可能对应于具有相似运动的相同移动对象,并且对象的运动不可能随时间突然改变。因此,使用相邻块中的运动矢量作为预测值减小了用信号表示的运动矢量差的大小。MVP通常从同位图片中来自空间相邻块或来自时间相邻块的已经解码的运动矢量得到。在一些情况下,还可以将零运动矢量用作MVP。在H.264/AVC中,这通过执行三个空间相邻运动矢量的分量形式中值来完成。使用这种方法,不需要用信号表示预测值。来自同位图片的时间MVP仅在H.264/AVC的所谓的时间直通模式中被考虑。H.264/AVC直通模式还用于得到除运动矢量之外的其它运动数据。
在HEVC中,隐式地得到MVP的方法由称为运动矢量竞争的技术所替代,所述技术显式地用信号表示MVP列表中的哪个MVP用于运动矢量得到。HEVC中的可变编码四叉树块结构可导致一个块使若干具有运动矢量的相邻块作为潜在MVP候选者。高级运动矢量预测(Advanced Motion Vector Prediction,AMVP)的初始设计包含来自三个不同类别预测值的五个MVP:来自空间相邻者的三个运动矢量、三个空间预测值的中值以及来自同位时间相邻块的缩放运动矢量。此外,通过重新排序以将最可能的运动预测值放在第一位置并通过去除冗余候选者以确保最小的信令开销来修改预测值列表。接着,开发AMVP设计的重要简化,例如去除中值预测值、将列表中的候选者数量从五个减少到两个、固定列表中的候选者顺序,以及减少冗余检查的数量。AMVP候选者列表构建的最终设计包含以下两个MVP候选者:a.从五个空间相邻块得到的最多两个空间候选MVP;b.当两个空间候选MVP不可用或它们相同时,从两个时间同位块得到的一个时间候选MVP;c.当空间候选者、时间候选者或这两者都不可用时的零运动矢量。
正如已经提到的,从五个空间相邻块得到两个空间MVP候选者A和B。对于AMVP和帧间预测块合并,空间候选块的位置是相同的。对于候选者A,在两遍式方法中考虑来自左下角的两个块A0和A1的运动数据。在第一遍中,检查是否有任何候选块含有的参考索引等于当前块的参考索引。找到的第一运动矢量将作为候选者A。当来自A0和A1的所有参考索引指向与当前块的参考索引不同的参考图片时,相关的运动矢量不能按原样使用。因此,在第二遍中,需要根据候选参考图片与当前参考图片之间的时间距离来缩放运动矢量。方程式(1.3)示出了如何根据缩放因子缩放候选运动矢量mvcand。基于当前图片与候选块td的参考图片之间的时间距离以及当前图片与当前块tb的参考图片之间的时间距离来计算ScaleFactor。时间距离以定义图片显示顺序的图片顺序编号(picture order count,POC)值之间的差表示。缩放操作基本上与H.264/AVC中用于时间直通模式的方案相同。这种分解允许在片层级预先计算ScaleFactor,因为它只取决于片头中用信号表示的参考图片列表结构。应注意,仅在当前参考图片和候选参考图片都是短期参考图片时才执行MV缩放。参数td被定义为同位候选块的同位图片与参考图片之间的POC差。
mv=sign(mv_cand·ScaleFactor)·((|mv_cand·ScaleFactor|+2^7)>>8)     (1.3)
ScaleFactor=clip(-2^12,2^12-1,(tb·tx+2^5)>>6)    (1.4)
tx=(2^14+(|td|>>1))/td    (1.5)
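下面给出按公式(1.3)至(1.5)进行运动矢量缩放的一段Python示意代码,仅为示意性实现;其中clip3、scale_mv等函数名以及对负数td整除方式的近似处理均为本文说明所作的假设,并非参考软件的实际接口:

def clip3(lo, hi, x):
    # 将x钳位到[lo, hi]区间
    return max(lo, min(hi, x))

def scale_mv(mv_cand, tb, td):
    # tb:当前图片与当前块参考图片之间的POC差;td:同位图片与同位候选块参考图片之间的POC差
    tx = (2 ** 14 + (abs(td) >> 1)) // td                                  # 对应公式(1.5),假设td不为0;负数时整除语义与向零截断略有差异
    scale_factor = clip3(-2 ** 12, 2 ** 12 - 1, (tb * tx + 2 ** 5) >> 6)   # 对应公式(1.4)
    prod = scale_factor * mv_cand
    sign = 1 if prod >= 0 else -1
    return sign * ((abs(prod) + 2 ** 7) >> 8)                              # 对应公式(1.3)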
对于候选者B,以与在第一遍中检查A0和A1相同的方式依次检查候选者B0到B2。然而,第二遍仅在块A0和A1不含有任何运动信息时执行,即,不可用或使用图片内预测来编码时执行。接着,如果找到候选者A,则将候选者A设置成等于未缩放的候选者B,并且将候选者B设置成等于候选者B的第二未缩放的或缩放的变体。第二遍搜索从候选者B0到B2得到的未缩放的以及缩放的MV。总体而言,这种设计允许独立于B0、B1和B2来处理A0和A1。B的得到应只了解A0和A1两者的可用性,以便搜索从B0到B2得到的缩放的或另外未缩放的MV。考虑到它明显减少了候选者B的复杂运动矢量缩放操作,这种相依性是可接受的。减少运动矢量缩放的数量表示运动矢量预测值得到过程中显着的复杂度降低。
在HEVC中,已确定当前块的右下方和中心处的块为最适合提供良好的时间运动矢量预测值(temporal motion vector predictor,TMVP)。这些候选者中C0表示右下相邻者,C1表示中心块。这里同样首先考虑C0的运动数据,并且如果不可用,则使用来自中心处的同位候选块的运动数据来得到时间MVP候选者C。当相关的PU属于当前CTU行之外的CTU时,C0的运动数据也被视为不可用。这最大限度地减少了存储同位运动数据的内存带宽要求。与运动矢量可能指代相同参考图片的空间MVP候选者对比,运动矢量缩放对于TMVP是强制性的。因此,使用与空间MVP相同的缩放操作。
虽然H.264/AVC中的时间直通模式总是参考第二参考图片列表、即列表1中的第一参考图片,并且仅在双向预测片中被允许,但HEVC提供了为每个图片指示哪个参考图片被视为同位图片的可能性。这是通过在片头中用信号表示同位参考图片列表和参考图片索引以及要求图片中的所有片中的这些语法元素应指定相同的参考图片来完成的。
由于时间MVP候选者引入了另外的相依性,因此出于差错鲁棒性原因,可能需要禁用其使用。在H.264/AVC中,可能禁用片头中的双向预测片的时间直通模式(direct_spatial_mv_pred_flag)。HEVC语法通过允许在序列层级或在图片层级禁用TMVP(sps/slice_temporal_mvp_enabled_flag)来扩展此信令。尽管在片头中用信号表示标志,但是对于一个图片中的所有片,其值应该是相同的,这是码流一致性的要求。由于图片层级标志的信令取决于SPS标志,因此在PPS中用信号表示图片层级标志将引入SPS与PPS之间的解析相依性。这种片头信令的另一个优点是,如果只想改变PPS中此标志的值而不改变其它参数,则不需要发送第二PPS。
通常,HEVC中的运动数据信令与H.264/AVC中的类似。图片间预测语法元素inter_pred_idc用信号表示是否使用参考列表0、1或这两者。对于从一个参考图片列表获得的每个MCP,对应的参考图片(Δt)由参考图片列表的索引ref_idx_l0/1用信号表示,并且MV(Δx,Δy)由MVP的索引mvp_l0/1_flag及其MVD表示。片头中新引入的标志mvd_l1_zero_flag指示第二参考图片列表的MVD是否等于零且因此不在码流中用信号表示。当运动矢量被完全重构时,最终剪切操作确保最终运动矢量的每个分量的值将始终在-2^15到2^15-1的范围内,包含端点值。
图片间预测块(block)合并
AMVP列表仅含有一个参考列表的运动矢量,而合并候选者含有所有运动数据,包括 是使用一个还是两个参考图片列表以及每个列表的参考索引和运动矢量的信息。总的来说,合并候选者列表基于以下候选者构建:a.从五个空间相邻块得到的最多四个空间合并候选者;b.从两个时间同位块得到的一个时间合并候选者;c.包含组合的双向预测候选者和零运动矢量候选者的另外的合并候选者。
合并候选者列表中的第一候选者是空间相邻者。通过依次循序检查A1、B1、B0、A0和B2,最多可以在合并列表中以所述顺序插入四个候选者。
替代仅仅检查相邻块是否可用并含有运动信息,在将相邻块的所有运动数据作为合并候选者之前执行一些另外的冗余检查。这些冗余检查可以分为两类,用于两个不同的目的:a.避免列表中存在具有冗余运动数据的候选者;b.防止合并两个可以用其它方式表示的会产生冗余语法的分区。
当N是空间合并候选者的数量时,完整的冗余检查将由
N·(N-1)/2
次运动数据比较组成。在五个潜在空间合并候选者的情况下,将需要十次运动数据比较来确保合并列表中的所有候选者具有不同的运动数据。在开发HEVC期间,对冗余运动数据的检查已经减少到一个子集,从而在比较逻辑明显减少的同时保持编码效率。在最终设计中,对于每个候选者执行不超过两次比较,从而产生总共五次比较。给定{A1,B1,B0,A0,B2}的顺序,B0只检查B1,A0只检查A1,且B2只检查A1和B1。在分区冗余检查的实施例中,通过选择候选者B1来将2N×N分区的底部PU与顶部PU合并。这将产生一个CU具有两个具有相同运动数据的PU,其可以被均等地用信号表示为2N×2N CU。总体而言,这种检查适用于矩形和不对称分区2N×N、2N×nU、2N×nD、N×2N、nR×2N和nL×2N的所有第二PU。应注意,对于空间合并候选者,仅执行冗余检查,并且按原样从候选块复制运动数据。因此,这里不需要运动矢量缩放。
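下面用一段Python示意代码说明上述空间合并候选者的有限冗余检查:每个候选者最多只与给定的比较对象比较,总比较次数不超过五次。其中cand字典的组织方式、运动数据的相等判断均为示意性假设:

SPATIAL_ORDER = ['A1', 'B1', 'B0', 'A0', 'B2']
CHECK_AGAINST = {'A1': [], 'B1': ['A1'], 'B0': ['B1'], 'A0': ['A1'], 'B2': ['A1', 'B1']}

def build_spatial_merge_list(cand):
    # cand[name]为对应相邻块的运动数据(运动矢量、参考索引等),不可用时为None
    merge_list = []
    for name in SPATIAL_ORDER:
        if len(merge_list) == 4:                     # 空间合并候选者最多插入四个
            break
        m = cand.get(name)
        if m is None:
            continue
        if any(cand.get(o) == m for o in CHECK_AGAINST[name] if cand.get(o) is not None):
            continue                                 # 与指定比较对象的运动数据相同,跳过该候选者
        merge_list.append(m)
    return merge_list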
时间合并候选者的运动矢量的得到与TMVP的相同。由于合并候选者包括所有运动数据并且TMVP仅是一个运动矢量,因此整个运动数据的得到仅取决于片的类型。对于双向预测片,针对每个参考图片列表得到TMVP。取决于每个列表的TMVP的可用性,将预测类型设置成双向预测或设置成TMVP可用的列表。所有相关的参考图片索引被设置成等于零。因此,对于单向预测片,只连同等于零的参考图片索引一起得到列表0的TMVP。
当至少一个TMVP可用并且时间合并候选者被添加到列表中时,不执行冗余检查。这使得合并列表构建独立于同位图片,从而提高抗误码能力。考虑时间合并候选者本应是冗余的、因而不应包含在合并候选者列表中的情况:在丢失同位图片的情况下,解码器不能得到时间候选者,因此不检查它是否是冗余的,所有后续候选者的索引将受此影响。
出于解析鲁棒性原因,合并候选者列表的长度是固定的。在已经添加了空间和时间合并候选者之后,可能出现所述列表还没有固定长度的情况。为了补偿与非长度自适应列表索引信令一起出现的编码效率损失,生成另外的候选者。取决于片的类型,最多可以使用两种候选者来完全填充列表:a.组合双向预测候选者;b.零运动矢量候选者。
在双向预测片中,通过组合一个候选者的参考图片列表0运动数据与另一候选者的列表1运动数据,可基于现有候选者生成另外的候选者。这通过从一个候选者(例如第一候选者)复制Δx0、Δy0、Δt0,并且从另一候选者(例如第二候选者)复制Δx1、Δy1、Δt1来完成。预定义不同的组合并在表1.1中给出。
表1.1
(表1.1以图示给出组合双向预测候选者所使用的列表0与列表1候选者索引组合的预定义顺序。)
当添加组合的双向预测候选者之后或对于单向预测片来说列表仍然不完整时,计算零运动矢量候选者以使列表完整。所有零运动矢量候选者对于单向预测片具有一个零位移运动矢量,对于双向预测片具有两个零位移运动矢量。参考索引被设置成等于零,并且对于每个另外的候选者递增一,直到达到参考索引的最大数量。如果是这种情况,并且还有其它候选者缺失,则使用等于零的参考索引来创建这些候选者。对于所有另外的候选者,不执行冗余检查,因为结果显示省略这些检查不会引起编码效率损失。
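下面给出按上述规则用零运动矢量候选者补齐合并候选者列表的Python示意代码;候选者的字典表示、参数名均为示意性假设:

def fill_with_zero_mv(merge_list, max_cands, num_ref_idx, is_bipred_slice):
    # 第i个零运动矢量候选者的参考索引为i,达到参考索引的最大数量后其余候选者使用等于零的参考索引
    i = 0
    while len(merge_list) < max_cands:
        ref_idx = i if i < num_ref_idx else 0
        if is_bipred_slice:
            cand = {'mv_l0': (0, 0), 'ref_l0': ref_idx, 'mv_l1': (0, 0), 'ref_l1': ref_idx}
        else:
            cand = {'mv_l0': (0, 0), 'ref_l0': ref_idx}
        merge_list.append(cand)                      # 对零运动矢量候选者不执行冗余检查
        i += 1
    return merge_list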
对于以图片间预测模式编码的每个PU,所谓的merge_flag指示使用所述块合并来得到运动数据。merge_idx进一步确定合并列表中提供MCP所需的所有运动数据的候选者。除了此PU层级的信令之外,还在片头中用信号表示合并列表中的候选者的数量。由于默认值为五,因此它表示为与五的差(five_minus_max_num_merge_cand)。这样,五利用0的短码字用信号表示,而仅使用一个候选者则利用4的较长码字用信号表示。至于对合并候选者列表构建过程的影响,整个过程保持不变,但是在列表含有最大数量合并候选者之后,所述过程终止。在初始设计中,合并索引编码的最大值由列表中可用空间和时间候选者的数量给出。当例如只有两个候选者可用时,索引可以高效地编码为一个标志。但是,为了解析合并索引,必须构建整个合并候选者列表以了解候选者的实际数量。假设由于发送错误而导致不可用的相邻块,将不可能再解析合并索引。
HEVC中的块合并概念的关键应用是与跳过模式的组合。在先前的视频编码标准中,使用跳过模式指示这样的块:推测而不是显式地用信号表示运动数据,并且预测残差为零,即,不发送变换系数。在HEVC中,在图片间预测片中的每个CU的开始处,用信号表示skip_flag,这意味着以下方面:a.CU仅含有一个PU(2N×2N分区类型);b.使用合并模式来得到运动数据(merge_flag等于1);c.码流中不存在残差数据。
在HEVC中引入指示区域的并行合并估计层级,其中可以通过检查候选块是否位于所述合并估计区域(MER)中而独立地得到合并候选者列表。相同MER中的候选块不包含在合并候选者列表中。因此,它的运动数据不需要在列表构建时可用。当这个层级是例如32时,那么32×32区域中的所有预测单元可以并行构建合并候选者列表,因为处于相同32×32MER中的所有合并候选者都不插入列表中。第一PU0的所有潜在合并候选者都可 用,因为它们在第一32×32MER之外。对于第二MER,当MER内的合并估计应该独立时,PU 2-6的合并候选者列表不能包含来自这些PU的运动数据。因此,例如在查看PU5时,没有合并候选者可用且因此不插入合并候选者列表中。在这种情况下,PU5的合并列表仅由时间候选者(如果可用)和零MV候选者组成。为了使编码器能够权衡并行性和编码效率,并行合并估计层级是自适应的,并且在图片参数集中用信号表示为log2_parallel_merge_level_minus2。
基于子CU的运动矢量预测
在开发新的视频编码技术期间,使用QTBT,每个CU对于每个预测方向可以具有最多一组运动参数。在编码器中通过将大CU分成子CU并且得到大CU的所有子CU的运动信息来考虑两个子CU层级运动矢量预测方法。替代时间运动矢量预测(alternative temporal motion vector prediction,ATMVP)方法允许每个CU从同位参考图片中小于当前CU的多个块提取多组运动信息。在时空运动矢量预测(spatial-temporal motion vector prediction,STMVP)方法中,通过使用时间运动矢量预测值和空间相邻运动矢量来递归地得到子CU的运动矢量。
为了保留用于子CU运动预测的更精确的运动场,当前禁用参考帧的运动压缩。
替代时间运动矢量预测
在替代时间运动矢量预测(alternative temporal motion vector prediction,ATMVP)方法中,通过从小于当前CU的块中提取多组运动信息(包含运动矢量和参考索引)来修改运动矢量时间运动矢量预测(temporal motion vector prediction,TMVP)。子CU是正方形的N×N块(N默认设置为4)。
ATMVP分两步预测CU内的子CU的运动矢量。第一步是使用所谓的时间矢量来标识参考图片中的对应块。参考图片被称为运动源图片。第二步是将当前CU分成子CU,并从每个子CU对应的块中获得每个子CU的运动矢量以及参考索引。
在第一步中,通过当前CU的空间相邻块的运动信息来确定参考图片和对应块。为了避免相邻块的重复扫描过程,使用当前CU的合并候选者列表中的第一合并候选者。将第一可用运动矢量及其相关的参考索引设置成时间矢量和运动源图片的索引。这样,在ATMVP中,与TMVP相比,可以更精确地标识对应块,其中对应块(有时称为同位块)始终位于相对于当前CU的右下或中心位置。
在第二步中,通过将时间矢量添加到当前CU的坐标,通过运动源图片中的时间矢量来标识子CU的对应块。对于每个子CU,使用其对应块(覆盖中心样本的最小运动网格)的运动信息来得到子CU的运动信息。在标识对应的N×N块的运动信息之后,以与HEVC的TMVP相同的方式将其转换为当前子CU的运动矢量和参考索引,其中适用运动缩放和其它程序。例如,解码器检查是否满足低延迟条件(即,当前图片的所有参考图片的POC小于当前图片的POC)并且可能使用运动矢量MVx(与参考图片列表X相对应的运动矢量)来预测每个子CU的运动矢量MVy(其中X等于0或1且Y等于1-X)。
时空运动矢量预测
在这种方法中,按照光栅扫描顺序递归地得到子CU的运动矢量。考虑含有四个4×4子CU A、B、C和D的8×8CU。将当前帧中的相邻4×4块标记为a、b、c和d。
子CU A的运动得到起始于标识其两个空间相邻者。第一相邻者是子CU A上方的N×N块(块c)。如果这一块c不可用或被帧内编码,则检查子CU A上方的其它N×N块(从左到右,从块c开始)。第二相邻者是子CU A左侧的块(块b)。如果块b不可用或被 帧内编码,则检查子CU A左侧的其它块(从上到下,从块b开始)。对于给定列表,将从每个列表的相邻块获得的运动信息缩放到第一参考帧。接下来,通过遵循与HEVC中指定的TMVP得到相同的过程来得到子块A的时间运动矢量预测值(temporal motion vector predictor,TMVP)。提取位置D处的同位块的运动信息并相应地缩放。最后,在检索并缩放运动信息之后,分别对每个参考列表的所有可用的运动矢量(最多3个)求平均。分配平均运动矢量作为当前子CU的运动矢量。
结合合并模式
作为另外的合并候选者而启用子CU模式,并且不需要另外的语法元素来用信号表示模式。将两个另外的合并候选者添加至每个CU的合并候选者列表以表示ATMVP模式和STMVP模式。如果序列参数集指示ATMVP和STMVP已启用,则最多使用七个合并候选者。另外的合并候选者的编码逻辑与HM中的合并候选者的编码逻辑相同,这意味着对于P或B片中的每个CU,对两个另外的合并候选者需要多两次RD检查。
仿射运动补偿预测
通过两个控制点运动矢量描述块的仿射运动场。
通过以下方程式描述块的运动矢量场(motion vector field,MVF):
vx=((v1x-v0x)/w)·x-((v1y-v0y)/w)·y+v0x
vy=((v1y-v0y)/w)·x+((v1x-v0x)/w)·y+v0y     (1.6)
其中(v0x,v0y)是左上角控制点的运动矢量,(v1x,v1y)是右上角控制点的运动矢量。
为了进一步简化运动补偿预测,应用基于子块的仿射变换预测。如方程式(1.7)中得到子块大小M×N,其中MvPre是运动矢量分数精度(例如1/16),(v2x,v2y)是根据方程式(1.6)计算的左下控制点的运动矢量。
M=clip3(4,w,(w·MvPre)/max(|v1x-v0x|,|v1y-v0y|))
N=clip3(4,h,(h·MvPre)/max(|v2x-v0x|,|v2y-v0y|))     (1.7)
在通过方程式(1.7)得到之后,应在必要时向下调整M和N,使其分别为w和h的除数。
为了得到每个M×N子块的运动矢量,根据方程式(1.6)计算每个子块的中心样本的运动矢量,并舍入到1/16的分数精度。
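下面给出按公式(1.6)为每个M×N子块中心样本计算运动矢量的Python示意代码;此处采用浮点运算仅为便于说明,实际实现通常为定点运算,函数名与返回值的组织方式均为假设:

def affine_subblock_mvs(v0, v1, w, h, M, N):
    # v0=(v0x,v0y)为左上角控制点运动矢量,v1=(v1x,v1y)为右上角控制点运动矢量,w、h为块的宽度和高度
    mvs = {}
    for by in range(0, h, N):
        for bx in range(0, w, M):
            cx, cy = bx + M / 2.0, by + N / 2.0      # 子块中心样本位置
            vx = (v1[0] - v0[0]) / w * cx - (v1[1] - v0[1]) / w * cy + v0[0]
            vy = (v1[1] - v0[1]) / w * cx + (v1[0] - v0[0]) / w * cy + v0[1]
            mvs[(bx, by)] = (round(vx * 16) / 16.0, round(vy * 16) / 16.0)   # 舍入到1/16分数精度
    return mvs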
仿射帧间模式
对于宽度和高度都大于8的CU,可以应用AF_INTER模式。在码流中用信号表示CU层级中的仿射标志以指示是否使用AF_INTER模式。在此模式中,使用相邻块构建具有运动矢量对{(v0,v1)|v0={vA,vB,vC},v1={vD,vE}}的候选者列表。从块A、B或C的运动矢量中选择v0。根据参考列表以及相邻块参考的POC、当前CU参考的POC和当前CU的POC之间的关系来缩放来自相邻块的运动矢量。从相邻块D和E中选择v1的方法类似。如果候选者列表的数量小于2,则通过复制每个AMVP候选者组成的运动矢量对来填充列表。当候选者列表大于2时,首先根据相邻运动矢量的一致性(对候选者中的两个运动矢量的相似性)对候选者进行排序,并且仅保留前两个候选者。使用RD成本检查来确定选择哪个运动矢量对候选者作为当前CU的控制点运动矢量预测(control point motion vector prediction,CPMVP)。并且在码流中用信号表示指示候选者列表中的CPMVP的位置的索引。在码流中用信号表示CPMV与CPMVP的差。
仿射合并模式
当在AF_MERGE模式下应用CU时,它从有效的相邻重建块获得以仿射模式编码的第一块。候选块的选择顺序是从左、上、右上、左下到左上。如果相邻左下块A以仿射模式编码,则得到含有块A的CU的左上角、右上角和左下角的运动矢量v2、v3和v4。并且根据v2、v3和v4计算当前CU上的左上角的运动矢量v0。其次,计算当前CU的右上方的运动矢量v1。
为了标识当前CU是否用AF_MERGE模式编码,当存在至少一个相邻块以仿射模式编码时,在码流中用信号表示仿射标志。
模式匹配运动矢量得到
模式匹配运动矢量得到(pattern matched motion vector derivation,PMMVD)模式是基于帧率上变换(Frame-Rate Up Conversion,FRUC)技术。在这种模式下,不用信号表示块的运动信息,而是在解码器端得到。
当CU的合并标志为真时,用信号表示其FRUC标志。当FRUC标志为假时,用信号表示合并索引并且使用常规合并模式。当FRUC标志为真时,用信号表示另外的FRUC模式标志以指示将使用哪种方法(双边匹配或模板匹配)来得到块的运动信息。
在编码器端,关于对CU是否使用FRUC合并模式的决定是基于对正常合并候选者所做的RD成本选择。通过使用RD成本选择针对CU检查这两种匹配模式(双边匹配和模板匹配)。将成本最低的一个进一步与其它CU模式进行比较。如果FRUC匹配模式是最有效的模式,则CU的FRUC标志被设置为真,并使用相关的匹配模式。
FRUC合并模式中的运动得到过程有两个步骤。首先执行CU层级运动搜索,然后执行子CU层级运动细化。在CU层级,基于双边匹配或模板匹配得到整个CU的初始运动矢量。首先,生成MV候选者列表,并且选择使匹配成本最小的候选者作为用于进一步CU层级细化的起始点。接着执行基于起始点周围的双边匹配或模板匹配的局部搜索,并且将使匹配成本最小的MV作为整个CU的MV。随后,以得到的CU运动矢量作为起始点,在子CU层级处对运动信息进行进一步细化。
例如,针对W×H个CU运动信息得到执行以下得到过程。在第一阶段,得到整个W×H个CU的MV。在第二阶段,将CU进一步分成M×M个子CU。如方程式(1.8)计算M的值,D是预定义的分割深度,在JEM中默认设置为3。然后得到每个子CU的MV。
M=max(4,min(W/2^D,H/2^D))     (1.8)
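下面是按公式(1.8)计算FRUC子CU边长M的Python示意代码,函数名为假设:

def fruc_subcu_size(W, H, D=3):
    # D为预定义的分割深度,默认设置为3
    return max(4, min(W >> D, H >> D))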
使用双边匹配通过在两个不同的参考图片中找到沿着当前CU的运动轨迹的两个块之间的最接近匹配来得到当前CU的运动信息。在连续运动轨迹的前提下,指向两个参考块的运动矢量MV0和MV1应与当前图片与两个参考图片之间的时间距离、即TD0和TD1成比例。在当前图片在时间上在两个参考图片之间并且从当前图片到两个参考图片的时间距离相同时,双边匹配成为基于镜像的双向MV。
在双边匹配合并模式中,由于基于在两个不同的参考图片中沿着当前CU的运动轨迹的两个块之间的最近匹配来得到CU的运动信息,所以始终应用双向预测。对于模板匹配合并模式没有此类限制。在模板匹配合并模式中,编码器可以在来自list0的单向预测、来自list1的单向预测或针对CU的双向预测之间进行选择。基于模板匹配成本进行选择,如下:
若costBi<=因子*min(cost0,cost1)
使用双向预测;
否则,若cost0<=cost1
使用来自list0的单向预测;
否则,
使用来自list1的单向预测;
其中cost0是list0模板匹配的SAD,cost1是list1模板匹配的SAD,costBi是双向预测模板匹配的SAD。因子的值等于1.25,这意味着选择过程偏向于双向预测。帧间预测方向选择仅适用于CU层级模板匹配过程。
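下面用Python示意代码给出上述基于模板匹配成本的帧间预测方向选择;返回值用字符串表示预测方向仅为示意性假设:

def select_fruc_pred_dir(cost0, cost1, cost_bi, factor=1.25):
    # factor等于1.25,使选择过程偏向于双向预测
    if cost_bi <= factor * min(cost0, cost1):
        return 'BI'                                  # 使用双向预测
    return 'LIST0' if cost0 <= cost1 else 'LIST1'    # 否则选择模板匹配SAD较小的单向预测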
使用模板匹配通过找到当前图片中的模板(当前CU的顶部和/或左侧相邻块)与参考图片中的块(与模板相同大小)之间的最接近匹配来得到当前CU的运动信息。除了前面提到的FRUC合并模式外,模板匹配也适用于AMVP模式。使用模板匹配方法,得到新的候选者。如果通过模板匹配新得到的候选者与第一现有AMVP候选者不同,则将其插入到所述AMVP候选者列表的最开始,然后将列表大小设置为2(意味着除去第二现有AMVP候选者)。当应用于AMVP模式时,仅应用CU层级搜索。
设置在CU层级的MV候选者包括:a.如果当前CU处于AMVP模式,则选择原始AMVP候选者;b.所有合并候选者;c.内插MV场中的几个MV;d.顶部和左侧的相邻运动矢量。
应注意,上述内插MV场是在基于单边ME对整个图片的图片编码之前生成的。然后运动场可以稍后用作CU层级或子CU层级MV候选者。首先,在4×4块层级遍历两个参考列表中的每个参考图片的运动场。对于每个4×4块,如果与块相关的运动通过当前图片中的4×4块,并且所述块尚未被分配任何内插运动,则根据时间距离TD0和TD1将参考块的运动缩放到当前图片(与HEVC中的TMVP的MV缩放的方式相同),并且将缩放后的运动分配给当前帧中的块。如果未将缩放MV分配给4×4块,则块的运动在内插运动场中被标记为不可用。
当使用双边匹配时,合并候选者的每个有效MV被用作输入以在假定双边匹配的情况下生成MV对。例如,合并候选者的一个有效MV在参考列表A处为(MVa,refa)。然后,在另一个参考列表B中找到其配对双边MV的参考图片refb,使得refa和refb在时间上处于当前图片的不同侧。如果这样的refb在参考列表B中不可用,则将refb确定为与refa不同的参考,并且其到当前图片的时间距离是列表B中的最小值。在确定refb 之后,通过基于当前图片与refa、refb之间的时间距离缩放MVa来得到MVb。
来自内插MV场的四个MV也被添加到CU层级候选者列表。更具体地说,添加在当前CU的位置(0,0)、(W/2,0)、(0,H/2)和(W/2,H/2)处的内插MV。
当在AMVP模式中应用FRUC时,原始AMVP候选者也被添加到CU层级MV候选者集。
在CU层级,候选者列表中添加了最多15个AMVP CU的MV和最多13个合并CU的MV。
设置在子CU层级的MV候选者包括:a.从CU层级搜索确定的MV;b.顶部、左侧、左上和右上的相邻MV;c.来自参考图片的同位MV的缩放版本;d.最多4个ATMVP候选者;e.最多4个STMVP候选者。
如下得到来自参考图片的缩放的MV。遍历两个列表中的所有参考图片。将参考图片中的子CU的同位位置处的MV缩放成起始CU层级MV的参考。
ATMVP和STMVP候选者限于前四个。
在子CU层级,将最多17个MV添加到候选者列表。
运动矢量细化
运动矢量可以通过结合不同帧间预测模式的不同方法来细化。
FRUC中的MV细化
MV细化是基于模式的MV搜索,具有双边匹配成本或模板匹配成本的标准。在当前的开发中,支持两种搜索模式-分别在CU层级和子CU层级上的无限制中心偏置菱形搜索(unrestricted center-biased diamond search,UCBDS)和用于MV细化的自适应交叉搜索。对于CU和子CU层级MV的细化,以四分之一亮度样本MV精度直接搜索MV,然后是八分之一亮度样本MV细化。用于CU和子CU步骤的MV细化的搜索范围被设置成等于8个亮度样本。
解码器端运动矢量细化
在双向预测操作中,为了预测一个块区域,组合分别使用list0的MV和list1的MV形成的两个预测块以形成单个预测信号。在解码器端运动矢量细化(decoder-side motion vector refinement,DMVR)方法中,通过双边模板匹配过程进一步细化双向预测的两个运动矢量。在解码器中应用双边模板匹配以在双边模板与参考图片中的重构样本之间执行基于失真的搜索,以便获得细化的MV而无需发送另外的运动信息。
在DMVR中,分别从list0的初始MV0和list1的MV1生成双边模板作为两个预测块的加权组合(即,平均值)。模板匹配操作由计算所生成的模板与参考图片中的样本区域(初始预测块周围)之间的成本度量构成。对于两个参考图片中的每一个,产生最小模板成本的MV被视为所述列表的更新MV以替换原始MV。在当前的开发中,每个列表都会搜索九个MV候选者。九个MV候选者包含原始MV和8个周围MV,其中一个亮度样本沿水平或竖直方向或这两者偏移到原始MV。最后,使用两个新的MV、即MV0′和MV1′生成最终的双向预测结果。绝对差总和(sum of absolute differences,SAD)用作成本度量。
将DMVR应用于双向预测的合并模式,其中一个MV来自过去的参考图片,另一MV来自未来的参考图片,而无需发送另外的语法元素。
运动数据精度和存储
运动数据存储减小
在AMVP以及合并模式中使用TMVP需要将运动数据(包含运动矢量、参考索引和编码模式)存储在同位的参考图片中。考虑到运动表示的粒度,存储运动数据所需的内存大 小会很重要。HEVC采用运动数据存储减小(motion data storage reduction,MDSR)以通过对参考图片中的运动数据进行二次采样来减小运动数据缓冲器和相关联的内存访问带宽的大小。虽然H.264/AVC以4×4块为基础存储这些信息,但HEVC使用16×16块,其中,在对4×4网格进行二次采样的情况下,存储左上4×4块的信息。由于这种二次采样,MDSR影响了时间预测的质量。
此外,在同位图片中使用的MV的位置与MDSR所存储的MV的位置之间存在紧密的相关性。在HEVC的标准化过程中,结果表明,将左上块的运动数据与右下和中心TMVP候选者一起存储在16×16区域内提供了编码效率和内存带宽减小之间的最佳折衷。
更高的运动矢量存储精度
在HEVC中,运动矢量精度是四分之一像素(对于4:2:0视频为四分之一亮度样本和八分之一色度样本)。在当前的开发中,内部运动矢量存储和合并候选者的精度提高到1/16像素。在运动补偿帧间预测中针对以跳过/合并模式编码的CU使用更高的运动矢量精度(1/16像素)。对于以正常AMVP模式编码的CU,使用整数像素或四分之一像素运动。
自适应运动矢量差分辨率
在HEVC中,当片头中use_integer_mv_flag等于0时,以四分之一亮度样本为单位用信号表示运动矢量差(motion vector difference,MVD)。在当前的开发中,引入了局部自适应运动矢量分辨率(locally adaptive motion vector resolution,LAMVR)。MVD可以四分之一亮度样本、整数亮度样本或四亮度样本为单位进行编码。在编码单元(coding unit,CU)层级控制MVD分辨率,并且针对具有至少一个非零MVD分量的每个CU有条件地用信号表示MVD分辨率标志。
对于具有至少一个非零MVD分量的CU,用信号表示第一标志以指示在CU中是否使用四分之一亮度样本MV精度。当第一标志(等于1)指示未使用四分之一亮度样本MV精度时,用信号表示另一标志以指示是否使用整数亮度样本MV精度或四亮度样本MV精度。
当CU的第一MVD分辨率标志为零或未针对CU编码(意味着CU中的所有MVD均为零)时,对所述CU使用四分之一亮度样本MV分辨率。当CU使用整数亮度样本MV精度或四亮度样本MV精度时,CU的AMVP候选者列表中的MVP被舍入到相应的精度。
在编码器中,使用CU层级RD检查以确定针对CU使用哪个MVD分辨率。即,对于每个MVD分辨率,执行三次CU层级RD检查。
分数样本内插模块
当运动矢量指向分数样本位置时,需要运动补偿内插。对于亮度内插滤波,如表1.2所示,对于2/4精度样本,使用8抽头可分离的基于DCT的内插滤波器,对于1/4精度样本,使用7抽头可分离的基于DCT的内插滤波器。
表1.2
位置 滤波器系数
1/4 {-1,4,-10,58,17,-5,1}
2/4 {-1,4,-11,40,40,-11,4,-1}
3/4 {1,-5,17,58,-10,4,-1}
类似地,对于色度内插滤波器,使用4抽头可分离的基于DCT的内插滤波器器,如表 1.3所示。
表1.3
位置 滤波器系数
1/8 {-2,58,10,-2}
2/8 {-4,54,16,-2}
3/8 {-6,46,28,-4}
4/8 {-4,36,36,-4}
5/8 {-4,28,46,-6}
6/8 {-2,16,54,-4}
7/8 {-2,10,58,-2}
对于4:2:2的竖直内插以及4:4:4色度通道的水平和竖直内插,不使用表1.3中的奇数位置,从而产生1/4色度内插。
对于双向预测,在对两个预测信号求平均之前,将内插滤波器的输出保持为14位精度,而不管源位深度如何。实际的求平均过程隐含在位深度减小过程中,如下所示:
predSamples[x,y]=(predSamplesL0[x,y]+predSamplesL1[x,y]+offset)>>shift     (1.9)
shift=15-BitDepth     (1.10)
offset=1<<(shift-1)    (1.11)
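下面给出按公式(1.9)至(1.11)对两个14位中间精度的预测信号求平均并降回样本位深度的Python示意代码;二维列表的表示方式为示意性假设(实际标准中随后还需钳位到样本取值范围):

def bipred_average(pred_l0, pred_l1, bit_depth):
    shift = 15 - bit_depth                           # 对应公式(1.10)
    offset = 1 << (shift - 1)                        # 对应公式(1.11)
    return [[(a + b + offset) >> shift for a, b in zip(row0, row1)]
            for row0, row1 in zip(pred_l0, pred_l1)]  # 对应公式(1.9)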
为了降低复杂度,对双边匹配和模板匹配使用双边线性内插而不是常规的8抽头HEVC内插。
匹配成本的计算在不同的步骤中有些不同。当从CU层级的候选者集选择候选者时,匹配成本是双边匹配或模板匹配的SAD。在确定起始MV之后,子CU层级搜索处的双边匹配的匹配成本C计算如下:
C=SAD+w·(|MVx-MVx^s|+|MVy-MVy^s|)     (1.12)
其中w是经验地设定为4的加权因子,MV和MV^s分别指示当前MV和起始MV。SAD仍被用作子CU层级搜索处的模板匹配的匹配成本。
在FRUC模式下,仅通过使用亮度样本得到MV。得到的运动将用于MC帧间预测的亮度和色度。在确定MV之后,使用用于亮度的8抽头内插滤波器和用于色度的4抽头内插滤波器来执行最终MC。
运动补偿模块
重叠块运动补偿
对于所有运动补偿(motion compensation,MC)块边界执行重叠块运动补偿(Overlapped Block Motion Compensation,OBMC),除了当前开发中的CU的右边界和底部边界之外。此外,它适用于亮度和色度两个分量。MC块对应于编码块。当CU以子CU模式(包含子CU合并、仿射和FRUC模式)编码时,CU的每个子块是MC块。为了以统一的方式处理CU边界,针对所有MC块边界在子块层级执行OBMC,其中子块大小被设 置成等于4×4。
当OBMC应用于当前子块时,除了当前运动矢量之外,如果四个连接的相邻子块的运动矢量可用并与当前运动矢量不相同,还使用这四个连接的相邻子块的运动矢量来得到当前子块的预测块。组合基于多个运动矢量的这些多个预测块以生成当前子块的最终预测信号。
基于相邻子块的运动矢量的预测块标示为PN,其中N表示相邻的上、下、左和右子块的索引,并且基于当前子块的运动矢量的预测块标示为PC。当PN是基于含有与当前子块相同的运动信息的相邻子块的运动信息时,不从PN执行OBMC。否则,将每个PN样本添加到PC中的相同样本中,即,将四行/列的PN添加到PC。对于PN使用加权因子{1/4,1/8,1/16,1/32},对于PC使用加权因子{3/4,7/8,15/16,31/32}。例外的是小的MC块(即,当编码块的高度或宽度等于4或者CU使用子CU模式编码时),对于这种块,只将两行/列的PN添加到PC。在这种情况下,对于PN使用加权因子{1/4,1/8},对于PC使用加权因子{3/4,7/8}。对于基于竖直(水平)相邻子块的运动矢量生成的PN,将PN的同一行(列)中的样本以相同的加权因子添加到PC。
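下面以上边界为例,用Python示意代码说明OBMC中PN与PC按行加权混合的过程;pc、pn为二维预测样本列表,small_block表示上文所述只混合两行的小MC块情形,这些参数与浮点权重的写法均为示意性假设:

def obmc_blend_top(pc, pn, small_block=False):
    wn = [1 / 4, 1 / 8] if small_block else [1 / 4, 1 / 8, 1 / 16, 1 / 32]    # PN的加权因子
    wc = [3 / 4, 7 / 8] if small_block else [3 / 4, 7 / 8, 15 / 16, 31 / 32]  # PC的加权因子
    for r in range(len(wn)):                         # 仅混合靠近上边界的前几行
        for c in range(len(pc[r])):
            pc[r][c] = wc[r] * pc[r][c] + wn[r] * pn[r][c]
    return pc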
在当前的开发中,对于大小小于或等于256个亮度样本的CU,用信号表示CU层级标志以指示对于当前CU是否应用OBMC。对于大小大于256个亮度样本或未采用AMVP模式编码的CU,默认地应用OBMC。在编码器处,当OBMC应用于CU时,在运动估计阶段将其影响考虑在内。使用由OBMC利用顶部相邻块和左侧相邻块的运动信息形成的预测信号来补偿当前CU的原始信号的顶部和左侧边界,然后应用正常运动估计处理。
优化工具
局部照度补偿
局部照度补偿(Local Illumination Compensation,LIC)基于照度变化的线性模型,使用比例因子a和偏移量b。并且其针对每个帧间模式编码的编码单元(coding unit,CU)自适应地启用或禁用。
当LIC应用于CU时,采用最小平方误差法通过使用当前CU的相邻样本及其对应的参考样本来得到参数a和b。使用CU的二次采样的(2:1二次采样)相邻样本和参考图片中的对应样本(由当前CU或子CU的运动信息标识)。得到IC参数并分别应用于每个预测方向。
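下面给出用最小平方误差法由相邻样本求LIC参数a、b的Python示意代码;输入为当前CU的相邻重建样本及其在参考图片中的对应样本(已按上文所述二次采样),退化情形的处理方式为示意性假设:

def derive_lic_params(neigh_cur, neigh_ref):
    n = len(neigh_cur)
    sum_x = sum(neigh_ref)
    sum_y = sum(neigh_cur)
    sum_xx = sum(x * x for x in neigh_ref)
    sum_xy = sum(x * y for x, y in zip(neigh_ref, neigh_cur))
    denom = n * sum_xx - sum_x * sum_x
    if denom == 0:
        return 1.0, 0.0                              # 相邻样本恒定时退化为不做照度变化补偿
    a = (n * sum_xy - sum_x * sum_y) / denom         # 比例因子a
    b = (sum_y - a * sum_x) / n                      # 偏移量b
    return a, b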
当CU以合并模式编码时,以类似于合并模式下的运动信息复制的方式从相邻块复制LIC标志;否则,针对CU用信号表示LIC标志以指示LIC是否适用。
当针对图片启用LIC时,需要另外的CU层级RD检查以确定LIC是否适用于CU。当针对CU启用LIC时,使用均值移除绝对差总和(mean-removed sum of absolute difference,MR-SAD)和均值移除绝对阿达马变换差总和(mean-removed sum of absolute Hadamard-transformed difference,MR-SATD)代替SAD和SATD分别用于整数像素运动搜索和分数像素运动搜索。
双向光流
双向光流(Bi-directional Optical flow,BIO)是在双向预测的块方式运动补偿之上执行的采样方式运动细化。样本层级运动细化不使用信令。
假设I^(k)是在块运动补偿之后来自参考k(k=0,1)的亮度值,且∂I^(k)/∂x、∂I^(k)/∂y分别是I^(k)梯度的水平分量和竖直分量。假设光流是有效的,运动矢量场(vx,vy)通过方程式(1.13)给出:
∂I^(k)/∂t+vx·∂I^(k)/∂x+vy·∂I^(k)/∂y=0     (1.13)
将此光流方程式与埃尔米特内插结合起来用于每个样本的运动轨迹,结果得到独特的三阶多项式,它在两端与函数值I^(k)和导数∂I^(k)/∂x、∂I^(k)/∂y都匹配。此多项式在t=0时的值是BIO预测值:
pred_BIO=1/2·(I^(0)+I^(1)+vx/2·(τ1·∂I^(1)/∂x-τ0·∂I^(0)/∂x)+vy/2·(τ1·∂I^(1)/∂y-τ0·∂I^(0)/∂y))     (1.14)
这里,τ0和τ1表示到参考帧的距离。根据Ref0和Ref1的POC计算距离τ0和τ1:τ0=POC(当前)-POC(Ref0),τ1=POC(Ref1)-POC(当前)。如果两个预测都来自同一时间方向(无论是都来自过去还是都来自未来),正负号就不同(即τ0·τ1<0)。在这种情况下,只有当预测不是来自相同时刻(即τ0≠τ1),两个参考区域都具有非零运动(MVx0,MVy0,MVx1,MVy1≠0)并且块运动矢量与时间距离(MVx0/MVx1=MVy0/MVy1=-τ0/τ1)成比例时才应用BIO。
运动矢量场(vx,vy)通过最小化点A和B(运动轨迹和参考帧平面的交点)中的值之间的差值Δ来确定。模型仅使用Δ的局部泰勒展开式的第一个线性项:
Δ=I^(0)-I^(1)+vx·(τ1·∂I^(1)/∂x+τ0·∂I^(0)/∂x)+vy·(τ1·∂I^(1)/∂y+τ0·∂I^(0)/∂y)     (1.15)
方程式(1.15)中的所有值取决于样本位置(i′,j′),目前从符号中省略。假设运动在局部周围区域是一致的,则在以当前预测点(i,j)为中心的(2M+1)×(2M+1)方窗Ω内最小化Δ,其中M等于2:
(vx,vy)=argmin Σ_{[i′,j′]∈Ω} Δ^2[i′,j′]     (1.16)
对于此优化问题,当前的开发使用了一种简化的方法,首先在竖直方向上最小化,然后在水平方向上最小化。这得出
vx=(s1+r)>m ? clip3(-thBIO,thBIO,-s3/(s1+r)) : 0     (1.17)
vy=(s5+r)>m ? clip3(-thBIO,thBIO,-(s6-vx·s2/2)/(s5+r)) : 0     (1.18)
其中,
s1=Σ_{[i′,j′]∈Ω}(τ1·∂I^(1)/∂x+τ0·∂I^(0)/∂x)^2,
s2=Σ_{[i′,j′]∈Ω}(τ1·∂I^(1)/∂x+τ0·∂I^(0)/∂x)·(τ1·∂I^(1)/∂y+τ0·∂I^(0)/∂y),
s3=Σ_{[i′,j′]∈Ω}(I^(1)-I^(0))·(τ1·∂I^(1)/∂x+τ0·∂I^(0)/∂x),
s5=Σ_{[i′,j′]∈Ω}(τ1·∂I^(1)/∂y+τ0·∂I^(0)/∂y)^2,
s6=Σ_{[i′,j′]∈Ω}(I^(1)-I^(0))·(τ1·∂I^(1)/∂y+τ0·∂I^(0)/∂y)     (1.19)
为了避免被零或非常小的值除,在方程式(1.17)和(1.18)中引入正则化参数r和m。
r=500·4^(d-8)   (1.20)
m=700·4^(d-8)   (1.21)
这里d是视频样本的位深度。
为了保持BIO的内存访问与常规双向预测运动补偿相同,仅针对当前块内的位置计算所有的预测和梯度值
I^(k)、∂I^(k)/∂x和∂I^(k)/∂y。
在方程式(1.19)中,以预测块的边界上的当前预测点为中心的(2M+1)×(2M+1)方窗Ω需要访问块外的位置。在当前的开发中,块外的值
I^(k)、∂I^(k)/∂x、∂I^(k)/∂y
被设置成等于块内最接近的可用值。例如,这可以实现为填充。
使用BIO时,可能会针对每个样本细化运动场,但为了降低计算复杂度,可以使用基于块的BIO设计。基于4×4块计算运动细化。在基于块的BIO中,聚合4×4块中的所有样本在方程式(1.19)中的值sn,然后使用聚合的值sn得到4×4块的BIO运动矢量偏移量。使用以下公式进行基于块的BIO得到:
s_{n,bk}=Σ_{(x,y)∈bk} sn(x,y)     (1.22)
其中bk表示属于预测块的第k个4×4块的样本集合。方程式(1.17)和(1.18)中的sn替换为((sn,bk)>>4)以得到相关的运动矢量偏移量。
在一些情况下,由于噪音或不规则运动,BIO的MV方案可能不可靠。因此,在BIO中,MV方案的大小被截取为阈值thBIO。基于当前图片的参考图片是否都来自一个方向来确定所述阈值。如果当前图片的所有参考图片均来自一个方向,则将所述阈值的值设置成12×2^(14-d);否则设置成12×2^(13-d)。
使用与HEVC运动补偿过程一致的操作(2D可分离的FIR),与运动补偿内插同时地计算BIO的梯度。此2D可分离的FIR的输入对于运动补偿过程和根据块运动矢量的分数部分的分数位置(fracX,fracY)是相同的参考帧样本。在水平梯度
∂I/∂x
信号的情况下,首先使用BIOfilterS对应于具有消除缩放位移d-8的分数位置fracY竖直内插,然后在水平方向上对应于具有消除缩放位移18-d的分数位置fracX应用梯度滤波器BIOfilterG。在竖直梯度
∂I/∂y
的情况下,首先使用BIOfilterG对应于具有消除缩放位移d-8的分数位置fracY竖直应用梯度滤波器,然后使用BIOfilterS在水平方向上对应于具有消除缩放位移18-d的分数位置fracX执行信号位移。为了保持合理的复杂度,梯度计算BIOfilterG和信号位移BIOfilterF的内插滤波器的长度较短(6抽头)。表1.4示出了用于BIO中块运动矢量的不同分数位置的梯度计算的滤波器。表1.5示出了用于BIO中预测信号生成的内插滤波器。
表1.4
分数像素位置 用于梯度的内插滤波器(BIOfilterG)
0 {8,-39,-3,46,-17,5}
1/16 {8,-32,-13,50,-18,5}
1/8 {7,-27,-20,54,-19,5}
3/16 {6,-21,-29,57,-18,5}
1/4 {4,-17,-36,60,-15,4}
5/16 {3,-9,-44,61,-15,4}
3/8 {1,-4,-48,61,-13,3}
7/16 {0,1,-54,60,-9,2}
1/2 {-1,4,-57,57,-4,1}
表1.5
分数像素位置 用于预测信号的内插滤波器(BIOfilterS)
0 {0,0,64,0,0,0}
1/16 {1,-3,64,4,-2,0}
1/8 {1,-6,62,9,-3,1}
3/16 {2,-8,60,14,-5,1}
1/4 {2,-9,57,19,-7,2}
5/16 {3,-10,53,24,-8,2}
3/8 {3,-11,50,29,-9,2}
7/16 {3,-11,44,35,-10,3}
1/2 {3,-10,35,44,-11,3}
在当前的开发中,当两个预测来自不同参考图片时,将BIO应用于所有双向预测块。当针对CU启用LIC时,禁用BIO。在正常MC过程之后针对块应用OBMC。为了降低计算复杂性,在OBMC过程中不应用BIO。这意味着仅在使用其自身的MV时在MC过程中应用BIO,而在OBMC过程中使用相邻块的MV时在MC过程中不应用BIO。
加权样本预测模块
作为可选工具,HEVC提供加权预测(weighted prediction,WP)工具。WP的原理是用线性加权预测信号P'(具有权重w和偏移量o)代替帧间预测信号P:
单向预测:P'=w×P+o    (1.23)
双向预测:P'=(w0×P0+o0+w1×P1+o1)/2     (1.24)
由编码器选择适用的权重和偏移量并在码流内传送。L0和L1后缀分别定义参考图片列表的List0和List1。对于内插滤波器,在对预测信号求平均之前,将位深度保持为14位精度。
在每个列表L0和L1中具有可用的至少一个参考图片的双向预测的情况下,以下公式适用于与亮度通道有关的加权预测参数的显式信令。对应的公式适用于色度通道和单向预测的情况。
predSamples[x][y]=
Clip3(0,(1<<bitDepth)-1,(predSamplesL0[x][y]*w0+predSamplesL1[x][y]*w1+((o0+o1+1)<<log2WD))>>(log2WD+1))     (1.25)
其中
log2WD=luma_log2_weight_denom+14-bitDepth
w0=LumaWeightL0[refIdxL0],w1=LumaWeightL1[refIdxL1]
o0=luma_offset_l0[refIdxL0]*highPrecisionScaleFactor
o1=luma_offset_l1[refIdxL1]*highPrecisionScaleFactor
highPrecisionScaleFactor=(1<<(bitDepth-8))
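下面给出按公式(1.25)对单个亮度样本做显式加权双向预测的Python示意代码;w0、w1、o0、o1分别对应上式中的LumaWeightL0/L1与luma_offset_l0/l1,函数名与参数组织方式为本文说明所作的假设:

def explicit_wp_bipred_sample(p_l0, p_l1, w0, w1, o0, o1, bit_depth, luma_log2_weight_denom):
    log2wd = luma_log2_weight_denom + 14 - bit_depth
    hps = 1 << (bit_depth - 8)                       # highPrecisionScaleFactor
    val = (p_l0 * w0 + p_l1 * w1 + ((o0 * hps + o1 * hps + 1) << log2wd)) >> (log2wd + 1)
    return max(0, min((1 << bit_depth) - 1, val))    # 对应Clip3(0,(1<<bitDepth)-1,·)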
边界预测滤波(Boundary prediction filters)是对于预测像素最左列或最上列进行进一步调整的帧内编码方法。在HEVC中,在针对竖直或水平内部模式已经产生内部预测块之后,分别对预测样本的最左列或最顶行进行进一步调整。此方法可进一步扩展到若干个对角线内部模式,并且使用双抽头(用于内部模式2和34)或三抽头滤波器(用于内部模式3到6和30到33)进一步调整至多四列或四行的边界样本。
在HEVC和之前的标准中,参考帧分为前向和后向两组,分别放置在两个参考帧列表(reference picture list)中,一般命名为list0和list1。通过帧间预测方向指示当前块使用前向预测、后向预测或双向预测其中的何种预测方向,根据预测的方向选择使用不同的参考帧列表list0、list1或者list0和list1。对于选定的参考帧列表,通过参考帧索引指明参考帧。在选定的参考帧中,通过运动矢量指示当前块的预测块在参考帧中的参考块相对当前帧中当前块的位置偏移。然后根据预测方向,使用从list0、list1或者list0和list1中的参考帧中取得的预测块生成最终的预测块。其中当预测方向为单向时,直接使用从list0或list1中的参考帧中取得的预测块,当预测方向为双向时,将list0和list1中的参考帧中取得的预测块通过加权平均的方式合成最终预测块。
本申请提出一种对于帧间编码的预测块进行空域滤波的方法,它应用于帧间预测中,在编码端和解码端中的处理是相同的。
关键术语定义
帧内预测编码:用周围邻近的像素值来预测当前的像素值,然后对预测误差进行编码 的编码方式。
编码图片:含有图片的所有编码树单元的图片的编码表示。
运动矢量(motion vector,MV):用于帧间预测的二维矢量,其提供从解码图片中的坐标到参考图片中的坐标的偏移量。
预测块:在其上应用相同预测的矩形M×N样本块。
预测过程:使用预测值提供当前被解码的数据元素(例如,样本值或运动矢量)的估计值。
预测值:指定值或后续数据元素解码过程中使用的先前解码数据元素(例如,样本值或运动矢量)的组合。
参考帧:作为短期参考图片或长期参考图片的图片或帧。参考帧含有可以按解码顺序用于后续图片的解码过程中的帧间预测的样本。
帧间预测:根据当前块的参考帧中的像素,通过运动矢量指示参考帧中用于预测的像素的位置,产生当前块的预测图像。
双向预测(B)片:可以使用帧内预测或帧间预测用最多两个运动矢量和参考索引预测每个块的样本值而解码的片。
CTU:编码树单元(coding tree unit),一幅图像由多个CTU构成,一个CTU通常对应于一个方形图像区域,包含这个图像区域中的亮度像素和色度像素(或者也可以只包含亮度像素,或者也可以只包含色度像素);CTU中还包含语法元素,这些语法元素指示如何将CTU划分成至少一个编码单元(coding unit,CU),以及解码每个编码单元得到重建图像的方法。
CU:编码单元,对应于图像中一个A×B的矩形区域,包含A×B亮度像素或/和它对应的色度像素,A为矩形的宽,B为矩形的高,A和B可以相同也可以不同,A和B的取值通常为2的整数次幂,例如128、64、32、16、8、4。一个编码单元包含预测图像和残差图像,预测图像与残差图像相加得到编码单元的重建图像。预测图像通过帧内预测或帧间预测生成,残差图像通过对变换系数进行反量化和反变换处理生成。
VTM:JVET组织开发的新式编解码器参考软件。
融合编码(merge):一种帧间编码编码方式,其运动矢量不直接在码流中传递。当前块可根据融合序号(merge index)从融合候选列表(merge candidate list)中选择对应的融合候选,将融合候选的运动信息作为当前块的运动信息,或者对融合候选的运动信息经过缩放后作为当前块的运动信息。
图13为本申请实施例的示意性流程图,涉及一种预测运动信息的解码方法。
帧间预测模式获得的的预测像素在空域上存在一定的不连续性,影响预测效率,造成预测残差能量较大。本申请提出一种对于帧间编码预测块进行空域滤波的方法,在生成预测像素之后,利用周围相邻重建像素对于预测像素进行滤波。
在本申请实施例中对图像中的至少一个使用帧间预测的图像块进行解码处理,得到该图像块的重建图像。
具体的,本方法包括:
S1301、解析码流,以获得待处理图像块的运动信息。
若当前块(即待处理图像块)为merge/skip模式,生成融合运动信息候选列表。具体包括:将与当前块的空间候选和时域候选加入当前块的融合运动信息候选列表中,其 方法与HEVC中的方法相同。如图14所示,空间融合候选包含A0、A1、B0、B1、和B2,时域融合候选包括T0和T1。在VTM中,时域融合候选也包括自适应时域运动矢量预测(ATMVP)技术提供的候选。本发明不涉及生成融合运动信息候选列表相关的过程,该过程可采用HEVC或者VTM中的方法进行,也可采用其他生成融合运动信息候选列表的方法。若当前块为Inter MVP模式,则生成运动矢量预测候选列表。
然后获取原有假设的运动信息,若当前块为merge/skip模式,则根据码流中携带的融合索引确定当前块的运动信息。若当前块为Inter MVP模式,则根据码流中传送的帧间预测方向、参考帧索引、运动矢量预测值索引、运动矢量残差值确定当前块运动信息。
本步骤可采用HEVC或者VTM中的方法进行,也可采用其他生成运动矢量预测候选列表的方法,不作限定。
S1302、(可选的)确定更新所述待处理图像块的预测块。
在一种可行的实施方式中,解析所述码流,以获得所述待处理图像块的更新判别标识信息;确定所述更新判别标识信息指示更新所述待处理图像块的预测块。
在另一种可行的实施方式中,获取所述待处理图像块的预设的更新判别标识信息;确定所述更新判别标识信息指示更新所述待处理图像块的预测块。
即在码流中不传送帧间编码预测像素滤波方法的标识。
示例性的,确定帧间预测空域滤波标识(即更新判别标识),若为真则对预测块进行滤波。
S1303、(可选的)确定待处理图像块的预测模式。
在一种可行的实施方式中,解析所述码流,以获得所述待处理图像块的预测模式;确定所述预测模式为融合模式(merge)或跳过模式(skip)。
即仅对以merge/skip模式进行编码的帧间编码块执行帧间编码预测像素滤波方法。
S1304、基于所述运动信息对所述待处理图像块进行运动补偿,以获得所述待处理图像块的预测块。
具体的,利用参考帧方向、参考帧序号和运动矢量,从参考帧中得到预测块。参考帧方向为前向预测是指当前编码单元从前向参考图像集合中选择一个参考图像获取参考块。参考帧方向为后向预测是指当前编码单元从后向参考图像集合中选择一个参考图像获取参考块。参考帧方向为双向预测是指从前向和后向参考图像集合中各选择一个参考图像获取参考块。当使用双向预测方法时,当前编码单元会存在两个参考块,每个参考块各自需要运动矢量和参考帧索引进行指示。然后根据参考块内像素点的像素值确定当前块的预测块内像素点像素值。
本步骤可采用HEVC或者VTM中的方法进行,也可采用其他进行运动补偿的方法,不作限定。
S1306、将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算,以更新所述目标像素点的预测值。
其中,所述参考像素点与所述目标像素点具有预设的空域位置关系。
即利用当前CU空域临近已重建像素对于预测像素进行空域滤波,当前CU空域临近已重建像素与预测块中的预测像素加权平均得到新的预测像素值。
在一种可行的实施方式中,所述一个或多个参考像素点包括与所述目标像素点具有相同横坐标且具有预设纵坐标差的已重构像素点,或者,与所述目标像素点具有相同纵 坐标且具有预设横坐标差的已重构像素点。
在一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=
(w1*predP(xP,yP)+w2*recon(xN-M1,yP))/(w1+w2),当仅坐标位置(xN-M1,yP)的参考像素点可用时;
(w3*predP(xP,yP)+w4*recon(xP,yN-M2))/(w3+w4),当仅坐标位置(xP,yN-M2)的参考像素点可用时;
(w5*predP(xP,yP)+w6*recon(xN-M1,yP)+w7*recon(xP,yN-M2))/(w5+w6+w7),当上述两个坐标位置的参考像素点均可用时,
其中,所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(xN,yN),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,recon(xN-M1,yP),recon(xP,yN-M2)分别为位于坐标位置(xN-M1,yP),(xP,yN-M2)的所述参考像素点的重构值,w1,w2,w3,w4,w5,w6为预设常数,M1,M2为预设正整数。
在一种可行的实施方式中,R为2的n次方,其中,n为非负整数,R=w1+w2,或,R=w3+w4,或,R=w5+w6+w7。
具体的:
当前像素的坐标为(xP,yP),当前预测像素值为predP(xP,yP),当前CU左上角像素的坐标为(xN,yN),当前帧中已重建像素的像素值为recon(x,y)。
若xN大于零,yN等于零,当前帧中(xN-M1,yP)位置的像素已编码重建,则滤波后的预测值predQ(xP,yP)等于
predQ(xP,yP)=(w1*predP(xP,yP)+w2*recon(xN-M1,yP))/(w1+w2)
其中w1和w2为预设的加权系数,M1为预设偏移值,取值为正整数。
若xN等于零,yN大于零,当前帧中(xP,yN-M2)位置的像素已编码重建,则滤波后的预测值predQ(xP,yP)等于
predQ(xP,yP)=(w1*predP(xP,yP)+w2*recon(xP,yN-M2))/(w1+w2)
其中w1和w2为预设的加权系数,M2为预设偏移值,取值为正整数。
若xN大于零,yN大于零,当前帧中(xN-M,yP)和(xP,yN-M)位置的像素已编码重建,则滤波后的预测值predQ(xP,yP)等于
predQ(xP,yP)=(w1*predP(xP,yP)+w2*recon(xN-M1,yP)+w3*recon(xP,yN-M2))/(w1+w2+w3)
其中w1、w2和w3为预设的加权系数,M1和M2为预设偏移值,取值为正整数。
M1和M2取值为正整数,取值为1,2,3,4等。
为了减少除法运算,加权系数组(w1,w2)或(w1,w2,w3)可采取w1+w2或w1+w2+w3等于2的N次幂的数值组合,例如(6,2)、(5,3)、(4,4)或(6,1,1)、(5,2,1)等等。
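下面给出按上述三种情形对帧间预测像素做空域滤波的Python示意代码。predP与recon均假设为按帧内坐标[y][x]索引的二维列表;加权系数之和假设为2的幂以便在实际实现中用移位代替除法,此处为便于阅读仍写作整除:

def filter_inter_pred_pixel(xP, yP, xN, yN, predP, recon, w1, w2, w3, M1, M2):
    p = predP[yP][xP]
    if xN > 0 and yN == 0:                           # 仅左侧已重建像素可用
        return (w1 * p + w2 * recon[yP][xN - M1]) // (w1 + w2)
    if xN == 0 and yN > 0:                           # 仅上方已重建像素可用
        return (w1 * p + w2 * recon[yN - M2][xP]) // (w1 + w2)
    if xN > 0 and yN > 0:                            # 左侧与上方已重建像素均可用
        return (w1 * p + w2 * recon[yP][xN - M1] + w3 * recon[yN - M2][xP]) // (w1 + w2 + w3)
    return p                                         # 无可用相邻重建像素时不滤波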
在一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=
(w1*predP(xP,yP)+w2*recon(xN-M1,yP)+w3*recon(xN-M2,yP))/(w1+w2+w3),当左侧参考像素点可用时;
(w4*predP(xP,yP)+w5*recon(xP,yN-M3)+w6*recon(xP,yN-M4))/(w4+w5+w6),当上方参考像素点可用时;
(w7*predP(xP,yP)+w8*recon(xN-M1,yP)+w9*recon(xN-M2,yP)+w10*recon(xP,yN-M3)+w11*recon(xP,yN-M4))/(w7+w8+w9+w10+w11),当左侧和上方参考像素点均可用时,
其中,所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(xN,yN),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,recon(xN-M1,yP),recon(xN-M2,yP),recon(xP,yN-M3),recon(xP,yN-M4)分别为位于坐标位置(xN-M1,yP),(xN-M2,yP),(xP,yN-M3),(xP,yN-M4)的所述参考像素点的重构值,w1,w2,w3,w4,w5,w6,w7,w8,w9,w10,w11为预设常数,M1,M2,M3,M4为预设正整数。
在一种可行的实施方式中,S为2的n次方,其中,n为非负整数,S=w1+w2+w3,或,S=w4+w5+w6,或,S=w7+w8+w9+w10+w11。
具体的:
当前像素的坐标为(xP,yP),当前预测像素值为predP(xP,yP),当前CU左上角像素的坐标为(xN,yN),当前帧中已重建像素的像素值为recon(x,y)。
若xN大于零,yN等于零,当前帧中(xN-M1,yP)和(xN-M2,yP)位置的像素已编码重建,则滤波后的预测值predQ(xP,yP)等于predQ(xP,yP)=(w1*predP(xP,yP)+w2*recon(xN-M1,yP)+w3*recon(xN-M2,yP))/(w1+w2+w3)
其中w1、w2和w3为预设的加权系数,M1和M2为预设偏移值,取值为正整数。
若xN等于零,yN大于零,当前帧中(xP,yN-M3)和(xP,yN-M4)位置的像素已编码重建,则滤波后的预测值predQ(xP,yP)等于predQ(xP,yP)=(w1*predP(xP,yP)+w2*recon(xP,yN-M3)+w3*recon(xP,yN-M4))/(w1+w2+w3)
其中w1、w2和w3为预设的加权系数,M3和M4为预设偏移值,取值为正整数。
若xN大于零,yN大于零,当前帧中(xN-M1,yP)、(xN-M2,yP)、(xP,yN-M3)和(xP,yN-M4)位置的像素已编码重建,则滤波后的预测值predQ(xP,yP)等于predQ(xP,yP)=(w1*predP(xP,yP)+w2*recon(xN-M1,yP)+w3*recon(xN-M2,yP)+w4*recon(xP,yN-M3)+w5*recon(xP,yN-M4))/(w1+w2+w3+w4+w5)
其中w1、w2、w3、w4和w5为预设的加权系数,M1、M2、M3和M4为预设偏移值,取值为正整数。
M1、M2、M3和M4取值为正整数,取值为1,2,3,4等。
为了减少除法运算,加权系数组(w1,w2,w3)或(w1,w2,w3,w4,w5)可采取 w1+w2+w3或w1+w2+w3+w4+w5等于2的N次幂的数值组合,例如(6,1,1)、(5,2,1)或(12,1,1,1,1)、(10,2,2,1,1)、(8,2,2,2,2)、(8,3,3,1,1)等等。
在一种可行的实施方式中,所述一个或多个参考像素点包括以下像素点中的一个或多个:
与所述目标像素点具有相同横坐标且与所述待处理图像块的上边缘相邻接的已重构像素点;或者,
与所述目标像素点具有相同纵坐标且与所述待处理图像块的左边缘相邻接的已重构像素点;或者,
所述待处理图像块的右上角的已重构像素点;或者,
所述待处理图像块的左下角的已重构像素点;或者,
所述待处理图像块的左上角的已重构像素点。
在一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=(w1*predP(xP,yP)+w2*predP1(xP,yP)+((w1+w2)/2))/(w1+w2)
其中,
predP1(xP,yP)=(predV(xP,yP)+predH(xP,yP)+nTbW*nTbH)>>(Log2(nTbW)+Log2(nTbH)+1),predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH))<<Log2(nTbW),predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1))<<Log2(nTbH),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度。
在一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=(w1*predP(xP,yP)
                 +w2*predV(xP,yP)
                 +w3*predH(xP,yP)+((w1+w2+w3)/2))/(w1+w2+w3)
其中,
predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH)+nTbH/2)>>Log2(nTbH),
predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1)+nTbW/2)>>Log2(nTbW),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度。
在一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=(((w1*predP(xP,yP))<<(Log2(nTbW)+Log2(nTbH)+1))
                  +w2*predV(xP,yP)
                  +w3*predH(xP,yP)
                    +(((w1+w2+w3)/2)<<(Log2(nTbW)+Log2(nTbH)+1)))
                    /(((w1+w2+w3)<<(Log2(nTbW)+Log2(nTbH)+1)))
其中,
predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH))<<Log2(nTbW),
predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1))<<Log2(nTbH),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度。
在一种可行的实施方式中,w1和w2的和为2的n次方,其中,n为非负整数。
具体的:
可以首先使用帧内预测中的平面模式(PLANAR),从空域临近像素预测得到第二预测像素值。假设当前像素的坐标为(xP,yP),当前预测像素值为predP(xP,yP),当前帧中已重建像素的像素值为p(x,y),第二预测像素值为predP1(xP,yP),则新的预测像素值predQ(xP,yP)等于
predQ(xP,yP)=(w1*predP(xP,yP)+w2*predP1(xP,yP))/(w1+w2)
其中w1和w2为预设的加权系数,取值为正整数。为了减少除法运算,加权系数组(w1,w2)可采取w1+w2等于2的N次幂的数值组合,例如(6,2)、(5,3)、(4,4)等等。
predP1(xP,yP)使用VTM中的方法获得,如下式:
predV(x,y)=((nTbH-1-y)*p(x,-1)+(y+1)*p(-1,nTbH))<<Log2(nTbW)
predH(x,y)=((nTbW-1-x)*p(-1,y)+(x+1)*p(nTbW,-1))<<Log2(nTbH)
predP1(x,y)=(predV(x,y)+predH(x,y)+nTbW*nTbH)>>(Log2(nTbW)+Log2(nTbH)+1)
其中nTbW和nTbH为当前CU的宽度和高度，p(x,y)表示临近像素，x和y为坐标，详细过程参考JVET-K1001 VVC spec。
需要说明的是，生成第二预测像素值predP1(xP,yP)所使用的平面模式（PLANAR）算法不仅限于VTM中的算法，也可以使用HEVC和H.264中的PLANAR算法。
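作为示意，下面的C++片段按上式给出“先计算PLANAR第二预测值predP1，再与帧间预测值predP加权”的大致流程（函数combineWithPlanar以及pTop、pLeft等参考像素的组织方式为本示例的假设，规范性定义以上述草案为准）：

// pTop[x]对应p(x,-1)，pLeft[y]对应p(-1,y)，pTR=p(nTbW,-1)，pBL=p(-1,nTbH)
int combineWithPlanar(int predP, int xP, int yP,
                      const int* pTop, const int* pLeft, int pTR, int pBL,
                      int log2W, int log2H,
                      int w1, int w2, int log2Sum /* 要求 w1+w2 == 1<<log2Sum */) {
    int nTbW = 1 << log2W, nTbH = 1 << log2H;
    int predV = ((nTbH - 1 - yP) * pTop[xP] + (yP + 1) * pBL) << log2W;   // 垂直分量
    int predH = ((nTbW - 1 - xP) * pLeft[yP] + (xP + 1) * pTR) << log2H;  // 水平分量
    int predP1 = (predV + predH + nTbW * nTbH) >> (log2W + log2H + 1);    // PLANAR第二预测值
    return (w1 * predP + w2 * predP1 + ((w1 + w2) >> 1)) >> log2Sum;      // 与帧间预测值加权
}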
在一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=clip1Cmp((refL(xP,yP)*wL(xP)+refT(xP,yP)*wT(yP)-p(-1,-1)*wTL(xP,yP)+(64-wL(xP)-wT(yP)+wTL(xP,yP))*predP(xP,yP)+32)>>6),
其中,refL(xP,yP)=p(-1,yP),refT(xP,yP)=p(xP,-1),wT(yP)=32>>((yP<<1)>>nScale),wL(xP)=32>>((xP<<1)>>nScale),wTL(xP,yP)=((wL(xP)>>4)+(wT(yP)>>4)),nScale=((Log2(nTbW)+Log2(nTbH)-2)>>2),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,yP),p(-1,-1)分别为位于坐标位置(xP,-1), (-1,yP),(-1,-1)的所述参考像素点的重构值,nTbW和nTbH为所述待处理图像块的宽度和高度,clip1Cmp为钳位操作。
具体的:
可以利用帧内预测中的基于位置的帧内预测联合处理技术(Position-dependent intra prediction combination process)对于帧间预测块进行处理。假设当前像素的坐标为(xP,yP),当前预测像素值为predP(xP,yP),当前帧中已重建像素的像素值为p(x,y),则新的预测像素值为predQ(xP,yP)。
predQ(xP,yP)使用VTM中帧内预测联合处理技术的DC模式的方法获得,如下式:
predQ(x,y)=clip1Cmp((refL(x,y)*wL(x)+refT(x,y)*wT(y)-
                     p(-1,-1)*wTL(x,y)+
                     (64-wL(x)-wT(y)+wTL(x,y))*predP(x,y)+32)
                       >>6)
其中
refL(x,y)=p(-1,y)
refT(x,y)=p(x,-1)
wT(y)=32>>((y<<1)>>nScale)
wL(x)=32>>((x<<1)>>nScale)
wTL(x,y)=((wL(x)>>4)+(wT(y)>>4))
nScale=((Log2(nTbW)+Log2(nTbH)-2)>>2)
其中nTbW和nTbH为当前CU的宽度和高度，p(x,y)表示临近像素，x和y为坐标，详细过程参考JVET-K1001 VVC spec。
需要说明的是，所使用的帧内预测联合处理技术不仅限于VTM中的算法，也可以使用JEM中的算法。
在一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=clip1Cmp((refL(xP,yP)*wL(xP)+refT(xP,yP)*wT(yP)+(64-wL(xP)-wT(yP))*predP(xP,yP)+32)>>6),
其中,refL(xP,yP)=p(-1,yP),refT(xP,yP)=p(xP,-1),wT(yP)=32>>((yP<<1)>>nScale),wL(xP)=32>>((xP<<1)>>nScale),nScale=((Log2(nTbW)+Log2(nTbH)-2)>>2),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,yP)分别为位于坐标位置(xP,-1),(-1,yP)的所述参考像素点的重构值,nTbW和nTbH为所述待处理图像块的宽度和高度,clip1Cmp为钳位操作。
具体的:
可以利用帧内预测中的基于位置的帧内预测联合处理技术(Position-dependent intra prediction combination process)对于帧间预测块进行处理。假设当前像素的坐标为(xP,yP),当前预测像素值为predP(xP,yP),当前帧中已重建像素的像素值为p(x,y),则新的预测像素值为predQ(xP,yP)。
predQ(xP,yP)使用VTM中帧内预测联合处理技术的PLANAR模式的方法获得,如下式:
predQ(x,y)=clip1Cmp((refL(x,y)*wL(x)+refT(x,y)*wT(y)+
                   (64-wL(x)-wT(y))*predP(x,y)+32)
                     >>6)
其中
refL(x,y)=p(-1,y)
refT(x,y)=p(x,-1)
wT(y)=32>>((y<<1)>>nScale)
wL(x)=32>>((x<<1)>>nScale)
nScale=((Log2(nTbW)+Log2(nTbH)-2)>>2)
其中nTbW和nTbH为当前CU的宽度和高度，p(x,y)表示临近像素，x和y为坐标，详细过程参考JVET-K1001 VVC spec。
需要说明的是，所使用的帧内预测联合处理技术不仅限于VTM中的算法，也可以使用JEM中的算法。
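下面给出上述两种基于位置的加权方式（带p(-1,-1)项的DC形式与不带该项的PLANAR形式）的一个示意性C++草图（函数pdpcLikeFilter、按8比特位深钳位以及对移位量的限制均为本示例的假设）：

#include <algorithm>

// usesTopLeft为true时对应带p(-1,-1)项的形式，为false时对应不带该项的形式
int pdpcLikeFilter(int predP, int xP, int yP,
                   int refL /* p(-1,yP) */, int refT /* p(xP,-1) */, int refTL /* p(-1,-1) */,
                   int log2W, int log2H, bool usesTopLeft, int maxVal /* 8bit时为255 */) {
    int nScale = (log2W + log2H - 2) >> 2;
    int wT = 32 >> std::min(31, (yP << 1) >> nScale);   // 限制移位量，避免未定义行为
    int wL = 32 >> std::min(31, (xP << 1) >> nScale);
    int wTL = usesTopLeft ? ((wL >> 4) + (wT >> 4)) : 0;
    int v = (refL * wL + refT * wT - refTL * wTL
             + (64 - wL - wT + wTL) * predP + 32) >> 6;
    return std::min(maxVal, std::max(0, v));             // clip1Cmp的示意实现
}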
S1305、(可选的)对参考像素点进行滤波处理。
示例性的,当所述参考像素点位于所述待处理图像块的上方时,对所述参考像素点的重构值和所述参考像素点的左右相邻像素点的重构值进行加权计算;当所述参考像素点位于所述待处理图像块的左方时,对所述参考像素点的重构值和所述参考像素点的上下相邻像素点的重构值进行加权计算;采用所述加权计算的结果更新所述参考像素点的重构值。
具体的，可以利用帧内预测中的边界滤波技术对帧间预测像素进行滤波，边界滤波技术可以参照HEVC中的方法进行。
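作为示意，上述参考像素平滑可写成如下C++片段（此处采用[1,2,1]/4的权重仅为示例性假设，本申请并不限定具体权重；调用方需保证相邻像素存在）：

// 上方参考像素：用其左右相邻像素加权平滑，refRow[x]为上方一行已重建像素
int smoothTopReference(const int* refRow, int x) {
    return (refRow[x - 1] + 2 * refRow[x] + refRow[x + 1] + 2) >> 2;
}

// 左方参考像素：用其上下相邻像素加权平滑，refCol[y]为左侧一列已重建像素
int smoothLeftReference(const int* refCol, int y) {
    return (refCol[y - 1] + 2 * refCol[y] + refCol[y + 1] + 2) >> 2;
}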
应理解的是,本申请实施例还可以包括在S1306之前或S1306之后,根据运动信息和码流信息,利用除本方法之外的帧间编码技术继续进行帧间预测(S1307)。
需要说明的是，本发明不对编码器和解码器在帧间预测时采用的除本方法之外的帧间编码技术进行限定，可使用HEVC或者VTM中的技术，包括但不仅限于双向光流方法、解码端运动矢量改良方法、亮度补偿技术（LIC）、通用加权预测（GBI）、覆盖块运动补偿（OBMC）、解码端运动矢量推导（DMVD）等技术。运动矢量预测候选列表的构建可以采用HEVC或者VTM中的方法进行，也可采用其他生成运动矢量预测候选列表的方法，不作限定。
在一种可行的实施方式中,在所述基于所述运动信息对所述待处理图像块进行运动补偿之前,还包括:通过第一预设算法对所述运动信息进行初始更新;对应的,所述基于所述运动信息对所述待处理图像块进行运动补偿,包括:基于所述初始更新后的运动信息对所述待处理图像块进行运动补偿。
在另一种可行的实施方式中,在所述获得所述待处理图像块的预测块之后,还包括:通过第二预设算法对所述预测块进行预更新;对应的,所述将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算,包括:将所述一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预更新后的预测值进行加权计算。
在另一种可行的实施方式中,在所述将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算,以更新所述目标像素点的预测值之后,还包括:通过第二预设算法对所述目标像素点的预测值进行更新。
还应理解的是,在获得目标像素点的预测值之后,还可以包括:将最终帧间预测图像与残差图像相加,得到当前块的重建图像。
具体的,如果当前块存在残差,则将残差信息和预测图像相加,获得当前块的重建图像;如果当前块没有残差,则预测图像为当前块的重建图像。上述过程可采用与HEVC或者VTM相同的方法,也可采用其他运动补偿、图像重建方法,不做限定。
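该重建过程可示意如下（函数reconstructBlock为本示例的假设，按8比特位深钳位）：

#include <algorithm>

// 将预测图像与残差相加得到重建图像（示意，8bit）
void reconstructBlock(const int* pred, const int* resi, int* recon,
                      int width, int height, bool hasResidual) {
    for (int i = 0; i < width * height; ++i) {
        int v = hasResidual ? pred[i] + resi[i] : pred[i];   // 无残差时预测图像即为重建图像
        recon[i] = std::min(255, std::max(0, v));            // 钳位到[0,255]
    }
}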
本申请实施例的技术效果在于提高编码压缩效率，PSNR BD-rate性能可提升约0.5%。相比于现有技术，在生成帧间预测像素的过程中，对帧间预测像素进行空域滤波，提高了编码效率。
图15为本申请实施例的示意性框图,涉及一种预测运动信息的解码装置1500,包括:
解析模块1501,用于解析码流,以获得待处理图像块的运动信息;补偿模块1502,用于基于所述运动信息对所述待处理图像块进行运动补偿,以获得所述待处理图像块的预测块;计算模块1503,用于将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算,以更新所述目标像素点的预测值,其中,所述参考像素点与所述目标像素点具有预设的空域位置关系。
在一种可行的实施方式中,所述一个或多个参考像素点包括与所述目标像素点具有相同横坐标且具有预设纵坐标差的已重构像素点,或者,与所述目标像素点具有相同纵坐标且具有预设横坐标差的已重构像素点。
在一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=(w1*predP(xP,yP)+w2*recon(xN-M1,yP))/(w1+w2)，当xN大于0且yN等于0时；
predQ(xP,yP)=(w3*predP(xP,yP)+w4*recon(xP,yN-M2))/(w3+w4)，当xN等于0且yN大于0时；
predQ(xP,yP)=(w5*predP(xP,yP)+w6*recon(xN-M1,yP)+w7*recon(xP,yN-M2))/(w5+w6+w7)，当xN大于0且yN大于0时
其中，所述目标像素点的坐标为(xP,yP)，所述待处理图像块内的左上角像素点的坐标为(xN,yN)，predP(xP,yP)为所述目标像素点的更新前的预测值，predQ(xP,yP)为所述目标像素点的更新后的预测值，recon(xN-M1,yP),recon(xP,yN-M2)分别为位于坐标位置(xN-M1,yP),(xP,yN-M2)的所述参考像素点的重构值，w1,w2,w3,w4,w5,w6,w7为预设常数，M1,M2为预设正整数。
在一种可行的实施方式中,R为2的n次方,其中,n为非负整数,R=w1+w2,或,R=w3+w4,或,R=w5+w6+w7。
在一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=(w1*predP(xP,yP)+w2*recon(xN-M1,yP)+w3*recon(xN-M2,yP))/(w1+w2+w3)，当xN大于0且yN等于0时；
predQ(xP,yP)=(w4*predP(xP,yP)+w5*recon(xP,yN-M3)+w6*recon(xP,yN-M4))/(w4+w5+w6)，当xN等于0且yN大于0时；
predQ(xP,yP)=(w7*predP(xP,yP)+w8*recon(xN-M1,yP)+w9*recon(xN-M2,yP)+w10*recon(xP,yN-M3)+w11*recon(xP,yN-M4))/(w7+w8+w9+w10+w11)，当xN大于0且yN大于0时
其中,所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(xN,yN),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,recon(xN-M1,yP),recon(xN-M2,yP),recon(xP,yN-M3),recon(xP,yN-M4)分别为位于坐标位置(xN-M1,yP),(xN-M2,yP),(xP,yN-M3),(xP,yN-M4)的所述参考像素点的重构值,w1,w2,w3,w4,w5,w6,w7,w8,w9,w10,w11为预设常数,M1,M2,M3,M4为预设正整数。
在一种可行的实施方式中，S为2的n次方，其中，n为非负整数，S=w1+w2+w3，或，S=w4+w5+w6，或，S=w7+w8+w9+w10+w11。
在一种可行的实施方式中,所述一个或多个参考像素点包括以下像素点中的一个或多个:与所述目标像素点具有相同横坐标且与所述待处理图像块的上边缘相邻接的已重构像素点;或者,与所述目标像素点具有相同纵坐标且与所述待处理图像块的左边缘相邻接的已重构像素点;或者,所述待处理图像块的右上角的已重构像素点;或者,所述待处理图像块的左下角的已重构像素点;或者,所述待处理图像块的左上角的已重构像素点。
在一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=(w1*predP(xP,yP)+w2*predP1(xP,yP)+((w1+w2)/2))/(w1+w2)
其中,
predP1(xP,yP)=(predV(xP,yP)+predH(xP,yP)+nTbW*nTbH)>>(Log2(nTbW)+Log2(nTbH)+1),predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH))<<Log2(nTbW),
predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1))<<Log2(nTbH),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度。
在一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=(w1*predP(xP,yP)
                  +w2*predV(xP,yP)
                  +w3*predH(xP,yP)+((w1+w2+w3)/2))/(w1+w2+w3)
其中,
predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH)+nTbH/2)>>Log2(nTbH),
predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1)+nTbW/2)>>Log2(nTbW),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2,w3为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度。
在一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=(((w1*predP(xP,yP))<<(Log2(nTbW)+Log2(nTbH)+1))
                   +w2*predV(xP,yP)
                   +w3*predH(xP,yP)
                   +(((w1+w2+w3)/2)<<(Log2(nTbW)+Log2(nTbH)+1)))
                   /(((w1+w2+w3)<<(Log2(nTbW)+Log2(nTbH)+1)))
其中,
predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH))<<Log2(nTbW),
predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1))<<Log2(nTbH),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2,w3为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度。
在一种可行的实施方式中,w1和w2的和为2的n次方,其中,n为非负整数。
在一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=clip1Cmp((refL(xP,yP)*wL(xP)+refT(xP,yP)*wT(yP)-p(-1,-1)*wTL(xP,yP)+(64-wL(xP)-wT(yP)+wTL(xP,yP))*predP(xP,yP)+32)>>6),
其中,refL(xP,yP)=p(-1,yP),refT(xP,yP)=p(xP,-1),wT(yP)=32>>((yP<<1)>>nScale),wL(xP)=32>>((xP<<1)>>nScale),wTL(xP,yP)=((wL(xP)>>4)+(wT(yP)>>4)),nScale=((Log2(nTbW)+Log2(nTbH)-2)>>2),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为 (0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,yP),p(-1,-1)分别为位于坐标位置(xP,-1),(-1,yP),(-1,-1)的所述参考像素点的重构值,nTbW和nTbH为所述待处理图像块的宽度和高度,clip1Cmp为钳位操作。
在一种可行的实施方式中,根据以下公式更新所述目标像素点的预测值:
predQ(xP,yP)=clip1Cmp((refL(xP,yP)*wL(xP)+refT(xP,yP)*wT(yP)+(64-wL(xP)-wT(yP))*predP(xP,yP)+32)>>6),
其中,refL(xP,yP)=p(-1,yP),refT(xP,yP)=p(xP,-1),wT(yP)=32>>((yP<<1)>>nScale),wL(xP)=32>>((xP<<1)>>nScale),nScale=((Log2(nTbW)+Log2(nTbH)-2)>>2),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,yP)分别为位于坐标位置(xP,-1),(-1,yP)的所述参考像素点的重构值,nTbW和nTbH为所述待处理图像块的宽度和高度,clip1Cmp为钳位操作。
在一种可行的实施方式中,所述计算模块1503还用于:当所述参考像素点位于所述待处理图像块的上方时,对所述参考像素点的重构值和所述参考像素点的左右相邻像素点的重构值进行加权计算;当所述参考像素点位于所述待处理图像块的左方时,对所述参考像素点的重构值和所述参考像素点的上下相邻像素点的重构值进行加权计算;采用所述加权计算的结果更新所述参考像素点的重构值。
在一种可行的实施方式中,所述计算模块1503还用于:通过第一预设算法对所述运动信息进行初始更新;对应的,所述补偿模块1502具体用于:基于所述初始更新后的运动信息对所述待处理图像块进行运动补偿。
在一种可行的实施方式中,所述计算模块1503还用于:通过第二预设算法对所述预测块进行预更新;对应的,所述计算模块1503具体用于:将所述一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预更新后的预测值进行加权计算。
在一种可行的实施方式中,所述计算模块1503还用于:通过第二预设算法对所述目标像素点的预测值进行更新。
在一种可行的实施方式中,所述解析模块1501还用于:解析所述码流,以获得所述待处理图像块的预测模式;确定所述预测模式为融合模式(merge)或跳过模式(skip)。
在一种可行的实施方式中,所述解析模块1501还用于:解析所述码流,以获得所述待处理图像块的更新判别标识信息;确定所述更新判别标识信息指示更新所述待处理图像块的预测块。
在一种可行的实施方式中,所述计算模块1503还用于:获取所述待处理图像块的预设的更新判别标识信息;确定所述更新判别标识信息指示更新所述待处理图像块的预测块。
图16为本申请实施例中的预测运动信息的解码设备1600的一种示意性结构框图。具体的,包括:处理器1601和耦合于所述处理器的存储器1602;所述处理器1601用于执行图13所示的实施例以及各种可行的实施方式。
虽然关于视频编码器100及视频解码器200已描述本申请的特定方面，但应理解，本申请的技术可通过许多其它视频编码和/或解码单元、处理器、处理单元、例如编码器/解码器（CODEC）的基于硬件的编码单元及类似者来应用。此外，应理解，仅作为可行的实施方式而提供关于图13所展示及描述的步骤。即，图13的可行的实施方式中所展示的步骤无需必定按图13中所展示的次序执行，且可执行更少、额外或替代步骤。
此外,应理解,取决于可行的实施方式,本文中所描述的方法中的任一者的特定动作或事件可按不同序列执行,可经添加、合并或一起省去(例如,并非所有所描述的动作或事件为实践方法所必要的)。此外,在特定可行的实施方式中,动作或事件可(例如)经由多线程处理、中断处理或多个处理器来同时而非顺序地执行。另外,虽然出于清楚的目的将本申请的特定方面描述为通过单一模块或单元执行,但应理解,本申请的技术可通过与视频解码器相关联的单元或模块的组合执行。
在一个或多个可行的实施方式中,所描述的功能可以硬件、软件、固件或其任何组合来实施。如果以软件来实施,那么功能可作为一个或多个指令或代码而存储于计算机可读媒体上或经由计算机可读媒体来传输,且通过基于硬件的处理单元来执行。计算机可读媒体可包含计算机可读存储媒体或通信媒体,计算机可读存储媒体对应于例如数据存储媒体的有形媒体,通信媒体包含促进计算机程序(例如)根据通信协议从一处传送到另一处的任何媒体。
以这个方式,计算机可读媒体示例性地可对应于(1)非暂时性的有形计算机可读存储媒体,或(2)例如信号或载波的通信媒体。数据存储媒体可为可由一个或多个计算机或一个或多个处理器存取以检索用于实施本申请中所描述的技术的指令、代码和/或数据结构的任何可用媒体。计算机程序产品可包含计算机可读媒体。
作为可行的实施方式而非限制,此计算机可读存储媒体可包括RAM、ROM、EEPROM、CD-ROM或其它光盘存储装置、磁盘存储装置或其它磁性存储装置、快闪存储器或可用于存储呈指令或数据结构的形式的所要代码且可由计算机存取的任何其它媒体。同样,任何连接可适当地称作计算机可读媒体。例如,如果使用同轴缆线、光纤缆线、双绞线、数字订户线(DSL),或例如红外线、无线电及微波的无线技术而从网站、服务器或其它远端源传输指令,那么同轴缆线、光纤缆线、双绞线、DSL,或例如红外线、无线电及微波的无线技术包含于媒体的定义中。
然而,应理解,计算机可读存储媒体及数据存储媒体不包含连接、载波、信号或其它暂时性媒体,而替代地针对非暂时性有形存储媒体。如本文中所使用,磁盘及光盘包含紧密光盘(CD)、雷射光盘、光盘、数字多功能光盘(DVD)、软性磁盘及蓝光光盘,其中磁盘通常以磁性方式再现数据,而光盘通过雷射以光学方式再现数据。以上各物的组合也应包含于计算机可读媒体的范围内。
可通过例如一个或多个数字信号处理器(DSP)、通用微处理器、专用集成电路(ASIC)、现场可编程门阵列(FPGA)或其它等效集成或离散逻辑电路的一个或多个处理器来执行指令。因此,如本文中所使用,术语“处理器”可指前述结构或适于实施本文中所描述的技术的任何其它结构中的任一者。另外,在一些方面中,可将本文所描述的功能性提供于经配置以用于编码及解码的专用硬件和/或软件模块内,或并入于组合式编码解码器中。同样,技术可完全实施于一个或多个电路或逻辑元件中。
本申请的技术可实施于广泛多种装置或设备中,包含无线手机、集成电路(IC)或IC的集合(例如,芯片组)。本申请中描述各种组件、模块或单元以强调经配置以执行所揭示的技术的装置的功能方面,但未必需要通过不同硬件单元实现。更确切来说,如前文所描述,各种单元可组合于编码解码器硬件单元中或由互操作的硬件单元(包含如前文所描述的一个或多个处理器)结合合适软件和/或固件的集合来提供。
以上所述,仅为本申请示例性的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应该以权利要求的保护范围为准。

Claims (36)

  1. 一种帧间预测方法,其特征在于,包括:
    解析码流,以获得待处理图像块的运动信息;
    基于所述运动信息对所述待处理图像块进行运动补偿,以获得所述待处理图像块的预测块;
    将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算,以更新所述目标像素点的预测值,其中,所述参考像素点与所述目标像素点具有预设的空域位置关系。
  2. 根据权利要求1所述的方法,其特征在于,所述一个或多个参考像素点包括与所述目标像素点具有相同横坐标且具有预设纵坐标差的已重构像素点,或者,与所述目标像素点具有相同纵坐标且具有预设横坐标差的已重构像素点。
  3. 根据权利要求2所述的方法,其特征在于,根据以下公式更新所述目标像素点的预测值:
    predQ(xP,yP)=(w1*predP(xP,yP)+w2*recon(xN-M1,yP))/(w1+w2)，当xN大于0且yN等于0时；
    predQ(xP,yP)=(w3*predP(xP,yP)+w4*recon(xP,yN-M2))/(w3+w4)，当xN等于0且yN大于0时；
    predQ(xP,yP)=(w5*predP(xP,yP)+w6*recon(xN-M1,yP)+w7*recon(xP,yN-M2))/(w5+w6+w7)，当xN大于0且yN大于0时
    其中，所述目标像素点的坐标为(xP,yP)，所述待处理图像块内的左上角像素点的坐标为(xN,yN)，predP(xP,yP)为所述目标像素点的更新前的预测值，predQ(xP,yP)为所述目标像素点的更新后的预测值，recon(xN-M1,yP),recon(xP,yN-M2)分别为位于坐标位置(xN-M1,yP),(xP,yN-M2)的所述参考像素点的重构值，w1,w2,w3,w4,w5,w6,w7为预设常数，M1,M2为预设正整数。
  4. 根据权利要求3所述的方法,其特征在于,R为2的n次方,其中,n为非负整数,R=w1+w2,或,R=w3+w4,或,R=w5+w6+w7。
  5. 根据权利要求2所述的方法,其特征在于,根据以下公式更新所述目标像素点的预测值:
    predQ(xP,yP)=(w1*predP(xP,yP)+w2*recon(xN-M1,yP)+w3*recon(xN-M2,yP))/(w1+w2+w3)，当xN大于0且yN等于0时；
    predQ(xP,yP)=(w4*predP(xP,yP)+w5*recon(xP,yN-M3)+w6*recon(xP,yN-M4))/(w4+w5+w6)，当xN等于0且yN大于0时；
    predQ(xP,yP)=(w7*predP(xP,yP)+w8*recon(xN-M1,yP)+w9*recon(xN-M2,yP)+w10*recon(xP,yN-M3)+w11*recon(xP,yN-M4))/(w7+w8+w9+w10+w11)，当xN大于0且yN大于0时
    其中,所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(xN,yN),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,recon(xN-M1,yP),recon(xN-M2,yP),recon(xP,yN-M3),recon(xP,yN-M4)分别为位于坐标位置(xN-M1,yP),(xN-M2,yP),(xP,yN-M3),(xP,yN-M4)的所述参考像素点的重构值,w1,w2,w3,w4,w5,w6,w7,w8,w9,w10,w11为预设常数,M1,M2,M3,M4为预设正整数。
  6. 根据权利要求5所述的方法,其特征在于,S为2的n次方,其中,n为非负整数,S=w1+w2+w3,或,S=w4+w5+w6,或,S=w7+w8+w9+w10+w11。
  7. 根据权利要求1所述的方法,其特征在于,所述一个或多个参考像素点包括以下像素点中的一个或多个:
    与所述目标像素点具有相同横坐标且与所述待处理图像块的上边缘相邻接的已重构像素点;或者,
    与所述目标像素点具有相同纵坐标且与所述待处理图像块的左边缘相邻接的已重构像素点;或者,
    所述待处理图像块的右上角的已重构像素点;或者,
    所述待处理图像块的左下角的已重构像素点;或者,
    所述待处理图像块的左上角的已重构像素点。
  8. 根据权利要求7所述的方法,其特征在于,根据以下公式更新所述目标像素点的预测值:
    predQ(xP,yP)=(w1*predP(xP,yP)+w2*predP1(xP,yP)
    +((w1+w2)/2))/(w1+w2)
    其中,
    predP1(xP,yP)=(predV(xP,yP)+predH(xP,yP)+nTbW*nTbH)>>(Log2(nTbW)+Log2(nTbH)+1),predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH))<<Log2(nTbW),
    predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1))<<Log2(nTbH),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度;
    或者,根据以下公式更新所述目标像素点的预测值:
    predQ(xP,yP)=(w1*predP(xP,yP)
    +w2*predV(xP,yP)
    +w3*predH(xP,yP)+((w1+w2+w3)/2))/(w1+w2+w3)
    其中,
    predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH)+nTbH/2)>>Log2(nTbH),
    predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1)+nTbW/2)>>Log2(nTbW),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2,w3为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度;
    或者,根据以下公式更新所述目标像素点的预测值:
    predQ(xP,yP)=(((w1*predP(xP,yP))<<(Log2(nTbW)+Log2(nTbH)+1))
    +w2*predV(xP,yP)
    +w3*predH(xP,yP)
    +(((w1+w2+w3)/2)<<(Log2(nTbW)+Log2(nTbH)+1)))
    /(((w1+w2+w3)<<(Log2(nTbW)+Log2(nTbH)+1)))
    其中,
    predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH))<<Log2(nTbW),
    predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1))<<Log2(nTbH),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2,w3为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度。
  9. 根据权利要求8所述的方法,其特征在于,w1和w2的和为2的n次方,其中,n为非负整数。
  10. 根据权利要求7所述的方法,其特征在于,根据以下公式更新所述目标像素点的预测值:
    predQ(xP,yP)=clip1Cmp((refL(xP,yP)*wL(xP)+refT(xP,yP)*wT(yP)-p(-1,-1)*wTL(xP,yP)+(64-wL(xP)-wT(yP)+wTL(xP,yP))*predP(xP,yP)+32)>>6),
    其中,refL(xP,yP)=p(-1,yP),refT(xP,yP)=p(xP,-1),wT(yP)=32>>((yP<<1)>>nScale),wL(xP)=32>>((xP<<1)>>nScale),wTL(xP,yP)=((wL(xP)>>4)+(wT(yP)>>4)),nScale=((Log2(nTbW)+Log2(nTbH)-2)>>2),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,yP),p(-1,-1)分别为位于坐标位置(xP,-1),(-1,yP),(-1,-1)的所述参考像素点的重构值,nTbW和nTbH为所述待处理图像块的宽度和高度,clip1Cmp为钳位操作。
  11. 根据权利要求7所述的方法,其特征在于,根据以下公式更新所述目标像素点的预测值:
    predQ(xP,yP)=clip1Cmp((refL(xP,yP)*wL(xP)+refT(xP,yP)*wT(yP)+(64-wL(xP)-wT(yP))*predP(xP,yP)+32)>>6),
    其中,refL(xP,yP)=p(-1,yP),refT(xP,yP)=p(xP,-1),wT(yP)=32>>((yP<<1)>>nScale),wL(xP)=32>>((xP<<1)>>nScale),nScale=((Log2(nTbW)+Log2(nTbH)-2)>>2),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,yP)分别为位于坐标位置(xP,-1),(-1,yP)的所述参考像素点的重构值,nTbW和nTbH为所述待处理图像块的宽度和高度,clip1Cmp为钳位操作。
  12. 根据权利要求1至11任一项所述的方法,其特征在于,在所述将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算之前,包括:
    当所述参考像素点位于所述待处理图像块的上方时,对所述参考像素点的重构值和所述参考像素点的左右相邻像素点的重构值进行加权计算;
    当所述参考像素点位于所述待处理图像块的左方时,对所述参考像素点的重构值和所述参考像素点的上下相邻像素点的重构值进行加权计算;
    采用所述加权计算的结果更新所述参考像素点的重构值。
  13. 根据权利要求1至12任一项所述的方法,其特征在于,在所述基于所述运动信息对所述待处理图像块进行运动补偿之前,还包括:
    通过第一预设算法对所述运动信息进行初始更新;
    对应的,所述基于所述运动信息对所述待处理图像块进行运动补偿,包括:
    基于所述初始更新后的运动信息对所述待处理图像块进行运动补偿。
  14. 根据权利要求1至13任一项所述的方法,其特征在于,在所述获得所述待处理图像块的预测块之后,还包括:
    通过第二预设算法对所述预测块进行预更新;
    对应的,所述将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算,包括:
    将所述一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预更新后的预测值进行加权计算。
  15. 根据权利要求1至13任一项所述的方法,其特征在于,在所述将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算,以更新所述目标像素点的预测值之后,还包括:
    通过第二预设算法对所述目标像素点的预测值进行更新。
  16. 根据权利要求1至15任一项所述的方法,其特征在于,在所述将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算之前,还包括:
    解析所述码流,以获得所述待处理图像块的预测模式;
    确定所述预测模式为融合模式(merge)或跳过模式(skip)。
  17. 根据权利要求1至16任一项所述的方法,其特征在于,在所述将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算之前,还包括:
    解析所述码流,以获得所述待处理图像块的更新判别标识信息;
    确定所述更新判别标识信息指示更新所述待处理图像块的预测块。
  18. 根据权利要求1至16任一项所述的方法,其特征在于,在所述将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算之前,还包括:
    获取所述待处理图像块的预设的更新判别标识信息;
    确定所述更新判别标识信息指示更新所述待处理图像块的预测块。
  19. 一种帧间预测装置,其特征在于,包括:
    解析模块,用于解析码流,以获得待处理图像块的运动信息;
    补偿模块,用于基于所述运动信息对所述待处理图像块进行运动补偿,以获得所述待处理图像块的预测块;
    计算模块,用于将一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预测值进行加权计算,以更新所述目标像素点的预测值,其中,所述参考像素点与所述目标像素点具有预设的空域位置关系。
  20. 根据权利要求19所述的装置,其特征在于,所述一个或多个参考像素点包括与所述目标像素点具有相同横坐标且具有预设纵坐标差的已重构像素点,或者,与所述目标像素点具有相同纵坐标且具有预设横坐标差的已重构像素点。
  21. 根据权利要求20所述的装置,其特征在于,根据以下公式更新所述目标像素点的预测值:
    predQ(xP,yP)=(w1*predP(xP,yP)+w2*recon(xN-M1,yP))/(w1+w2)，当xN大于0且yN等于0时；
    predQ(xP,yP)=(w3*predP(xP,yP)+w4*recon(xP,yN-M2))/(w3+w4)，当xN等于0且yN大于0时；
    predQ(xP,yP)=(w5*predP(xP,yP)+w6*recon(xN-M1,yP)+w7*recon(xP,yN-M2))/(w5+w6+w7)，当xN大于0且yN大于0时
    其中，所述目标像素点的坐标为(xP,yP)，所述待处理图像块内的左上角像素点的坐标为(xN,yN)，predP(xP,yP)为所述目标像素点的更新前的预测值，predQ(xP,yP)为所述目标像素点的更新后的预测值，recon(xN-M1,yP),recon(xP,yN-M2)分别为位于坐标位置(xN-M1,yP),(xP,yN-M2)的所述参考像素点的重构值，w1,w2,w3,w4,w5,w6,w7为预设常数，M1,M2为预设正整数。
  22. 根据权利要求21所述的装置,其特征在于,R为2的n次方,其中,n为非负整数,R=w1+w2,或,R=w3+w4,或,R=w5+w6+w7。
  23. 根据权利要求20所述的装置,其特征在于,根据以下公式更新所述目标像素点的预测值:
    predQ(xP,yP)=(w1*predP(xP,yP)+w2*recon(xN-M1,yP)+w3*recon(xN-M2,yP))/(w1+w2+w3)，当xN大于0且yN等于0时；
    predQ(xP,yP)=(w4*predP(xP,yP)+w5*recon(xP,yN-M3)+w6*recon(xP,yN-M4))/(w4+w5+w6)，当xN等于0且yN大于0时；
    predQ(xP,yP)=(w7*predP(xP,yP)+w8*recon(xN-M1,yP)+w9*recon(xN-M2,yP)+w10*recon(xP,yN-M3)+w11*recon(xP,yN-M4))/(w7+w8+w9+w10+w11)，当xN大于0且yN大于0时
    其中,所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(xN,yN),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,
    recon(xN-M1,yP),recon(xN-M2,yP),recon(xP,yN-M3),recon(xP,yN-M4)分别为位于坐标位置(xN-M1,yP),(xN-M2,yP),(xP,yN-M3),(xP,yN-M4)的所述参考像素点的重构值,w1,w2,w3,w4,w5,w6,w7,w8,w9,w10,w11为预设常数,M1,M2,M3,M4为预设正整数。
  24. 根据权利要求23所述的装置,其特征在于,S为2的n次方,其中,n为非负整数,S=w1+w2+w3,或,S=w4+w5+w6,或,S=w7+w8+w9+w10+w11。
  25. 根据权利要求19所述的装置,其特征在于,所述一个或多个参考像素点包括以下像素点中的一个或多个:
    与所述目标像素点具有相同横坐标且与所述待处理图像块的上边缘相邻接的已重构像素点;或者,
    与所述目标像素点具有相同纵坐标且与所述待处理图像块的左边缘相邻接的已重构像素点;或者,
    所述待处理图像块的右上角的已重构像素点;或者,
    所述待处理图像块的左下角的已重构像素点;或者,
    所述待处理图像块的左上角的已重构像素点。
  26. 根据权利要求25所述的装置,其特征在于,根据以下公式更新所述目标像素点的预测值:
    predQ(xP,yP)=(w1*predP(xP,yP)+w2*predP1(xP,yP)
    +((w1+w2)/2))/(w1+w2)
    其中,
    predP1(xP,yP)=(predV(xP,yP)+predH(xP,yP)+nTbW*nTbH)>>(Log2(nTbW)+Log2(nTbH)+1),predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH))<<Log2(nTbW),
    predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1))<<Log2(nTbH),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度;
    或者,根据以下公式更新所述目标像素点的预测值:
    predQ(xP,yP)=(w1*predP(xP,yP)
    +w2*predV(xP,yP)
    +w3*predH(xP,yP)+((w1+w2+w3)/2))/(w1+w2+w3)
    其中,
    predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH)+nTbH/2)>>Log2(nTbH),
    predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1)+nTbW/2)>>Log2(nTbW),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2,w3为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度；
    或者,根据以下公式更新所述目标像素点的预测值:
    predQ(xP,yP)=(((w1*predP(xP,yP))<<(Log2(nTbW)+Log2(nTbH)+1))
    +w2*predV(xP,yP)
    +w3*predH(xP,yP)
    +(((w1+w2+w3)/2)<<(Log2(nTbW)+Log2(nTbH)+1)))
    /(((w1+w2+w3)<<(Log2(nTbW)+Log2(nTbH)+1)))
    其中,
    predV(xP,yP)=((nTbH-1-yP)*p(xP,-1)+(yP+1)*p(-1,nTbH))<<Log2(nTbW),
    predH(xP,yP)=((nTbW-1-xP)*p(-1,yP)+(xP+1)*p(nTbW,-1))<<Log2(nTbH),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,nTbH),p(-1,yP),p(nTbW,-1)分别为位于坐标位置(xP,-1),(-1,nTbH),(-1,yP),(nTbW,-1)的所述参考像素点的重构值,w1,w2,w3为预设常数,nTbW和nTbH为所述待处理图像块的宽度和高度。
  27. 根据权利要求26所述的装置，其特征在于，w1和w2的和为2的n次方，其中，n为非负整数。
  28. 根据权利要求25所述的装置,其特征在于,根据以下公式更新所述目标像素点的预测值:
    predQ(xP,yP)=clip1Cmp((refL(xP,yP)*wL(xP)+refT(xP,yP)*wT(yP)-p(-1,-1)*wTL(xP,yP)+(64-wL(xP)-wT(yP)+wTL(xP,yP))*predP(xP,yP)+32)>>6),
    其中,refL(xP,yP)=p(-1,yP),refT(xP,yP)=p(xP,-1),wT(yP)=32>>((yP<<1)>>nScale),wL(xP)=32>>((xP<<1)>>nScale),wTL(xP,yP)=((wL(xP)>>4)+(wT(yP)>>4)),nScale=((Log2(nTbW)+Log2(nTbH)-2)>>2),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,yP),p(-1,-1)分别为位于坐标位置(xP,-1),(-1,yP),(-1,-1)的所述参考像素点的重构值,nTbW和nTbH为所述待处理图像块的宽度和高度,clip1Cmp为钳位操作。
  29. 根据权利要求25所述的装置,其特征在于,根据以下公式更新所述目标像素点的预测值:
    predQ(xP,yP)=clip1Cmp((refL(xP,yP)*wL(xP)+refT(xP,yP)*wT(yP)+(64-wL(xP)-wT(yP))*predP(xP,yP)+32)>>6),
    其中,refL(xP,yP)=p(-1,yP),refT(xP,yP)=p(xP,-1),wT(yP)=32>>((yP<<1)>>nScale),wL(xP)=32>>((xP<<1)>>nScale),nScale=((Log2(nTbW)+Log2(nTbH)-2)>>2),所述目标像素点的坐标为(xP,yP),所述待处理图像块内的左上角像素点的坐标为(0,0),predP(xP,yP)为所述目标像素点的更新前的预测值,predQ(xP,yP)为所述目标像素点的更新后的预测值,p(xP,-1),p(-1,yP)分别为位于坐标位置(xP,-1),(-1,yP)的所述参考像素点的重构值,nTbW和nTbH为所述待处理图像块的宽度和高度,clip1Cmp为钳位操作。
  30. 根据权利要求19至29任一项所述的装置,其特征在于,所述计算模块还用于:
    当所述参考像素点位于所述待处理图像块的上方时,对所述参考像素点的重构值和所述参考像素点的左右相邻像素点的重构值进行加权计算;
    当所述参考像素点位于所述待处理图像块的左方时,对所述参考像素点的重构值和所述参考像素点的上下相邻像素点的重构值进行加权计算;
    采用所述加权计算的结果更新所述参考像素点的重构值。
  31. 根据权利要求19至30任一项所述的装置,其特征在于,所述计算模块还用于:通过第一预设算法对所述运动信息进行初始更新;
    对应的,所述补偿模块具体用于:
    基于所述初始更新后的运动信息对所述待处理图像块进行运动补偿。
  32. 根据权利要求19至31任一项所述的装置,其特征在于,所述计算模块还用于:通过第二预设算法对所述预测块进行预更新;
    对应的,所述计算模块具体用于:
    将所述一个或多个参考像素点的重构值和所述待处理图像块中的目标像素点的预更新后的预测值进行加权计算。
  33. 根据权利要求19至31任一项所述的装置,其特征在于,所述计算模块还用于:通过第二预设算法对所述目标像素点的预测值进行更新。
  34. 根据权利要求19至32任一项所述的装置,其特征在于,所述解析模块还用于:
    解析所述码流,以获得所述待处理图像块的预测模式;
    确定所述预测模式为融合模式(merge)或跳过模式(skip)。
  35. 根据权利要求19至34任一项所述的装置,其特征在于,所述解析模块还用于:
    解析所述码流,以获得所述待处理图像块的更新判别标识信息;
    确定所述更新判别标识信息指示更新所述待处理图像块的预测块。
  36. 根据权利要求19至34任一项所述的装置,其特征在于,所述计算模块还用于:
    获取所述待处理图像块的预设的更新判别标识信息;
    确定所述更新判别标识信息指示更新所述待处理图像块的预测块。
PCT/CN2018/109233 2018-09-21 2018-10-01 一种视频编解码的方法与装置 WO2020056798A1 (zh)

Priority Applications (19)

Application Number Priority Date Filing Date Title
CN202010846274.8A CN112437299B (zh) 2018-09-21 2019-09-20 一种帧间预测方法、装置及存储介质
BR112021001563-9A BR112021001563A2 (pt) 2018-09-21 2019-09-20 método e aparelho de predição inter
CA3106125A CA3106125C (en) 2018-09-21 2019-09-20 Inter prediction method and apparatus
PCT/CN2019/107060 WO2020057648A1 (zh) 2018-09-21 2019-09-20 一种帧间预测方法和装置
JP2021507527A JP7259009B2 (ja) 2018-09-21 2019-09-20 インター予測方法および装置
CA3200616A CA3200616A1 (en) 2018-09-21 2019-09-20 Inter prediction method and apparatus
KR1020217003101A KR102616711B1 (ko) 2018-09-21 2019-09-20 인터 예측 방법 및 장치
KR1020237043657A KR20230175341A (ko) 2018-09-21 2019-09-20 인터 예측 방법 및 장치
CN201980011364.0A CN112655218B (zh) 2018-09-21 2019-09-20 一种帧间预测方法和装置
EP19863868.6A EP3849197A4 (en) 2018-09-21 2019-09-20 METHOD AND DEVICE FOR INTERFRAME PREDICTION
CN202210435828.4A CN115695782A (zh) 2018-09-21 2019-09-20 一种帧间预测方法和装置
SG11202100063YA SG11202100063YA (en) 2018-09-21 2019-09-20 Inter prediction method and apparatus
MX2021002868A MX2021002868A (es) 2018-09-21 2019-09-20 Metodo y aparato de interprediccion.
AU2019343426A AU2019343426B2 (en) 2018-09-21 2019-09-20 Inter prediction method and apparatus
PH12021550058A PH12021550058A1 (en) 2018-09-21 2021-01-11 Inter prediction method and apparatus
US17/249,189 US11647207B2 (en) 2018-09-21 2021-02-23 Inter prediction method and apparatus
US18/150,742 US20230164328A1 (en) 2018-09-21 2023-01-05 Inter prediction method and apparatus
JP2023014532A JP2023065381A (ja) 2018-09-21 2023-02-02 インター予測方法および装置
AU2023222943A AU2023222943A1 (en) 2018-09-21 2023-08-31 Inter prediction method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811109950 2018-09-21
CN201811109950.2 2018-09-21

Publications (1)

Publication Number Publication Date
WO2020056798A1 true WO2020056798A1 (zh) 2020-03-26

Family

ID=69888156

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/109233 WO2020056798A1 (zh) 2018-09-21 2018-10-01 一种视频编解码的方法与装置

Country Status (12)

Country Link
US (2) US11647207B2 (zh)
EP (1) EP3849197A4 (zh)
JP (2) JP7259009B2 (zh)
KR (2) KR102616711B1 (zh)
CN (2) CN110944172B (zh)
AU (2) AU2019343426B2 (zh)
BR (1) BR112021001563A2 (zh)
CA (2) CA3106125C (zh)
MX (1) MX2021002868A (zh)
PH (1) PH12021550058A1 (zh)
SG (1) SG11202100063YA (zh)
WO (1) WO2020056798A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022063265A1 (zh) * 2020-09-28 2022-03-31 华为技术有限公司 帧间预测方法及装置
WO2022093237A1 (en) 2020-10-29 2022-05-05 Telefonaktiebolaget Lm Ericsson (Publ) Multisession remote game rendering

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11277628B2 (en) * 2018-09-24 2022-03-15 Qualcomm Incorporated Restrictions for the worst-case bandwidth reduction in video coding
WO2020077003A1 (en) * 2018-10-10 2020-04-16 Interdigital Vc Holdings, Inc. Affine mode signaling in video encoding and decoding
CN113302918A (zh) * 2019-01-15 2021-08-24 北京字节跳动网络技术有限公司 视频编解码中的加权预测
KR20200115322A (ko) * 2019-03-26 2020-10-07 인텔렉추얼디스커버리 주식회사 영상 부호화/복호화 방법 및 장치
US11671616B2 (en) 2021-03-12 2023-06-06 Lemon Inc. Motion candidate derivation
US11936899B2 (en) * 2021-03-12 2024-03-19 Lemon Inc. Methods and systems for motion candidate derivation
US11917165B2 (en) * 2021-08-16 2024-02-27 Tencent America LLC MMVD signaling improvement
WO2023132509A1 (ko) * 2022-01-04 2023-07-13 현대자동차주식회사 공간적 상관성을 이용하는 디코더측 움직임벡터 유도를 위한 방법
US20230300364A1 (en) * 2022-03-15 2023-09-21 Tencent America LLC Temporal based subblock type motion vector predictor
WO2024010362A1 (ko) * 2022-07-06 2024-01-11 주식회사 케이티 영상 부호화/복호화 방법 및 비트스트림을 저장하는 기록 매체
WO2024012559A1 (en) * 2022-07-14 2024-01-18 Zhejiang Dahua Technology Co., Ltd. Methods, systems, and storage mediums for video encoding and decoding
WO2024051725A1 (en) * 2022-09-06 2024-03-14 Mediatek Inc. Method and apparatus for video coding
US20240137488A1 (en) * 2022-10-20 2024-04-25 Tencent America LLC Local illumination compensation for bi-prediction

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101163249A (zh) * 2007-11-20 2008-04-16 北京工业大学 直流模式预测方法
CN101682781A (zh) * 2008-01-18 2010-03-24 松下电器产业株式会社 图像编码方法以及图像解码方法
US20140078250A1 (en) * 2012-09-19 2014-03-20 Qualcomm Incorporated Advanced inter-view residual prediction in multiview or 3-dimensional video coding
CN104104961A (zh) * 2013-04-10 2014-10-15 华为技术有限公司 一种视频编码方法、解码方法和装置

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1843037B (zh) 2003-08-26 2010-09-22 汤姆森特许公司 用于编码混合内部-相互编码块的方法和装置
CA2655970A1 (en) * 2006-07-07 2008-01-10 Telefonaktiebolaget L M Ericsson (Publ) Video data management
CN100534188C (zh) * 2007-06-08 2009-08-26 北京中星微电子有限公司 一种图像压缩方法和装置
US8582652B2 (en) * 2007-10-30 2013-11-12 General Instrument Corporation Method and apparatus for selecting a coding mode
CN101222646B (zh) * 2008-01-30 2010-06-02 上海广电(集团)有限公司中央研究院 一种适用于avs编码的帧内预测装置及预测方法
CN101877785A (zh) * 2009-04-29 2010-11-03 祝志怡 一种基于混合预测的视频编码方法
US20120163457A1 (en) 2010-12-28 2012-06-28 Viktor Wahadaniah Moving picture decoding method, moving picture coding method, moving picture decoding apparatus, moving picture coding apparatus, and moving picture coding and decoding apparatus
CN102595124B (zh) 2011-01-14 2014-07-16 华为技术有限公司 图像编码解码方法、处理图像数据方法及其设备
CN102238391B (zh) 2011-05-25 2016-12-07 深圳市云宙多媒体技术有限公司 一种预测编码方法、装置
US20130107949A1 (en) * 2011-10-26 2013-05-02 Intellectual Discovery Co., Ltd. Scalable video coding method and apparatus using intra prediction mode
WO2013131929A1 (en) 2012-03-05 2013-09-12 Thomson Licensing Method and apparatus for performing super-resolution
CN103581690A (zh) * 2012-08-09 2014-02-12 联发科技(新加坡)私人有限公司 视频译码方法、视频译码器、视频编码方法和视频编码器
CN103220488B (zh) 2013-04-18 2016-09-07 北京大学 一种视频帧率上转换装置及方法
US20180249156A1 (en) * 2015-09-10 2018-08-30 Lg Electronics Inc. Method for processing image based on joint inter-intra prediction mode and apparatus therefor
WO2017065509A2 (ko) 2015-10-13 2017-04-20 엘지전자 주식회사 영상 코딩 시스템에서 영상 디코딩 방법 및 장치
WO2017075804A1 (en) 2015-11-06 2017-05-11 Microsoft Technology Licensing, Llc Flexible reference picture management for video encoding and decoding
CN116647679A (zh) * 2016-11-29 2023-08-25 成均馆大学校产学协力团 影像编码/解码方法、装置以及对比特流进行存储的记录介质
EP3627836A4 (en) * 2017-06-21 2021-04-07 LG Electronics Inc. METHOD AND DEVICE FOR DECODING IMAGES ACCORDING TO INTRAPREDICTION IN AN IMAGE ENCODING SYSTEM
CN108495135B (zh) 2018-03-14 2020-11-10 宁波大学 一种屏幕内容视频编码的快速编码方法


Also Published As

Publication number Publication date
EP3849197A1 (en) 2021-07-14
CA3200616A1 (en) 2020-03-26
BR112021001563A2 (pt) 2021-04-20
US20230164328A1 (en) 2023-05-25
KR102616711B1 (ko) 2023-12-20
CA3106125A1 (en) 2020-03-26
PH12021550058A1 (en) 2021-09-27
CN110944172A (zh) 2020-03-31
SG11202100063YA (en) 2021-02-25
JP2023065381A (ja) 2023-05-12
EP3849197A4 (en) 2021-11-03
JP7259009B2 (ja) 2023-04-17
MX2021002868A (es) 2021-05-28
CN112655218A (zh) 2021-04-13
AU2019343426B2 (en) 2023-06-01
AU2019343426A1 (en) 2021-02-04
US11647207B2 (en) 2023-05-09
KR20210024165A (ko) 2021-03-04
JP2022500894A (ja) 2022-01-04
AU2023222943A1 (en) 2023-09-21
US20210185328A1 (en) 2021-06-17
CN110944172B (zh) 2024-04-12
KR20230175341A (ko) 2023-12-29
CA3106125C (en) 2024-01-23
CN112655218B (zh) 2022-04-29

Similar Documents

Publication Publication Date Title
US11297340B2 (en) Low-complexity design for FRUC
CN110944172B (zh) 一种帧间预测方法和装置
CN107534766B (zh) 于视频译码中针对子块推导运动信息方法、装置
US10715810B2 (en) Simplified local illumination compensation
CN112437299B (zh) 一种帧间预测方法、装置及存储介质
WO2020057648A1 (zh) 一种帧间预测方法和装置
RU2785725C2 (ru) Устройство и способ внешнего предсказания

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18934089

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18934089

Country of ref document: EP

Kind code of ref document: A1