WO2014058796A1 - Method and apparatus for video coding using reference motion vectors - Google Patents

Method and apparatus for video coding using reference motion vectors

Info

Publication number
WO2014058796A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion vector
current block
frame
candidate motion
block
Prior art date
Application number
PCT/US2013/063723
Other languages
English (en)
Inventor
Yaowu Xu
Paul Gordon Wilkins
Adrian William Grange
Ronald Sebastiaan Bultje
Original Assignee
Google Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/647,076 (US9503746B2)
Priority claimed from US13/974,678 (US9485515B2)
Application filed by Google Inc
Publication of WO2014058796A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44 - Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 - Motion estimation or motion compensation
    • H04N19/56 - Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search

Definitions

  • Digital video streams may represent video using a sequence of frames or still images.
  • Digital video can be used for various applications including, for example, video conferencing, high definition video entertainment, video advertisements, or sharing of user- generated videos.
  • a digital video stream can contain a large amount of data and consume a significant amount of computing or communication resources of a computing device for processing, transmission or storage of the video data.
  • Various approaches have been proposed to reduce the amount of data in video streams, including compression and other encoding techniques.
  • This disclosure relates generally to encoding and decoding video data and more particularly relates to video coding using reference motion vectors.
  • the teachings herein can reduce the number of bits required to encode motion vectors for inter prediction.
  • One method for decoding an encoded video bitstream described herein includes determining, from bits included in the encoded video bitstream, whether a motion vector for a current block to be decoded was encoded using a reference motion vector, the current block being one of a plurality of blocks of a current frame of the encoded video bitstream, identifying, for each previously decoded block of a plurality of previously decoded blocks of the current frame, a candidate motion vector used to inter predict the previously decoded block to define a plurality of candidate motion vectors, identifying, for the current block, a set of reconstructed pixel values corresponding to a set of previously decoded pixels, the set of previously decoded pixels in the current frame, generating, using each candidate motion vector of the plurality of candidate motion vectors, a corresponding set of predicted values for the set of previously decoded pixel values within each reference frame of a plurality of reference frames, determining a respective error value based on a difference between the set of reconstructed pixel values and each set of predicted values, selecting, based on the error values, a reference motion vector from the plurality of candidate motion vectors, and decoding the motion vector for the current block using the reference motion vector.
  • Another method described herein is a method for encoding a video stream, including identifying, for each previously coded block of a plurality of previously coded blocks of a current frame of the video stream, a candidate motion vector used to inter predict the previously coded block to define a plurality of candidate motion vectors, identifying, for a current block to be encoded, a set of reconstructed pixel values corresponding to a set of previously coded pixels, the current block and the set of previously coded pixels in the current frame, generating, using each candidate motion vector of the plurality of candidate motion vectors, a corresponding set of predicted values for the set of previously coded pixel values within each reference frame of a plurality of reference frames, determining a respective error value based on a difference between the set of reconstructed pixel values and each set of predicted values, selecting, based on the error values, a reference motion vector from the plurality of candidate motion vectors, and encoding a motion vector for the current block using the reference motion vector.
  • An example of an apparatus for decoding an encoded video bitstream described herein includes a memory and a processor.
  • the processor is configured to execute instructions stored in the memory to determine, from bits included in the encoded video bitstream, whether a motion vector for a current block to be decoded was encoded using a reference motion vector, the current block being one of a plurality of blocks of a current frame of the encoded video bitstream, identify, for each previously decoded block of a plurality of previously decoded blocks of the current frame, a candidate motion vector used to inter predict the previously decoded block to define a plurality of candidate motion vectors, identify, for the current block, a set of reconstructed pixel values corresponding to a set of previously decoded pixels, the set of previously decoded pixels in the current frame, generate, using each candidate motion vector of the plurality of candidate motion vectors, a corresponding set of predicted values for the set of previously decoded pixel values within each reference frame of a plurality of reference frames, determine a respective error value based on a difference between the set of reconstructed pixel values and each set of predicted values, select, based on the error values, a reference motion vector from the plurality of candidate motion vectors, and decode the motion vector for the current block using the reference motion vector.
  • An example of an apparatus for encoding a video stream described herein also includes a memory and a processor.
  • the processor is configured to execute instructions stored in the memory to identify, for each previously coded block of a plurality of previously coded blocks of a current frame of the video stream, a candidate motion vector used to inter predict the previously coded block to define a plurality of candidate motion vectors, identify, for a current block to be encoded, a set of reconstructed pixel values corresponding to a set of previously coded pixels, the current block and the set of previously coded pixels in the current frame, generate, using each candidate motion vector of the plurality of candidate motion vectors, a corresponding set of predicted values for the set of previously coded pixel values within each reference frame of a plurality of reference frames, determine a respective error value based on a difference between the set of reconstructed pixel values and each set of predicted values, select, based on the error values, a reference motion vector from the plurality of candidate motion vectors, and encode a motion vector for the current block using the reference motion vector.
  • FIG. 1 is a schematic of a video encoding and decoding system in accordance with implementations of this disclosure
  • FIG. 2 is a diagram of an example video stream to be encoded and decoded in accordance with implementations of this disclosure
  • FIG. 3 is a block diagram of a video compression system in accordance with implementations of this disclosure.
  • FIG. 4 is a block diagram of a video decompression system in accordance with implementations of this disclosure.
  • FIG. 5 is a flow diagram of a process for encoding a video stream using reference motion vectors in accordance with an implementation of this disclosure
  • FIG. 6 is a diagram of a frame including a current block used to explain the process of FIG. 5;
  • FIG. 7 is a diagram of the current block of FIG. 6 and a set of previously coded pixels;
  • FIG. 8 is a diagram of a set of predicted pixels for the set of previously coded pixels of FIG. 7;
  • FIG. 9 is a flow diagram of a process for decoding an encoded video stream using reference motion vectors in accordance with implementations of this disclosure.
  • FIG. 10 is a diagram of a series of frames of a first video stream in accordance with an implementation of this disclosure.
  • FIG. 11 is a diagram of a series of frames of a second video stream in accordance with an implementation of this disclosure.
  • Compression schemes related to coding video streams may include breaking each image into blocks and generating a digital video output bitstream using one or more techniques to limit the information included in the output.
  • a received bitstream can be decoded to re-create the blocks and the source images from the limited information.
  • Encoding a video stream, or a portion thereof, such as a frame or a block can include using temporal and spatial similarities in the video stream to improve coding efficiency.
  • a current block of a video stream may be encoded based on a previously encoded block in the video stream by predicting motion and color information for the current block based on the previously encoded block and identifying a difference (residual) between the predicted values and the current block. In this way, only the residual and parameters used to generate it need be added to the bitstream instead of including the entirety of the current block.
  • This technique may be referred to as inter prediction.
  • One of the parameters in inter prediction is a motion vector that represents the spatial displacement of the previously coded block relative to the current block.
  • the motion vector can be identified using a method of motion estimation, such as a motion search.
  • In a motion search, a portion of a reference frame can be translated to a succession of locations to form a prediction block that can be subtracted from a portion of a current frame to form a series of residuals.
  • the X and Y translations corresponding to the location having the smallest residual can be selected as the motion vector.
  • Bits representing the motion vector can be included in the encoded bitstream to permit a decoder to reproduce the prediction block and decode the portion of the encoded video bitstream associated with the motion vector.
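  • As a concrete illustration of such a search, the following minimal Python sketch translates a window of candidate positions in a reference frame and keeps the X/Y offset with the smallest residual magnitude; the function name, the SAD criterion, and the window size are illustrative assumptions, not taken from the disclosure:

        import numpy as np

        def motion_search(ref_frame, cur_block, top, left, search_range=8):
            # Exhaustive search: try every X/Y translation within the window
            # and keep the offset whose prediction block gives the smallest
            # residual magnitude (sum of absolute differences here).
            h, w = cur_block.shape
            best_sad, best_mv = float("inf"), (0, 0)
            for dy in range(-search_range, search_range + 1):
                for dx in range(-search_range, search_range + 1):
                    y, x = top + dy, left + dx
                    if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                        continue  # prediction block would fall outside the reference frame
                    prediction = ref_frame[y:y + h, x:x + w].astype(int)
                    sad = np.abs(cur_block.astype(int) - prediction).sum()
                    if sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            return best_mv, best_sad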
  • a motion vector can be differentially encoded using a reference motion vector, i.e., only the difference between the motion vector and the reference motion vector is encoded.
  • the reference motion vector can be selected from previously used motion vectors in the video stream, for example, the last non-zero motion vector from neighboring blocks. Selecting a previously used motion vector to encode a current motion vector can further reduce the number of bits included in the encoded video bitstream and thereby reduce transmission and storage bandwidth requirements.
  • a reference motion vector can be selected from candidate motion vectors based on a match score.
  • the match score can be based on the results of using candidate motion vectors (e.g., those used by previously decoded blocks) to predict a "trial" set of pixel values for those pixels close to the current block. Since the trial set has already been encoded and reconstructed, the predicted values can be compared against the corresponding reconstructed values to determine the match score. This permits the same procedure to take place at a decoder, where the reconstructed values would be available to calculate match scores before reconstructing the current block.
  • the motion vector of the candidate motion vectors that has the best match score may be selected as the reference motion vector for the actual motion vector of the current block. Fewer bits can be used to code the actual motion vector by coding the small difference in motion vectors, thus improving the overall coding efficiency. Other ways in which the selected motion vector may be used are discussed hereinafter.
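  • A minimal sketch of this selection procedure follows, assuming the reconstructed current frame and a reference frame are numpy arrays; for brevity it scores candidates against a single row above and a single column to the left of the current block, rather than the two rows and two columns used in the examples below, and the function names are illustrative:

        import numpy as np

        def match_score(mv, ref_frame, recon_frame, top, left, size=8):
            # Predict the "trial" set of already reconstructed boundary pixels
            # using candidate motion vector mv, then compare the prediction
            # against the reconstructed values with a SAD score.
            dy, dx = mv
            trial = np.concatenate([recon_frame[top - 1, left:left + size],
                                    recon_frame[top:top + size, left - 1]])
            pred = np.concatenate([ref_frame[top - 1 + dy, left + dx:left + dx + size],
                                   ref_frame[top + dy:top + dy + size, left - 1 + dx]])
            return np.abs(trial.astype(int) - pred.astype(int)).sum()

        def select_reference_mv(candidates, ref_frame, recon_frame, top, left):
            # The candidate with the best (lowest) match score becomes the
            # reference motion vector; a decoder can repeat this computation
            # because it only touches previously reconstructed pixels.
            return min(candidates, key=lambda mv: match_score(mv, ref_frame, recon_frame, top, left))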
  • the candidate motion vectors may be limited to spatial-temporal neighboring motion vectors; that is, the pool of candidate motion vectors may be selected from regions neighboring the current block. In some video coding schemes, particularly those in which video frames are encoded out of order, it is desirable to include in the pool of candidate motion vectors motion information from video frames in the distant past or future. Encoding video frames out of order may occur, for example, in the coding of so-called "alternate reference frames" that are not temporally neighboring to the frames coded immediately before or after them.
  • An alternate reference frame may be a synthesized frame that does not occur in the input video stream, or a duplicate of a frame in the input video stream, that is used for prediction and is generally not displayed following decoding.
  • Such a frame can resemble a video frame in the non-adjacent future.
  • Another example in which out of order encoding may occur is through the use of a so-called “golden reference frame,” which is a reconstructed video frame that may or may not be neighboring to a current video frame and is stored in memory for use as a reference frame until replaced, e.g., by a new golden reference frame.
  • In the implementations described herein, alternate reference frames and golden reference frames (also called alternate frames and golden frames), as well as adjacent video frames, are used to infer motion vectors for a block of a frame of video data by using pixels from the non-adjacent or adjacent video frames to predict reconstructed pixels spatially near the block to be predicted.
  • FIG. 1 is a schematic of a video encoding and decoding system 100 in which aspects of the disclosure can be implemented.
  • An exemplary transmitting station 102 can be, for example, a computer having an internal configuration of hardware including a processor such as a central processing unit (CPU) 104 and a memory 106.
  • CPU 104 is a controller for controlling the operations of transmitting station 102.
  • CPU 104 can be connected to the memory 106 by, for example, a memory bus.
  • Memory 106 can be read only memory (ROM), random access memory (RAM) or any other suitable memory device.
  • Memory 106 can store data and program instructions that are used by CPU 104.
  • Other suitable implementations of transmitting station 102 are possible. For example, the processing of transmitting station 102 can be distributed among multiple devices.
  • a network 108 connects transmitting station 102 and a receiving station 110 for encoding and decoding of the video stream.
  • the video stream can be encoded in transmitting station 102 and the encoded video stream can be decoded in receiving station 110.
  • Network 108 can be, for example, the Internet.
  • Network 108 can also be a local area network (LAN), wide area network (WAN), virtual private network (VPN), a cellular telephone network or any other means of transferring the video stream from transmitting station 102 to, in this example, receiving station 110.
  • Receiving station 110 can, in one example, be a computer having an internal configuration of hardware including a processor such as a CPU 112 and a memory 114.
  • CPU 112 is a controller for controlling the operations of receiving station 110.
  • CPU 112 can be connected to memory 114 by, for example, a memory bus.
  • Memory 114 can be ROM, RAM or any other suitable memory device.
  • Memory 114 can store data and program instructions that are used by CPU 112.
  • Other suitable implementations of receiving station 110 are possible. For example, the processing of receiving station 110 can be distributed among multiple devices.
  • a display 116 configured to display a video stream can be connected to receiving station 110.
  • Display 116 can be implemented in various ways, including by a liquid crystal display (LCD), a cathode-ray tube (CRT), or a light emitting diode display (LED), such as an OLED display.
  • Display 116 is coupled to CPU 112 and can be configured to display a rendering 118 of the video stream decoded in receiving station 110.
  • encoder and decoder system 100 can omit network 108 and/or display 116.
  • a video stream can be encoded and then stored for transmission at a later time by receiving station 110 or any other device having memory.
  • receiving station 110 receives (e.g., via network 108, a computer bus, or some communication pathway) the encoded video stream and stores the video stream for later decoding.
  • additional components can be added to the encoder and decoder system 100.
  • a display or a video camera can be attached to transmitting station 102 to capture the video stream to be encoded.
  • FIG. 2 is a diagram of an example video stream 200 to be encoded and decoded.
  • Video stream 200 (also referred to herein as video data) includes a video sequence 204.
  • video sequence 204 includes a number of adjacent frames 206. While three frames are depicted in adjacent frames 206, video sequence 204 can include any number of adjacent frames.
  • Adjacent frames 206 can then be further subdivided into individual frames, e.g., a single frame 208. Each frame 208 can capture a scene with one or more objects, such as people, background elements, graphics, text, a blank wall, or any other information.
  • single frame 208 can be divided into a set of blocks 210, which can contain data corresponding to, in some of the examples described below, an 8x8 pixel group in frame 208.
  • Block 210 can also be of any other suitable size such as a block of 16x8 pixels, a block of 16x16 pixels, a block of 4x4 pixels, or of any other size.
  • the term 'block' can include a macroblock, a subblock (i.e., a subdivision of a macroblock), a segment, a slice, a residual block or any other portion of a frame.
  • a frame, a block, a pixel, or a combination thereof can include display information such as luminance information, chrominance information, or any other information that can be used to store, modify, communicate, or display the video stream or a portion thereof.
  • FIG. 3 is a block diagram of an encoder 300 in accordance with implementations of this disclosure.
  • Encoder 300 can be implemented, as described above, in transmitting station 102 such as by providing a computer software program stored in memory 106, for example.
  • the computer software program can include machine instructions that, when executed by CPU 104, cause transmitting station 102 to encode video data in the manner described in FIG. 3.
  • Encoder 300 can also be implemented as specialized hardware in, for example, transmitting station 102.
  • Encoder 300 has the following stages to perform the various functions in a forward path (shown by the solid connection lines) to produce an encoded or a compressed bitstream 320 using input video stream 200: an intra/inter prediction stage 304, a transform stage 306, a quantization stage 308, and an entropy encoding stage 310.
  • Encoder 300 may include a reconstruction path (shown by the dotted connection lines) to reconstruct a frame for encoding of future blocks.
  • encoder 300 has the following stages to perform the various functions in the reconstruction path: a dequantization stage 312, an inverse transform stage 314, a reconstruction stage 316, and a loop filtering stage 318.
  • Other structural variations of encoder 300 can be used to encode video stream 200.
  • each frame 208 within video stream 200 can be processed in units of blocks.
  • each block can be encoded using either intra prediction (i.e., within a single frame) or inter prediction (i.e. from frame to frame). In either case, a prediction block can be formed. The prediction block is then subtracted from the block to produce a residual block (also referred to herein as residual).
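  • The subtraction and its inverse are simple element-wise operations; a toy numpy example with arbitrary values:

        import numpy as np

        current = np.array([[120, 121], [119, 118]], dtype=np.int16)     # current block (2x2 for brevity)
        prediction = np.array([[118, 120], [119, 117]], dtype=np.int16)  # intra or inter prediction block
        residual = current - prediction        # only this difference is transformed and encoded
        reconstructed = prediction + residual  # the decoder reverses the subtraction
        assert (reconstructed == current).all()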
  • Intra prediction is also referred to herein as intra-prediction or intra-frame prediction, and inter prediction is also referred to herein as inter-prediction or inter-frame prediction.
  • In intra-prediction, a prediction block can be formed from samples in the current frame that have been previously encoded and reconstructed.
  • In inter-prediction, a prediction block can be formed from samples in one or more previously constructed reference frames, such as the last frame (i.e., the adjacent frame immediately before the current frame), the golden frame or the constructed or alternate frame described above.
  • the prediction block is then subtracted from the current block.
  • the difference, or residual is then encoded and transmitted to decoders.
  • Image or video codecs may support many different intra and inter prediction modes; each block may use one of the prediction modes to obtain a prediction block that is most similar to the block, minimizing the information to be encoded in the residual used to re-create the block.
  • the prediction mode for each block of transform coefficients can also be encoded and transmitted so a decoder can use the same prediction mode(s) to form prediction blocks in the decoding and reconstruction process.
  • the prediction mode may be selected from one of multiple intra-prediction modes.
  • the prediction mode may be selected from one of multiple inter- prediction modes using one or more reference frames including, for example, last frame, golden frame, alternative reference frame, or any other reference frame in an encoding scheme.
  • the inter prediction modes can include, for example, a mode (sometimes called ZERO_MV mode) in which a block from the same location within a reference frame as the current block is used as the prediction block; a mode (sometimes called a NEW_MV mode) in which a motion vector is transmitted to indicate the location of a block within a reference frame to be used as the prediction block relative to the current block; or a mode (sometimes called a NEAR_MV or NEAREST_MV mode) in which no motion vector is transmitted and the current block uses the last or second-to-last non-zero motion vector used by neighboring, previously coded blocks to generate the prediction block. Inter-prediction modes may be used with any of the available reference frames.
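  • The mode names above come from the disclosure, but the following Python sketch of how a decoder might resolve each mode to a motion vector is only illustrative; in particular, the fallback to (0, 0) when no non-zero neighbor motion vector exists is an assumption:

        def motion_vector_for_mode(mode, transmitted_mv, neighbor_mvs):
            # neighbor_mvs: motion vectors of neighboring, previously coded
            # blocks, in coding order.
            nonzero = [mv for mv in neighbor_mvs if mv != (0, 0)]
            if mode == "ZERO_MV":
                return (0, 0)          # use the co-located block in the reference frame
            if mode == "NEW_MV":
                return transmitted_mv  # motion vector is sent in the bitstream
            if mode == "NEAREST_MV":
                return nonzero[-1] if nonzero else (0, 0)  # last non-zero neighbor MV (assumed fallback)
            if mode == "NEAR_MV":
                return nonzero[-2] if len(nonzero) > 1 else (0, 0)  # second-to-last non-zero neighbor MV
            raise ValueError("unknown inter prediction mode: " + mode)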
  • transform stage 306 transforms the residual into a block of transform coefficients in, for example, the frequency domain.
  • block- based transforms include the Karhunen-Loeve Transform (KLT), the Discrete Cosine Transform (DCT), Walsh-Hadamard Transform (WHT), the Singular Value Decomposition Transform (SVD) and the Asymmetric Discrete Sine Transform (ADST).
  • the DCT transforms the block into the frequency domain.
  • the transform coefficient values are based on spatial frequency, with the lowest frequency (e.g., DC) coefficient at the top-left of the matrix and the highest frequency coefficient at the bottom-right of the matrix.
  • Quantization stage 308 converts the block of transform coefficients into discrete quantum values, which are referred to as quantized transform coefficients, using a quantizer value or quantization level.
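  • The following sketch shows the transform and quantization steps on an 8x8 residual using SciPy's floating-point DCT-II; production codecs use integer transform approximations, so this is illustrative only, and the quantizer value is arbitrary:

        import numpy as np
        from scipy.fftpack import dct, idct

        def dct2(block):
            # Separable 2-D DCT: transform columns, then rows.
            return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

        residual = np.random.randint(-32, 32, (8, 8)).astype(float)
        coeffs = dct2(residual)           # lowest-frequency (DC) coefficient lands at coeffs[0, 0]
        q = 16                            # quantizer value / quantization level
        quantized = np.round(coeffs / q)  # discrete quantum values passed to entropy coding
        dequantized = quantized * q       # the dequantization stage reverses the scaling
        derivative_residual = idct(idct(dequantized, axis=1, norm="ortho"), axis=0, norm="ortho")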
  • the quantized transform coefficients are then entropy encoded by entropy encoding stage 310.
  • the entropy-encoded coefficients, together with other information used to decode the block, which can include for example the type of prediction used, motion vectors and quantization value, are then output to compressed bitstream 320.
  • Compressed bitstream 320 can be formatted using various techniques, such as variable length encoding (VLC) and arithmetic coding.
  • Compressed bitstream 320 can also be referred to as an encoded video stream and the terms will be used interchangeably herein.
  • the reconstruction path in FIG. 3 can be used to provide both encoder 300 and a decoder 400 (described below) with the same reference frames to decode compressed bitstream 320.
  • the reconstruction path performs functions that are similar to functions that take place during the decoding process that are discussed in more detail below, including dequantizing the quantized transform coefficients at dequantization stage 312 to generate dequantized transform coefficients and inverse transforming the dequantized transform coefficients at inverse transform stage 314 to produce a derivative residual block (i.e., derivative residual).
  • the prediction block that was predicted at intra/inter prediction stage 304 can be added to the derivative residual to create a reconstructed block.
  • loop filtering stage 318 can be applied to the reconstructed block to reduce distortion such as blocking artifacts.
  • Other variations of encoder 300 can be used. For example, a non-transform based encoder 300 can quantize the residual block directly without transform stage 306. In another implementation, an encoder 300 can have quantization stage 308 and dequantization stage 312 combined into a single stage.
  • FIG. 4 is a block diagram of a decoder 400 in accordance with implementations of this disclosure.
  • Decoder 400 can be implemented, for example, in receiving station 110, such as by providing a computer software program stored in memory for example.
  • the computer software program can include machine instructions that, when executed by CPU 112, cause receiving station 110 to decode video data in the manner described in FIG. 4.
  • Decoder 400 can also be implemented as specialized hardware or firmware in, for example, transmitting station 102 or receiving station 110.
  • Decoder 400, similar to the reconstruction path of encoder 300 discussed above, includes in one example the following stages to perform various functions to produce an output video stream 416 from compressed bitstream 320: an entropy decoding stage 402, a dequantization stage 404, an inverse transform stage 406, an intra/inter prediction stage 408, a reconstruction stage 410, a loop filtering stage 412, and a deblocking filtering stage 414.
  • Other structural variations of decoder 400 can be used to decode compressed bitstream 320.
  • the data elements within compressed bitstream 320 can be decoded by the entropy decoding stage 402 (using, for example, arithmetic coding) to produce a set of quantized transform coefficients.
  • Dequantization stage 404 dequantizes the quantized transform coefficients and inverse transform stage 406 inverse transforms the dequantized transform coefficients to produce a derivative residual that can be identical to that created by reconstruction stage 316 in encoder 300.
  • decoder 400 can use header information decoded from compressed bitstream 320 to use intra/inter prediction stage 408 to create the same prediction block as was created in encoder 300, e.g., at intra/inter prediction stage 304.
  • the reference frame from which the prediction block is generated may be transmitted in the bitstream or constructed by the decoder using information contained within the bitstream.
  • the prediction block can be added to the derivative residual to create a reconstructed block that can be identical to the block created by reconstruction stage 316 in encoder 300.
  • loop filtering stage 412 can be applied to the reconstructed block to reduce blocking artifacts.
  • Deblocking filtering stage 414 can be applied to the reconstructed block to reduce blocking distortion, and the result is output as output video stream 416.
  • Output video stream 416 can also be referred to as a decoded video stream and the terms will be used interchangeably herein.
  • Other variations of decoder 400 can be used to decode compressed bitstream 320. For example, decoder 400 can produce output video stream 416 without deblocking filtering stage 414.
  • FIG. 5 is a flow diagram showing a process 500 for encoding a video stream using reference motion vectors in accordance with an implementation of this disclosure.
  • Process 500 can be implemented in an encoder such as encoder 300 (shown in FIG. 3) and can be implemented, for example, as a software program that can be executed by computing devices such as transmitting station 102 or receiving station 110 (shown in FIG. 1).
  • the software program can include machine-readable instructions that can be stored in a memory such as memory 106 or memory 114, and that can be executed by a processor, such as CPU 104, to cause the computing device to perform process 500.
  • Process 500 can be implemented using specialized hardware or firmware.
  • Some computing devices can have multiple memories, multiple processors, or both.
  • the steps of process 500 can be distributed using different processors, memories, or both.
  • Use of the terms "processor" or "memory" in the singular encompasses computing devices that have one processor or one memory as well as devices that have multiple processors or multiple memories that can each be used in the performance of some or all of the recited steps.
  • process 500 is depicted and described as a series of steps.
  • steps in accordance with this disclosure can occur in various orders and/or concurrently. Additionally, steps in accordance with this disclosure may occur with other steps not presented and described herein. Furthermore, not all illustrated steps may be required to implement a method in accordance with the disclosed subject matter.
  • Process 500 assumes that a stream of video data having multiple frames, each having multiple blocks, is being encoded using a video encoder such as video encoder 300 executing on a computing device such as transmitting station 102.
  • the video data or stream can be received by the computing device in any number of ways, such as by receiving the video data over a network, over a cable, or by reading the video data from a primary memory or other storage device, including a disk drive or removable media such as a CompactFlash (CF) card, Secure Digital (SD) card, or any other device capable of communicating video data.
  • video data can be received from a video camera connected to the computing device operating the encoder. At least some of the blocks within frames are encoded using inter prediction as described in more detail below.
  • process 500 identifies candidate motion vectors from previously coded blocks in the video stream.
  • the previously coded blocks in the video stream can include any block encoded using inter-prediction before the current block, such as a block from a previously coded frame or a block from the same frame as the current block that has been encoded before the current block.
  • the previously coded blocks can include a block above, to the left, or to the above-left of the current block in the same frame.
  • the previously coded blocks can also include, for example, a block from the immediately previous frame (i.e., last frame), a block from the golden frame (described at intra/inter prediction stage 304), a block from any other reference frame, or any combination thereof.
  • the candidate motion vectors are obtained from previously coded blocks that correspond in some way to the current block based on the theory that such blocks, due to the proximity of their pixels to the current block, are likely to have similar motion characteristics to the current block.
  • FIG. 6 is a diagram of a frame 600 including a current block 602 used to explain the process of FIG. 5.
  • Frame 600 includes blocks that have been encoded before current block 602, such as the shaded blocks 604 to the left of or above current block 602 in FIG. 6.
  • the candidate motion vectors may include the motion vector from a block 604A above current block 602, the motion vector from a block 604B to the left of current block 602 and the motion vector from a block 604C to the above-left of current block 602. If any of blocks 604A, 604B or 604C was intra predicted, it would not have a motion vector to contribute to the candidate motion vectors.
  • the candidate motion vectors can also include motion vectors from other frames as illustrated by FIGS. 10 and 11.
  • FIG. 10 is a diagram of a series 1000 of frames F1, F2, ..., Fk-1, Fk of a first video stream in accordance with an implementation of this disclosure.
  • Frame Fk is the current frame to be encoded following the encoding and reconstruction of frames F1, F2, ..., Fk-1.
  • Frame Fk includes the current block referred to in FIG. 5, for example.
  • Frame Fk-1 is temporally adjacent to frame Fk, while frames F1 and F2 are temporally non-adjacent to frame Fk.
  • a frame (e.g., a reference frame) is temporally non-adjacent to another frame when the frames are separated within a temporal sequence of the plurality of frames of the video stream by at least one frame.
  • reconstructed frame F2 may be stored as a golden reference frame as discussed above.
  • Frame Fk-1 is the reconstructed frame stored in a "last" reference frame buffer available for coding blocks of current frame Fk.
  • When frame Fk-1 was itself encoded, the frame preceding it served as the "last" reference frame.
  • a block that spatially corresponds to the current block in last frame Fk-1 may be used to obtain a motion vector for the candidate motion vectors in step 502.
  • a motion vector used for the prediction of the block in last frame Fk-1 at the same pixel location as the current block may be added to the candidate motion vectors.
  • Motion vectors from other blocks in last frame Fk-1, such as those adjacent to the same pixel location as the current block, may also be used as candidate motion vectors in some cases.
  • Pixel locations may be designated by X- and Y-coordinates with the top-left pixel designated as position (0,0) for example.
  • frame F2 is a golden frame available for inter prediction of blocks in current frame Fk. Therefore, one or more of the blocks adjacent to the current block in frame Fk may refer to frame F2 such that its motion vector is included among the candidate motion vectors. Further, one or more motion vectors used for the prediction of the blocks in golden frame F2 may also be added to the candidate motion vectors. For example, a dominant motion vector of the frame could be selected. In some cases, motion vectors of inter predicted blocks in golden frame F2 within a specified spatial neighborhood of, for example, the same pixel position as the current block may be used as candidate motion vectors. Flags may be associated with frame Fk (such as bits in its header) to indicate that a motion vector used in coding frame F2 (e.g., against frame F1) is available to some blocks in frame Fk as a candidate motion vector.
  • FIG. 11 is a diagram of a series 1100 of frames F1, A1, F2, ..., Fk-1, Fk, ..., Fk+m of a second video stream in accordance with an implementation of this disclosure.
  • Series 1100 is similar to series 1000 but includes an alternate reference frame A1. Alternate reference frames may be purely constructed frames and, as such, may not have the same dimensions as the remaining frames in series 1100. For simplicity in this explanation, it is assumed that frame A1 resembles a future video frame Fk+m.
  • When encoding frame A1, motion vectors may be coded against reference frame F1, for example.
  • a motion vector from encoded and reconstructed frame A1 can now be selected and identified to be used as a candidate motion vector in encoding one or more blocks in frame Fk.
  • a motion vector to be included in the candidate motion vectors may be one associated with a spatially corresponding block of alternate reference frame A1 or one associated with another nearby block. Further, any of the blocks adjacent to the current block in frame Fk may refer to frame A1 such that the corresponding motion vector is included among the candidate motion vectors.
  • process 500 includes selecting or identifying a set of reconstructed pixel values corresponding to a set of previously coded pixels at step 504.
  • the set of previously coded pixel values can include one or more rows of pixel values above the current block, or one or more columns of pixel values to the left of the current block, or both. The following examples are described using data in the two rows immediately above and the two columns immediately to the left of the current block. When the scan order is other than raster scan order, other adjacent pixels may be used. In other implementations, data from rows or columns not immediately adjacent to the current block, including data from blocks that are not adjacent to the current block, can be included in the set of previously coded pixel values. Due to the proximity of the set of previously coded pixel values to the current block, it is likely that the current block has similar motion characteristics as the set of previously coded pixel values.
  • The set of reconstructed pixel values corresponding to the set of previously coded pixel values is available, for example, from the reconstruction path of encoder 300 in FIG. 3.
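  • A sketch of gathering this set from a reconstructed frame follows, assuming the block does not sit on the top or left frame boundary; the function name is illustrative:

        import numpy as np

        def boundary_pixels(frame, top, left, block_size=8, rows=2, cols=2):
            # Two rows immediately above and two columns immediately to the
            # left of the block at (top, left), flattened into one vector.
            above = frame[top - rows:top, left:left + block_size]
            side = frame[top:top + block_size, left - cols:left]
            return np.concatenate([above.ravel(), side.ravel()])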
  • FIG. 7 is a diagram of the current block 602 of FIG. 6 and a set 702 of previously coded pixels that may be identified in step 504 of FIG. 5.
  • the values of set 702 form the set of reconstructed pixel values in step 504.
  • Set 702 can include, for example, two rows 702A, 702B of pixels immediately above current block 602 and two columns 702C, 702D of pixels to the immediate left of current block 602.
  • Rows 702A, 702B are associated with block 604A, while columns 702C, 702D are associated with block 604B.
  • Blocks, such as current block 602 and previously coded blocks 604A, 604B, are shown in FIG. 7 to have a set of 8x8 pixels, which can be represented by an 8x8 matrix of pixel values.
  • any other block size can be used.
  • When a block is formed by a matrix of 16x16 pixels, for example, a 16x2 region from the block above and a 2x16 region from the block to the left of the current block may be used.
  • the number of pixels can be altered to include fewer or more pixels.
  • Each of the pixels in rows 702A, 702B and columns 702C, 702D has a reconstructed pixel value resulting from encoding and decoding blocks 604A, 604B, respectively.
  • An error value (also called a match score) can be determined for each candidate motion vector: the candidate motion vector may be used to generate predicted pixel values for the pixels of rows 702A, 702B and columns 702C, 702D for a comparison against the reconstructed pixel values.
  • the motion vector is applied to the selected pixels, which produces predicted pixel values from a reference frame, and then the predicted pixel values are compared against the selected reconstructed pixel values to produce the error value for each motion vector.
  • Step 504 can be implemented, for example, at intra/inter prediction stage 304 of encoder 300 in FIG. 3, and one implementation is explained using FIG. 8.
  • FIG. 8 is a diagram of a set of predicted pixels for the set of previously coded pixels 702 of FIG. 7.
  • current block 602 of current frame 600 is being encoded.
  • the set of predicted values is determined using a candidate motion vector (indicated generally by arrow 802) identified at step 502.
  • the set of previously coded pixels 702 include, for example, two rows 702A, 702B and two columns 702C, 702D described above with reference to FIG. 7.
  • rows 702A, 702B can be predicted by rows 804A, 804B in a reference frame 800 and columns 702C, 702D can be predicted by columns 804C, 804D in reference frame 800.
  • a set of predicted pixels represented by rows 804A, 804B and columns 804C, 804D in reference frame 800 is identified.
  • the values of the predicted pixels form the set of predicted values for comparison with the set of reconstructed pixel values of the pixels of rows 702A, 702B and columns 702C, 702D.
  • the error value can be determined for candidate motion vector 802 by the comparison.
  • block 806 is shown in the same spatial position in reference frame 800 as current block 602 is in current frame 600 to illustrate the pixels of rows 804A, 804B and columns 804C, 804D selected as the prediction pixels based on the candidate motion vector.
  • determining error values for motion vectors acquired from reference frames either temporally adjacent or temporally non-adjacent to the current frame includes using the motion vectors to translate pixels from a reference frame to positions coincident with the set of reconstructed pixels spatially near the current block from the current frame to be predicted.
  • a comparison may be performed by subtracting the translated pixel values from the reconstructed pixel values.
  • the residual or difference for each set of pixel values may be combined to produce a match score or error value that represents the magnitude of the residual: the differences may be summed, the absolute values of the differences summed, the squared differences summed, the differences averaged, the absolute values of the differences averaged, or any other technique for arriving at a relative magnitude of the residuals may be used.
  • the error value can be determined using metrics such as sum of absolute differences (SAD), sum of squared error (SSE), mean squared error (MSE), or any other error metric.
  • the set of predicted values can be compared against the set of reconstructed pixel values to determine a SAD value for each motion vector.
  • different weights can be associated with different pixels in the set of previously coded pixel values. For example, more weight can be given to the row or column of pixels immediately adjacent to the current block, or less weight can be given to the row or column of pixels further away from the current block. Error values may be similarly determined for each candidate motion vector and each possible reference frame as described below.
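  • A sketch of such a weighted error value follows; the weighting scheme shown is chosen purely for illustration:

        import numpy as np

        def weighted_sad(recon, predicted, weights):
            # Each boundary pixel carries its own weight, e.g. a larger weight
            # for the row or column immediately adjacent to the current block.
            diff = np.abs(recon.astype(int) - predicted.astype(int))
            return float((weights * diff).sum())

        # Example: weight 1 for the 8 pixels of the outer row, weight 2 for
        # the 8 pixels of the row immediately adjacent to the block.
        weights = np.array([1] * 8 + [2] * 8)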
  • The reference frame used in step 506 may be a temporally adjacent frame (such as last frame Fk-1) or a temporally non-adjacent frame (such as golden frame F2 or alternate frame A1).
  • each available reference frame is used as part of a rate-distortion loop within an encoder that determines the best coding mode for the current block by comparing the rate (e.g., the bit cost) of each coding mode with the resulting image distortion (e.g., the change in image due to the coding) for each tested mode.
  • the candidate motion vectors may be generated using frames separated by different temporal distances than the current frame and the particular reference frame under consideration. Accordingly, step 506 also includes scaling candidate motion vectors where needed, which is described by reference again to FIGS. 10 and 11.
  • Scaling up or down a motion vector so that it may be applied as a candidate motion vector means adjusting its magnitude.
  • the magnitude of the candidate can be scaled depending upon the results of comparing the temporal distance and direction between the reference frame and the frame including the current block and the temporal distance and direction used to form the candidate motion vector.
  • the temporal distance between frames can be determined by their respective positions in the video stream.
  • If a candidate motion vector is a motion vector that was used to encode a block of frame F2 against frame F1, its magnitude can be used directly for encoding frame Fk against reference frame Fk-1, since frames Fk and Fk-1 are, like frames F1 and F2, one frame apart temporally (that is, they are adjacent frames in the frame sequence).
  • In contrast, a motion vector used in previously coding a block of current frame Fk against Fk-1 will be scaled up using a factor proportional to k-2 to become a candidate motion vector for generation of the prediction pixels when the current block, such as block 602 of FIG. 8, is being evaluated for coding against reference frame F2.
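  • A sketch of this magnitude adjustment, assuming temporal distances measured in frames with sign indicating direction (direction handling beyond the sign is omitted):

        def scale_candidate_mv(mv, candidate_distance, target_distance):
            # candidate_distance: temporal distance (in frames, nonzero) over
            # which the candidate motion vector was originally formed, e.g. 1
            # for a vector coded from F1 to F2.
            # target_distance: temporal distance between the current frame and
            # the reference frame under consideration, e.g. k - 2 for Fk
            # against F2.
            factor = target_distance / candidate_distance
            return (mv[0] * factor, mv[1] * factor)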
  • An alternate reference frame such as frame A1 may be treated similarly to other reference frames, such as the last or golden reference frame. However, since an alternate reference frame may be constructed using portions of multiple frames from multiple temporal positions in the video stream, techniques may be used to determine a temporal position in the video stream that most closely matches the image data included in the alternate frame.
  • a reference motion vector can be selected from the candidate motion vectors identified at step 502.
  • the selection can be based on, for example, selecting the motion vector from the candidate motion vectors associated with the best match score, which can be, for example, the motion vector with the lowest error value among all the candidate motion vectors generated in step 506.
  • Other selection criteria can also be used. For example, if it is determined that candidate motion vector 802 has the lowest error value among the candidate motion vectors, candidate motion vector 802 can be selected as the reference motion vector, which can be used for further processing.
  • the motion vector of the current block can be encoded using the reference motion vector in step 510 before processing begins again for the next block of the current frame.
  • the current block can be encoded according to the process described with respect to FIG. 3.
  • process 500 may be part of a rate-distortion loop used to select the inter prediction mode for the current block to be encoded.
  • the actual motion vector for inter prediction of the current block may be determined through a motion search according to any number of techniques.
  • One use of the reference motion vector may include using the reference motion vector as a starting parameter for the motion search algorithm based on the reasoning that the actual motion vector is likely to be close to those used in selecting the reference motion vector.
  • a motion search may alternatively be performed before or in parallel with process 500.
  • step 510 may include using the reference motion vector to differentially encode the actual motion vector.
  • a difference value can be calculated by subtracting the reference motion vector from the motion vector used to encode the current block.
  • the difference value can be encoded and included in the video stream. Since the reference motion vector was formed using previously encoded and decoded data, the same data are available at a decoder to identify the same reference motion vector as was used at the encoder, so the reference motion vector itself need not be encoded and transmitted for the current block.
  • the decoded difference value can be added to the reference motion vector identified by the decoder as described below to form a motion vector to decode the current block. Note that the reference motion vector is associated with one of the available reference frames used to generate the set of predicted values and hence the error value.
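  • The symmetry can be summarized in a few lines; the motion vector values here are arbitrary:

        # Encoder: only the difference from the reference motion vector is coded.
        actual_mv = (5, -3)      # e.g. found by a motion search
        reference_mv = (4, -3)   # selected from the candidates by match score
        mv_diff = (actual_mv[0] - reference_mv[0], actual_mv[1] - reference_mv[1])

        # Decoder: it derives the same reference_mv from the same previously
        # decoded data, so adding the decoded difference recovers the actual MV.
        decoded_mv = (reference_mv[0] + mv_diff[0], reference_mv[1] + mv_diff[1])
        assert decoded_mv == actual_mv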
  • the reference motion vector may be scaled as described previously so as to generate the difference between the reference motion vector and the actual motion vector.
  • a separate indication of the reference frame used would also be encoded into the bitstream.
  • the reference motion vector may be used to choose a probability distribution to encode the magnitude of the motion vector used to encode the current block.
  • bits can be included in the video stream to identify the encoded magnitude of the motion vector and which predetermined probability distribution to use to form the motion vector based on the encoded magnitude.
  • One or more bits indicating which reference frame to use in decoding the current block may also be included in the bitstream in some variations.
  • the reference motion vector may also be scaled to the extent it is desirable.
  • the reference motion vector may also be used directly in the encoding of the current block. This can occur, for example, when the rate-distortion value involved in coding the current block using the motion vector determined by the motion search is higher than that involved in coding the current block using the reference motion vector.
  • the reference frame used would desirably be the one used in selecting the reference motion vector so no scaling is needed.
  • the decision as to whether or not to use the reference motion vector may be tied to the difference between the reference motion vector and the motion vector resulting from the search. When the difference is small (or zero), the difference in prediction results using the reference motion vector versus the actual motion vector is also small (or zero).
  • When the reference motion vector is used directly to encode the current block, no motion vector would need to be separately encoded at step 510. Instead, one or more bits would be inserted into the bitstream in association with the current block to indicate use of the reference motion vector for encoding.
  • the use of a reference motion vector may reduce the number of bits needed to represent the motion vector needed to decode an inter coded block.
  • In this case, the motion vector used for encoding the current block would not be separately encoded. Bits may be inserted into frame, slice and/or block headers indicating whether reference motion vectors are used and how they are used for encoding the current block.
  • Otherwise, the motion vector found by the motion search or the motion vector differential, and/or the reference frame used in encoding the current block, are also included in the encoded bitstream as appropriate.
  • a prediction block can be determined based on a reference frame by applying a candidate motion vector to the previously coded pixel values of the reference frame.
  • the prediction block can be subtracted from the current block to form a residual that can be further encoded according to the processing described with respect to FIG. 3 and included in an encoded video bitstream.
  • FIG. 9 is a flow diagram of a process 900 for decoding an encoded video stream using reference motion vectors in accordance with implementations of this disclosure.
  • Process 900 can be implemented, for example, as a software program that may be executed by computing devices such as transmitting station 102 or receiving station 110.
  • the software program can include machine-readable instructions that may be stored in a memory such as memory 106 or 114, and that, when executed by a processor, such as CPU 104 or 112, may cause the computing device to perform process 900.
  • Process 900 can be implemented using specialized hardware or firmware. As explained above, some computing devices may have multiple memories or processors, and the steps of process 900 can be distributed using multiple processors, memories, or both.
  • process 900 is depicted and described as a series of steps. However, steps in accordance with this disclosure can occur in various orders and/or concurrently. Additionally, steps in accordance with this disclosure may occur with other steps not presented and described herein. Furthermore, not all illustrated steps may be required to implement a method in accordance with the disclosed subject matter.
  • process 900 substantially conforms to process 500. There are some differences, however, that are pointed out in the following description of process 900. Where steps are substantially similar to those in process 500, reference will be made to the description above.
  • the decoder determines whether the motion vector for the current block was encoded using a reference motion vector.
  • This information can be communicated by reading and decoding bits from an encoded video bitstream that indicate the use of a reference motion vector according to one of the techniques disclosed above.
  • the encoded bitstream (or encoded video data) may have been received by the decoder of a computing device in any number of ways, such as by receiving the video data over a network, over a cable, or by reading the video data from a primary memory or other storage device, including a disk drive or removable media such as a DVD, CompactFlash (CF) card, Secure Digital (SD) card, or any other device capable of communicating a video stream.
  • Step 902 involves decoding at least a portion of the encoded video bitstream to extract the information regarding the motion vector for the current block.
  • This information can be included in a header associated with a current block or a frame header, for example.
  • the information in the one or more headers indicates to the decoder that the current block is to be decoded using inter prediction and that the motion vector used for that inter prediction relies on the reference motion vector as described previously.
  • information in the bitstream could indicate that the actual motion vector used in encoding the current block was differentially encoded using the reference motion vector. Alternatively, information could indicate that the reference motion vector was used directly for encoding the current block.
  • process 900 advances to step 904 to identify candidate motion vectors from previously decoded blocks.
  • the identified candidate motion vectors should be the same as those identified by the encoder in step 502, which may be accomplished by flags as described previously and/or by a priori rules regarding the selection of candidate motion vectors that are available to both the encoder and decoder based on the position of the current block.
  • a set of reconstructed pixel values corresponding to a set of previously decoded pixels is selected or identified at step 906.
  • the set of pixels corresponds to the set of pixels used in step 504 of FIG. 5.
  • the set of reconstructed pixel values in step 906 is the same as the set of reconstructed pixel values in step 504.
  • an error value can be determined for each candidate motion vector based on the set of reconstructed pixel values and a set of predicted values for the set of previously decoded pixel values associated with the candidate motion vector, as described above with respect to step 506 of FIG. 5.
  • the reference motion vector is selected in the same manner as in step 508 of FIG. 5, such as by selecting the candidate motion vector (and associated reference frame) with the lowest error value.
  • The motion vector used to encode the current block can be decoded using the selected reference motion vector at step 912.
  • The decoded motion vector may then be used to decode the current block according to the process of FIG. 4.
  • The decoder can decode the motion vector by, for example, decoding an encoded difference value that can then be added to the reference motion vector selected at step 910 to generate the actual motion vector. Then, the actual motion vector may be used to decode the current block using inter prediction.
  • The reference motion vector can be used to identify a predetermined probability distribution, which can be used to decode a magnitude value of the motion vector used to encode the current block before decoding the current block using the motion vector. Similar to the discussion in step 510 of FIG. 5, this may involve scaling the reference motion vector.
  • The reference motion vector may be used directly as the motion vector to decode the current block after decoding one or more bits indicating that the reference motion vector should be so used (sketched below).
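A similarly rough sketch of step 912, reusing the `MV` type from the sketch above. Here `decoded_diff` and `scale` are assumed stand-ins for the entropy-decoded difference value and the frame-distance scaling mentioned above; they are not identifiers from the patent.

```python
def reconstruct_mv(ref_mv, decoded_diff=None, scale=1):
    """Rebuild the current block's motion vector from the selected reference
    motion vector; reuse it directly when no difference value was coded."""
    base = MV(ref_mv.dx * scale, ref_mv.dy * scale)
    if decoded_diff is None:
        return base  # bits in the bitstream signaled direct reuse
    return MV(base.dx + decoded_diff.dx, base.dy + decoded_diff.dy)

mv = reconstruct_mv(MV(3, -2), decoded_diff=MV(1, 0))  # -> MV(dx=4, dy=-2)
```

The probability-distribution alternative would replace the plain addition of a difference with an entropy decode conditioned on the (possibly scaled) reference motion vector; the overall structure is otherwise the same.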
  • The next block may then be processed.
  • Process 900 may be repeated for each block to be decoded.
  • A frame can be reconstructed from the blocks derived from reconstructed values by intra or inter prediction, or both.
  • The output can be an output video stream, such as the output video stream 416 shown in FIG. 4.
  • A reference motion vector may be selected so as to reduce the number of bits required to encode a motion vector determined by, for example, motion search techniques.
  • The teachings herein take advantage of temporal motion continuity to reduce the number of bits required to transmit motion vector information by referring to motion vectors from adjacent and non-adjacent video frames.
  • The decoder has all of the information that the encoder has for selecting the reference motion vector, allowing that selection to be made without the explicit transfer of further information (illustrated below).
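This symmetry can be made concrete with the sketches above: because the candidate vectors, template pixels, and reference frames are identical on both sides, running the identical routine yields the identical result, so no index or flag choosing among candidates needs to be transmitted. Purely illustrative, using the hypothetical names defined earlier:

```python
# Encoder and decoder each derive the reference motion vector from data
# they both already possess; the two calls agree by construction.
ref_mv_encoder = select_reference_mv(cands, template, recon, refs)
ref_mv_decoder = select_reference_mv(cands, template, recon, refs)
assert ref_mv_encoder == ref_mv_decoder
```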
  • The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion.
  • The term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations.
  • Implementations of transmitting station 102 and/or receiving station 110 can be realized in hardware, software, or any combination thereof.
  • the hardware can include, for example, computers, intellectual property (IP) cores, application-specific integrated circuits (ASICs), programmable logic arrays, optical processors, programmable logic controllers, microcode, microcontrollers, servers, microprocessors, digital signal processors or any other suitable circuit.
  • Transmitting station 102 or receiving station 110 can be implemented using a general purpose computer or general purpose processor with a computer program that, when executed, carries out any of the respective methods, algorithms and/or instructions described herein.
  • Alternatively, a special purpose computer/processor can be utilized, which can contain other hardware for carrying out any of the methods, algorithms, or instructions described herein.
  • Transmitting station 102 and receiving station 110 can, for example, be implemented on computers in a video conferencing system.
  • Transmitting station 102 can be implemented on a server and receiving station 110 can be implemented on a device separate from the server, such as a hand-held communications device.
  • Transmitting station 102 can encode content into an encoded video signal using an encoder 300 and transmit the encoded video signal to the communications device.
  • The communications device can then decode the encoded video signal using a decoder 400.
  • The communications device can decode content stored locally on the communications device, for example, content that was not transmitted by transmitting station 102.
  • Other suitable transmitting station 102 and receiving station 110 implementation schemes are available.
  • Receiving station 110 can be a generally stationary personal computer rather than a portable communications device, and/or a device including an encoder 300 may also include a decoder 400.
  • Implementations of the present invention can take the form of a computer program product accessible from, for example, a tangible computer-usable or computer-readable medium.
  • A computer-usable or computer-readable medium can be any device that can, for example, tangibly contain, store, communicate, or transport the program for use by or in connection with any processor.
  • The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or semiconductor device. Other suitable media are also available.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Techniques for using a reference motion vector to reduce the number of bits required to encode motion vectors in inter prediction are disclosed. One method includes identifying a candidate motion vector used to inter-predict each of a plurality of previously coded blocks so as to define a plurality of candidate motion vectors, identifying a set of reconstructed pixel values corresponding to a set of previously coded pixels for the current block, and generating, using each candidate motion vector, a corresponding set of predicted values for the set of previously coded pixel values within each reference frame of a plurality of reference frames. A respective error value based on a difference between the set of reconstructed pixel values and each set of predicted values is used to select, from among the candidate motion vectors, the candidate motion vector that is used to encode the motion vector for the current block.
PCT/US2013/063723 2012-10-08 2013-10-07 Procédé et appareil pour codage vidéo au moyen de vecteurs de mouvement de référence WO2014058796A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US13/647,076 2012-10-08
US13/647,076 US9503746B2 (en) 2012-10-08 2012-10-08 Determine reference motion vectors
US13/974,678 US9485515B2 (en) 2013-08-23 2013-08-23 Video coding using reference motion vectors
US13/974,678 2013-08-23

Publications (1)

Publication Number Publication Date
WO2014058796A1 true WO2014058796A1 (fr) 2014-04-17

Family

ID=49447838

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/063723 WO2014058796A1 (fr) 2012-10-08 2013-10-07 Procédé et appareil pour codage vidéo au moyen de vecteurs de mouvement de référence

Country Status (1)

Country Link
WO (1) WO2014058796A1 (fr)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017180203A1 (fr) * 2016-04-15 2017-10-19 Google Inc. Codage de type de filtre d'interpolation
CN109155856A (zh) * 2016-06-09 2019-01-04 英特尔公司 用于视频编解码的利用近邻块模式的运动估计的方法和系统
CN109791695A (zh) * 2016-10-13 2019-05-21 Ati科技无限责任公司 基于图像块的运动向量确定所述块的方差
CN110546956A (zh) * 2017-06-30 2019-12-06 华为技术有限公司 一种帧间预测的方法及装置
CN110572677A (zh) * 2019-09-27 2019-12-13 腾讯科技(深圳)有限公司 视频编解码方法和装置、存储介质及电子装置
CN111343464A (zh) * 2018-12-18 2020-06-26 三星电子株式会社 基于减少的候选块执行运动估计的电子电路和电子设备
KR20210002563A (ko) * 2018-05-16 2021-01-08 후아웨이 테크놀러지 컴퍼니 리미티드 비디오 코딩 방법 및 장치
CN113383542A (zh) * 2019-01-11 2021-09-10 Vid拓展公司 使用合并模式运动向量候选对象的改善的帧内平面预测

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010086041A1 (fr) * 2009-01-30 2010-08-05 Gottfried Wilhelm Leibniz Universität Hannover Procédé et appareil de codage et de décodage d'un signal vidéo
GB2477033A (en) * 2009-07-03 2011-07-20 Intel Corp Decoder-side motion estimation (ME) using plural reference frames
WO2012125178A1 (fr) * 2011-03-15 2012-09-20 Intel Corporation Dérivation de vecteurs de mouvement avec accès en mémoire réduit

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010086041A1 (fr) * 2009-01-30 2010-08-05 Gottfried Wilhelm Leibniz Universität Hannover Procédé et appareil de codage et de décodage d'un signal vidéo
GB2477033A (en) * 2009-07-03 2011-07-20 Intel Corp Decoder-side motion estimation (ME) using plural reference frames
WO2012125178A1 (fr) * 2011-03-15 2012-09-20 Intel Corporation Dérivation de vecteurs de mouvement avec accès en mémoire réduit

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
LAROCHE G ET AL: "RD Optimized Coding for Motion Vector Predictor Selection", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 18, no. 9, 1 September 2008 (2008-09-01), pages 1247 - 1257, XP011231739, ISSN: 1051-8215, DOI: 10.1109/TCSVT.2008.928882 *
LI S ET AL: "Direct Mode Coding for Bipredictive Slices in the H.264 Standard", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 15, no. 1, 1 January 2005 (2005-01-01), pages 119 - 126, XP011124673, ISSN: 1051-8215, DOI: 10.1109/TCSVT.2004.837021 *
STEFFEN KAMP ET AL: "Decoder side motion vector derivation for inter frame video coding", IMAGE PROCESSING, 2008. ICIP 2008. 15TH IEEE INTERNATIONAL CONFERENCE, IEEE, PISCATAWAY, NJ, USA, 12 October 2008 (2008-10-12), pages 1120 - 1123, XP031374203, ISBN: 978-1-4244-1765-0 *
STEFFEN KAMP ET AL: "Improving AVC compression performance by template matching with decoder-side motion vector derivation", 84. MPEG MEETING; 28-4-2008 - 2-5-2008; ARCHAMPS; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11),, no. M15375, 25 April 2008 (2008-04-25), XP030043972 *
UEDA M ET AL: "TE1: Refinement Motion Compensation using Decoder-side Motion Estimation", 2. JCT-VC MEETING; 21-7-2010 - 28-7-2010; GENEVA; (JOINT COLLABORATIVETEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL:HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-B032, 18 July 2010 (2010-07-18), XP030007612, ISSN: 0000-0048 *
  • YI-JEN CHIU ET AL: "Self-derivation of motion estimation techniques to improve video coding efficiency", PROCEEDINGS OF SPIE, vol. 7798, 19 August 2010 (2010-08-19), pages 77980Z, XP055096206, ISSN: 0277-786X, DOI: 10.1117/12.862719 *
Y-JEN CHIU ET AL: "TE1: Fast techniques to improve self derivation of motion estimation", 2. JCT-VC MEETING; 21-7-2010 - 28-7-2010; GENEVA; (JOINT COLLABORATIVETEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL:HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-B047, 28 July 2010 (2010-07-28), XP030007627, ISSN: 0000-0048 *
YUE WANG ET AL: "Advanced spatial and Temporal Direct Mode for B picture coding", VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2011 IEEE, IEEE, 6 November 2011 (2011-11-06), pages 1 - 4, XP032081400, ISBN: 978-1-4577-1321-7, DOI: 10.1109/VCIP.2011.6116006 *
Y-W HUANG ET AL: "TE1: Decoder-side motion vector derivation with switchable template matching", 2. JCT-VC MEETING; 21-7-2010 - 28-7-2010; GENEVA; (JOINT COLLABORATIVETEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL:HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/,, no. JCTVC-B076, 23 July 2010 (2010-07-23), XP030007656, ISSN: 0000-0046 *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10602176B2 (en) 2016-04-15 2020-03-24 Google Llc Coding interpolation filter type
USRE49615E1 (en) 2016-04-15 2023-08-15 Google Llc Coding interpolation filter type
WO2017180203A1 (fr) * 2016-04-15 2017-10-19 Google Inc. Codage de type de filtre d'interpolation
CN109155856A (zh) * 2016-06-09 2019-01-04 英特尔公司 用于视频编解码的利用近邻块模式的运动估计的方法和系统
CN109155856B (zh) * 2016-06-09 2023-10-24 英特尔公司 用于视频编解码的利用近邻块模式的运动估计的方法和系统
US11616968B2 (en) 2016-06-09 2023-03-28 Intel Corporation Method and system of motion estimation with neighbor block pattern for video coding
CN109791695A (zh) * 2016-10-13 2019-05-21 Ati科技无限责任公司 基于图像块的运动向量确定所述块的方差
CN109791695B (zh) * 2016-10-13 2023-06-20 Ati科技无限责任公司 基于图像块的运动向量确定所述块的方差
US11197018B2 (en) 2017-06-30 2021-12-07 Huawei Technologies Co., Ltd. Inter-frame prediction method and apparatus
CN110546956A (zh) * 2017-06-30 2019-12-06 华为技术有限公司 一种帧间预测的方法及装置
CN110546956B (zh) * 2017-06-30 2021-12-28 华为技术有限公司 一种帧间预测的方法及装置
KR102542196B1 (ko) * 2018-05-16 2023-06-12 후아웨이 테크놀러지 컴퍼니 리미티드 비디오 코딩 방법 및 장치
EP3783888A4 (fr) * 2018-05-16 2021-08-04 Huawei Technologies Co., Ltd. Procédé et appareil de codage et de décodage vidéo
RU2767993C1 (ru) * 2018-05-16 2022-03-22 Хуавей Текнолоджиз Ко., Лтд. Устройство и способ кодирования видео
CN112653895B (zh) * 2018-05-16 2022-08-02 华为技术有限公司 一种运动矢量获取方法,装置,设备及计算机可读存储介质
AU2018423422B2 (en) * 2018-05-16 2023-02-02 Huawei Technologies Co., Ltd. Video coding method and apparatus
CN112653895A (zh) * 2018-05-16 2021-04-13 华为技术有限公司 一种运动矢量获取方法,装置,设备及计算机可读存储介质
CN112399184A (zh) * 2018-05-16 2021-02-23 华为技术有限公司 一种运动矢量获取方法,装置,设备及计算机可读存储介质
KR20210002563A (ko) * 2018-05-16 2021-01-08 후아웨이 테크놀러지 컴퍼니 리미티드 비디오 코딩 방법 및 장치
US11765378B2 (en) 2018-05-16 2023-09-19 Huawei Technologies Co., Ltd. Video coding method and apparatus
CN111343464A (zh) * 2018-12-18 2020-06-26 三星电子株式会社 基于减少的候选块执行运动估计的电子电路和电子设备
CN113383542A (zh) * 2019-01-11 2021-09-10 Vid拓展公司 使用合并模式运动向量候选对象的改善的帧内平面预测
CN110572677A (zh) * 2019-09-27 2019-12-13 腾讯科技(深圳)有限公司 视频编解码方法和装置、存储介质及电子装置
CN110572677B (zh) * 2019-09-27 2023-10-24 腾讯科技(深圳)有限公司 视频编解码方法和装置、存储介质及电子装置

Similar Documents

Publication Publication Date Title
US10986361B2 (en) Video coding using reference motion vectors
US10484707B1 (en) Dynamic reference motion vector coding mode
US10142652B2 (en) Entropy coding motion vector residuals obtained using reference motion vectors
EP3590258B1 (fr) Sélection de noyau de transformée et codage entropique
US10555000B2 (en) Multi-level compound prediction
US9826250B1 (en) Transform-domain intra prediction
US20170347094A1 (en) Block size adaptive directional intra prediction
WO2014058796A1 (fr) Procédé et appareil pour codage vidéo au moyen de vecteurs de mouvement de référence
US9503746B2 (en) Determine reference motion vectors
US10582212B2 (en) Warped reference motion vectors for video compression
US9615100B2 (en) Second-order orthogonal spatial intra prediction
CN107231557B (zh) 用于在视频编码中的高级帧内预测的递归块分区中的智能重排的编、解码方法及装置
US8396127B1 (en) Segmentation for video coding using predictive benefit
WO2018132150A1 (fr) Prédiction composée pour codage vidéo
US9756346B2 (en) Edge-selective intra coding
US10419777B2 (en) Non-causal overlapped block prediction in variable block size video coding
US11785226B1 (en) Adaptive composite intra prediction for image and video compression

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13779698

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13779698

Country of ref document: EP

Kind code of ref document: A1